From the inbox

Questions people ask me.

Every week, by Slack DM, by LinkedIn message, by the inbox at vladsnewsletter.com. The same dozen questions show up. Here are the answers in one place — short version, longer version, and the chapter that goes deep.

Type your question, or browse by category. If a question that should be here isn't, the inbox is open.

What's the difference between Chat, Cowork, Claude Code, and the API?

Same model, three vehicles, plus the engine room.

Chat is the sedan — phone, casual, no connectors. Cowork is the SUV — daily ops, talks to Slack/HubSpot/calendar, scheduled tasks while you sleep. Claude Code is the pickup — repo work, swarms, hooks, real engineering. The Anthropic SDK is the engine — programmatic, what you reach for when CC and Cowork stop being the answer.

How do I install Claude Code and get my first prompt running?

Five minutes if your terminal works.

npm install -g @anthropic-ai/claude-code, then claude --version, cd into a repo, type claude, run /init, and edit the auto-generated CLAUDE.md by hand. By minute 11 you've shipped a code change.
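As a terminal session (the repo path is a placeholder; everything else is the steps above):

```shell
# Install the Claude Code CLI globally (needs Node.js and npm)
npm install -g @anthropic-ai/claude-code

# Confirm the binary is on your PATH
claude --version

# Run it from inside a repo so it can see your project
cd ~/projects/my-repo    # placeholder path
claude

# Inside the interactive session:
#   /init   -> generates a starter CLAUDE.md for the repo
# Then open that CLAUDE.md in your editor and tune it by hand.
```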

What should I do tomorrow morning if I want to actually use this?

Tick the boxes on the day-zero page.

Twelve concrete steps from clean machine to first scheduled task. GitHub → Vercel → Claude Pro → install CC → install Cowork → vault → CLAUDE.md → first prompt → first skill → first cron → first swarm → security tonight. Each step links to its deep chapter.

How do I create a skill?

If you've explained it three times, codify it.

A skill is a folder. SKILL.md is the only required file. Description on top — that's the trigger. Body underneath with steps, output format, anti-patterns. Drop in ~/.claude/skills/, test by typing the natural-language phrase, iterate the description until it fires when you want it.
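A minimal sketch of that folder's one required file (the front-matter shape and the task itself are illustrative, not a spec):

```markdown
---
name: call-writeup
description: Turn a pasted call transcript into our five-line CRM note. Use when the user pastes a transcript or says "write up this call".
---

# Call write-up

1. Extract attendees, pain points, next steps, and an owner per step.
2. Output exactly five lines in CRM-note format, no preamble.

Anti-patterns: inventing next steps; exceeding five lines; summarizing small talk.
```

The description is what gets matched against your phrasing, so when the skill fires too often or not at all, iterate the description, not the body.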

How do I use swarms?

One conductor, N subagents, one tool batch.

In Claude Code: 'Spawn 3 Explore subagents in parallel — one looks at X, one at Y, one at Z.' One message, multiple Agent calls = true parallelism. Fan-out for breadth, pipeline for depth, map-reduce for scale, adversarial for truth.
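Spelled out, a fan-out prompt in a Claude Code session might read like this (the three targets are placeholders for your own repo):

```
Spawn 3 Explore subagents in parallel:
- one maps how auth works in src/auth/
- one lists every caller of the deprecated client in src/api/
- one checks which of those paths have test coverage
One merged summary when all three report back. No edits yet.
```

The closing constraint matters: explore-then-report keeps three agents from making conflicting edits in parallel.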

How do I make cron jobs / scheduled tasks?

Saved instruction + trigger + delivery target.

In Cowork: type 'every weekday at 7 AM' and the UI generates the cron. In CC headless: claude --print piped through a real crontab. Morning briefing is the highest-leverage first one. Run for two weeks before adding a second.
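The headless variant as a real crontab entry (the prompt and the mail address are placeholders; claude --print runs one non-interactive turn and writes to stdout):

```
# crontab -e: weekdays at 7 AM, pipe the briefing to yourself
0 7 * * 1-5 claude --print "Morning briefing: yesterday's commits, today's calendar, open PRs" | mail -s "Morning briefing" you@example.com
```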

How do I make agents like Rick — OpenClaw, NemoClaw, Hermes?

Pick a preset, port to a CC subagent when you outgrow it.

Rick is the training-wheels surface. Pick the archetype that matches the job (NemoClaw for sales, OpenClaw for research, Hermes for ops), install via meetrick.ai/install, give it 30 days. Graduate to a CC subagent only when you can name what the preset can't do.

How expensive are agents really, right now?

$50–2,000 per month per archetype, or $200–800 per CC subagent in tokens.

Rick presets sit at $50–150/mo per seat for Pro tier; $300–1,200/mo for a small team plan. Custom CC subagents cost $200–800/mo in tokens depending on volume. Compare to a senior eng at $120K/yr fully loaded — even at 10B tokens/mo you're at 5–10x leverage.

How do I use Codex AND Claude Code together?

Codex is the night shift. CC is the day shift.

Codex runs against Sentry/GitHub 24/7 — incident response, regression catching, simple fixes from logs. CC is your day driver — features, refactors, anything needing a strong opinion. They share the same .mcp.json and CLAUDE.md. Branch protection keeps Codex off main.
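The shared contract is a .mcp.json at the repo root; a sketch with illustrative server entries (the package names may differ in your setup):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    },
    "sentry": {
      "command": "npx",
      "args": ["-y", "@sentry/mcp-server"]
    }
  }
}
```

Both tools read the same file, so an MCP server added for one is available to the other; CLAUDE.md plays the same role for conventions and house rules.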

How do I make an agent browse or log in to social networks?

Playwright + Claude. Save state.json. Read the kill switch chapter twice.

Use Playwright (headless browser) with the Anthropic SDK in a loop: read DOM → reason → click → verify. Save the session state once, reuse forever. CAPTCHA is real. ToS is real. Have a kill switch before the first run, not after the first wrong post.
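The "save the session state once" step has a one-liner in Playwright's own CLI (example.com stands in for the real site):

```shell
# One-time, interactive: log in by hand in the browser window that opens,
# then close it. Playwright writes cookies + localStorage to state.json.
npx playwright codegen --save-storage=state.json https://example.com/login

# Later automated runs load state.json as the browser context's
# storage_state, so the agent reuses the session instead of re-logging-in.
```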

How do I let an agent write on my behalf in Slack?

Voice clone in a SKILL.md, plus a hard human-approval gate.

Persona agent = a SKILL.md that encodes your voice rules + a hard rule: agent drafts, human approves, agent posts. Audit log every send. Four hard NEVERs: deals, hires, breakups, condolences — never agent-only. The day someone notices, the gate is what saves you.

When should I leave Claude Code for LangChain or CrewAI?

When you need 5+ agents with strict handoff contracts and persistent state.

CC's subagent system covers ~80% of orchestration. Reach for CrewAI when you need explicit task dependencies. Reach for LangGraph when you need a state machine that survives a process restart. AutoGen for research prototypes only. Anthropic SDK direct is the floor.

How do I keep my API keys safe?

Seven non-negotiables. Treat any key pasted into a chat as compromised.

Never paste in a prompt. Never commit. Use scoped keys (read-only when possible). Different keys per env. Rotate quarterly. Use a secrets manager. Monitor usage. The bot in Sofia doesn't take a coffee break.
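"Never commit" can be mechanical. A minimal leak check, sketched as a shell function — it matches the sk-ant- prefix Anthropic keys start with:

```shell
# Does a blob of text contain something shaped like an Anthropic API key?
# ("sk-ant-" is the key prefix.) Returns 0 if a likely key is present.
looks_like_key() {
  printf '%s' "$1" | grep -q 'sk-ant-'
}

# Wire the same grep into a pre-commit hook so a pasted key never lands in git:
#   git diff --cached | grep -q 'sk-ant-' && { echo "key in staged diff"; exit 1; }
```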

What's --dangerously-skip-permissions and when can I use it?

On a sandbox, not on your laptop. Ever.

It disables every approval gate. Use it only inside a Docker container, devcontainer, GitHub Codespace, or any disposable environment where the worst case is 'rebuild the container.' On your main machine with prod credentials sitting in .env, it's the kind of move that ends weekends.
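A disposable sandbox along those lines (the image, repo URL, and key variable are illustrative; cloning inside the container, rather than mounting your working copy, is what keeps the worst case at "rebuild the container"):

```shell
# Throwaway container: a scoped, sandbox-only key comes in via the
# environment; the repo is a fresh clone; nothing on the host is writable.
docker run --rm -it \
  -e ANTHROPIC_API_KEY="$SANDBOX_ONLY_KEY" \
  node:20 \
  bash -c 'git clone https://github.com/you/repo /work && cd /work &&
           npm install -g @anthropic-ai/claude-code &&
           claude --dangerously-skip-permissions'
```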

How do I write evals for my AI workflows?

Three categories. Smoke, regression, golden-set. Schedule them like tests.

An eval is a test for your AI workflow. If you don't have one, you don't have a workflow — you have a hope. Smoke test the happy path. Regression-check on the failure modes you've already fixed. Golden-set lock the known-good outputs. Run them on cron 30 min before the real workflow fires.
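A golden-set check can be a few lines of shell. This sketch assumes your workflow is a command that prints its output; the file paths are placeholders:

```shell
# Golden-set eval sketch: run the workflow, diff against a known-good
# output captured earlier, fail loudly on drift.
golden_eval() {
  workflow_cmd="$1"   # command that prints the workflow's output
  golden_file="$2"    # known-good output captured earlier
  $workflow_cmd > /tmp/eval_out.txt
  if ! diff -q "$golden_file" /tmp/eval_out.txt > /dev/null; then
    echo "EVAL FAIL: output drifted from golden set" >&2
    return 1
  fi
  echo "EVAL PASS"
}
```

Put the call on cron 30 minutes before the real workflow fires, and page yourself on a nonzero exit.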

Why does my agent corrupt the document after I delegate 20 edits in a row?

Bursty drift — invisible to vibes, visible only to a content-diff eval.

Microsoft Research's DELEGATE-52 (May 2026) ran 19 frontier models across 52 domains and found the top three corrupt ~25% of a document after 20 sequential edits; the average across all 19 is ~50%. Losses are bursty: ~80% come from rare single-step drops of 10–30%. Plugging in tools (search/code-exec/file-edit) makes it ~6% worse on average. Python is the safe domain; prose, recipes, music, financial reports are the worst. Operator move: break long edit chains into shorter sessions, run a content-checksum eval on cron, and don't reach for agentic tools by default in editing workflows.
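A crude but effective content-diff tripwire, sketched in shell: compare each revision's size to the previous one and flag the sudden large drop that bursty failures produce (the 10% threshold is an assumption; tune it to your documents):

```shell
# Drift tripwire sketch: flag any revision that shrinks the document
# by more than 10% versus the previous revision, before the next
# delegated edit compounds the loss.
check_drift() {
  before="$1"; after="$2"
  b=$(wc -c < "$before")
  a=$(wc -c < "$after")
  if [ $((a * 10)) -lt $((b * 9)) ]; then
    echo "DRIFT: document shrank from $b to $a bytes" >&2
    return 1
  fi
  return 0
}
```

Byte count catches the big single-step drops; pair it with a real diff against the last known-good revision if you also care about silent rewrites.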

How do I get my team to actually use this?

Adoption is a gravity problem, not a training problem.

The 4-3-2-2-1 distribution shows up on day 1 — 4 power users, 3 watchers, 2 holdouts, 2 'just send me the briefing,' 1 will-not-use. Make the AI path the path of least resistance, or the team routes around it. Skills as policy, not productivity. Track tab-count, not usage.

What's your stack? What should I copy?

Five tools, eighty percent of the output.

Claude (CC + Cowork) for ~80% of tokens. Gemini AI Studio for million-token bulk-PDF work. ChatGPT mobile + Codex on graveyard shift for ~7%. ElevenLabs for voice. The rest is noise dressed up as productivity.

Stay close

Edition 3 lands when this list says it does.

No course. No paywall. Operator playbooks weekly. 10K+ subscribers.