Glossary
Vocabulary you'll need.
31 terms. Linked from inside the chapters.
- Agent
- An LLM in a loop with tools, working toward a goal across multiple turns. The bar isn't "uses tools once" — it's "decides for itself what to do next."
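That loop can be sketched in a few lines. A minimal sketch, assuming a stubbed model: `call_model`, `TOOLS`, and `run_agent` are illustrative names, not any real API.

```python
# Minimal agent loop: the model decides the next action until it finishes.
# call_model is a stub; in practice it would be an LLM API call.
def call_model(history):
    if not any(msg.startswith("tool:") for msg in history):
        return {"action": "tool", "name": "read_file", "args": "notes.txt"}
    return {"action": "finish", "answer": "summary of notes.txt"}

TOOLS = {"read_file": lambda path: f"contents of {path}"}  # toy tool table

def run_agent(goal, max_turns=5):
    history = [f"goal: {goal}"]
    for _ in range(max_turns):
        step = call_model(history)
        if step["action"] == "finish":               # model decided it's done
            return step["answer"]
        result = TOOLS[step["name"]](step["args"])   # your code runs the tool
        history.append(f"tool: {result}")            # feed the result back in
    return "gave up"
```

The point is the shape: the model picks the next step, your code executes it, the result goes back into context.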
- Claude Code
- Anthropic's CLI tool. `claude` in your terminal. Reads/writes your repo, spawns subagents, talks to MCP servers.
- CLAUDE.md
- A markdown file Claude Code loads into every session for your project. Working memory. Keep it under 100 lines.
- Connector
- Friendly UI for an MCP server. In Cowork & the Claude apps, "connector" is what you see; MCP is what's underneath.
- Context window
- The total tokens an instance can hold at once. The bigger the window, the more you can stuff in — but more isn't free.
- Cowork
- Anthropic's desktop app for knowledge work. Connectors, skills, scheduled tasks, mounted folders.
- Cron
- A scheduling syntax. `0 7 * * 1-5` = 7 AM weekdays. Most modern surfaces hide it behind a UI.
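The five fields are minute, hour, day-of-month, month, day-of-week. A hedged sketch that checks just the `0 7 * * 1-5` case against a timestamp (a real cron parser handles all five fields generally):

```python
from datetime import datetime

# Matches 0 7 * * 1-5: minute 0, hour 7, Monday through Friday.
# Cron counts Sunday as 0; Python's weekday() counts Monday as 0.
def matches_7am_weekdays(dt):
    return dt.minute == 0 and dt.hour == 7 and dt.weekday() < 5

matches_7am_weekdays(datetime(2026, 1, 5, 7, 0))  # a Monday: matches
matches_7am_weekdays(datetime(2026, 1, 4, 7, 0))  # a Sunday: doesn't
```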
- Eval
- A test for your AI workflow. If you don't have evals, you don't have a workflow — you have a hope.
- Function calling
- A model output format that says "I want to call X with Y." Your code does the actual call.
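In code, that's structured output plus a dispatch table on your side. A sketch with a generic JSON shape (not any particular vendor's schema; `get_weather` is a made-up function):

```python
import json

# The model emits "I want to call X with Y" as structured output...
model_output = json.dumps({"name": "get_weather", "arguments": {"city": "Oslo"}})

# ...and your code does the actual call.
def get_weather(city):
    return f"Forecast for {city}: rain"  # stand-in for a real API call

FUNCTIONS = {"get_weather": get_weather}

call = json.loads(model_output)
result = FUNCTIONS[call["name"]](**call["arguments"])
```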
- Hallucination
- When the model confidently invents something. Less common in 2026 than 2023, still possible. Verify destructive actions.
- Headless mode
- Running an AI tool without an interactive UI. `claude --print "..."` is the CC version. Use in CI and cron.
- Hook
- A shell script Claude Code runs before/after tool calls. PreToolUse, PostToolUse, Stop. Format-on-save, alert-on-finish.
- Inference
- The act of running a model on an input to produce an output. Each chat turn is one inference.
- Instance
- A single context window with a system prompt, tools, and history. When the window closes, the instance dies.
- Knowledge cutoff
- The date where the model's training data stops; anything after it, the model can't reliably answer. Mid-2025 for current frontier models.
- MCP
- Model Context Protocol. The open standard that lets any AI client talk to any tool. USB-C for AI.
- Multimodal
- Models that handle more than text — images, audio, video, code.
- Plugin
- A bundle of skills, MCP servers, commands, and hooks. Install once, get many capabilities.
- Prompt injection
- When adversarial text in tool output tricks an agent into doing something unintended. The new XSS.
- Quantization
- Compressing a model's weights to run on smaller hardware. Q4 quant of a 70B is shockingly close to full precision.
- RAG
- Retrieval-Augmented Generation. Pull relevant chunks from a vector DB, stuff into context, generate.
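Stripped of the vector DB, the pattern is retrieve-then-prompt. A toy sketch using word overlap in place of embeddings (the chunk texts and function names are illustrative):

```python
# Toy RAG: score chunks by word overlap with the query (real systems use
# embeddings and a vector DB), then stuff the best chunks into the prompt.
CHUNKS = [
    "MCP is an open standard for connecting AI clients to tools.",
    "A worktree checks out multiple branches into separate folders.",
    "Cron schedules commands, e.g. 0 7 * * 1-5 runs at 7 AM on weekdays.",
]

def retrieve(query, k=1):
    words = set(query.lower().split())
    return sorted(CHUNKS, key=lambda c: -len(words & set(c.lower().split())))[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("what is MCP")
```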
- Sandbox
- An isolated environment (Docker, VM, devcontainer) where an agent can run with skip-permissions safely. Rebuild the container if anything goes wrong.
- Skill
- A folder with a SKILL.md the model auto-loads when its description matches your request. Procedural memory.
- Subagent
- A specialized instance you spawn from your main session. Own context, own tools, returns one summary.
- Swarm
- Multiple subagents running in parallel. Fan-out / fan-in is the default pattern.
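Fan-out / fan-in has the same shape as any parallel map-reduce. A sketch with threads standing in for subagents (`run_subagent` is a stub; really each would be a fresh instance with its own context):

```python
from concurrent.futures import ThreadPoolExecutor

# Fan-out: each subagent gets its own task. Fan-in: collect one summary each.
def run_subagent(task):
    return f"summary of {task}"  # stub for a real subagent run

tasks = ["auth module", "billing module", "search module"]
with ThreadPoolExecutor(max_workers=3) as pool:
    summaries = list(pool.map(run_subagent, tasks))  # results keep task order
```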
- System prompt
- The instruction that boots up an instance. Sets identity, constraints, defaults.
- Token
- About 0.75 of a word. Models count by tokens. So do bills.
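That ratio makes back-of-envelope budgeting easy. A rough sketch, assuming the 0.75 words-per-token figure above and an illustrative price; real counts come from the model's tokenizer:

```python
# Rough token math: 1 token is about 0.75 words, so words / 0.75 gives tokens.
def estimate_tokens(text):
    return round(len(text.split()) / 0.75)

def estimate_cost(text, usd_per_million_tokens=3.0):  # illustrative price
    return estimate_tokens(text) * usd_per_million_tokens / 1_000_000

doc = "word " * 7500           # a 7,500-word document
tokens = estimate_tokens(doc)  # about 10,000 tokens
```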
- Vault
- Your second brain. A folder of markdown files an AI can read, link, and update. Obsidian is the typical home.
- Webhook
- An HTTP callback. Useful for triggering tasks from events rather than a clock.
- Worktree
- Git's built-in way to check out multiple branches of the same repo into separate folders. The single most useful Git trick when running parallel CC sessions.