# Agents
Agents are specialized roles that skills delegate to. Each agent has a focused job, a dedicated context window, and a specific set of tools. Skills spawn agents in parallel to gather information fast without overloading the main context.
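The fan-out described above can be sketched in plain Python. This is an illustration only: the agent functions, their return values, and the thread-based concurrency are hypothetical stand-ins for Catalyst's agents, not its actual internals.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for agents: each sees only the prompt it was
# given (its own isolated "context") and returns a short report.
def codebase_locator(prompt):
    return f"[locator] files matching: {prompt}"

def thoughts_locator(prompt):
    return f"[thoughts] documents about: {prompt}"

def pattern_finder(prompt):
    return f"[patterns] examples of: {prompt}"

def run_skill(prompt):
    """Fan one question out to several agents at once, then collect reports."""
    agents = [codebase_locator, thoughts_locator, pattern_finder]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent, prompt) for agent in agents]
        return [f.result() for f in futures]

reports = run_skill("payment refunds")
```

Because each agent works from its own prompt rather than the full conversation, the main context only pays for the returned reports, not for everything the agents read along the way.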
## catalyst-dev Agents

### Research Agents
| Agent | Purpose | Tools | Model | Source |
|---|---|---|---|---|
| codebase-locator | Find files, directories, and components | Grep, Glob, Bash(ls) | Haiku | Source |
| codebase-analyzer | Understand implementation details and patterns | Read, Grep, Glob | Sonnet | Source |
| codebase-pattern-finder | Find reusable patterns and code examples | Read, Grep, Glob | Sonnet | Source |
| thoughts-locator | Search thoughts repository for relevant documents | Grep, Glob, Bash(ls) | Haiku | Source |
| thoughts-analyzer | Analyze documentation and decisions | Read, Grep, Glob | Sonnet | Source |
| external-research | Research external repos and libraries | DeepWiki, Context7, Exa | Sonnet | Source |
### Infrastructure Agents

| Agent | Purpose | Tools | Model | Source |
|---|---|---|---|---|
| linear-research | Gather Linear data via CLI | Bash(linearis) | Haiku | Source |
| github-research | Research GitHub PRs and issues | Bash(gh) | Haiku | Source |
| sentry-research | Research Sentry errors | Bash(sentry-cli) | Haiku | Source |
## catalyst-pm Agents

### Research Agents

| Agent | Purpose | Model | Source |
|---|---|---|---|
| linear-research | Gather cycle, issue, and milestone data | Haiku | Source |
### Analyzer Agents

| Agent | Purpose | Model | Source |
|---|---|---|---|
| cycle-analyzer | Transform cycle data into health insights | Sonnet | Source |
| milestone-analyzer | Analyze milestone progress toward target dates | Sonnet | Source |
| backlog-analyzer | Analyze backlog health and organization | Sonnet | Source |
| github-linear-analyzer | Correlate GitHub PRs with Linear issues | Sonnet | Source |
| context-analyzer | Track context engineering adoption | Sonnet | Source |
## When to Use Which Agent

Most of the time, you don’t invoke agents directly. Skills like /research-codebase and /create-plan spawn the right agents automatically based on what you ask. When you run /research-codebase, Catalyst decides which combination of locators, analyzers, and pattern finders to launch in parallel — you just describe what you want to understand.
That said, you can invoke any agent directly when you have a quick, focused question and don’t need a full research workflow:
| Agent | Question it Answers | Example |
|---|---|---|
| codebase-locator | Where is X? | “Find all payment-related files” |
| codebase-analyzer | How does X work? | “Trace the authentication flow” |
| codebase-pattern-finder | Show me examples of X | “Show me how we handle API errors” |
| thoughts-locator | What do we know about X? | “Find research on caching strategy” |
| thoughts-analyzer | What were the decisions about X? | “What did we decide about the DB schema?” |
| external-research | What do libraries/docs say about X? | “How do React Server Components work?” |
## Invoking Agents

Use the @ prefix with the plugin name and agent name:

```
@catalyst-dev:codebase-locator find all payment-related files
@catalyst-dev:codebase-analyzer trace the authentication flow in src/auth/
@catalyst-pm:linear-research get current cycle issues
```

Claude Code has auto-complete for this — type @catalyst and it will suggest available agents.
## When to Invoke Directly vs Use a Skill

Use a skill (/research-codebase) when you want a comprehensive, multi-agent research workflow that saves artifacts for later phases. Skills coordinate multiple agents, manage context, and persist results.
Invoke an agent directly (@catalyst-dev:codebase-locator) when you have a quick, one-off question that doesn’t need the full workflow. This is faster and uses less context.
```
# Full research workflow — spawns multiple agents, saves to thoughts/
/research-codebase how does the payment system handle refunds?

# Quick direct question — just the locator agent
@catalyst-dev:codebase-locator find refund handler files
```

## Parallel vs Sequential
Use parallel when researching independent aspects:

```
@catalyst-dev:codebase-locator find payment files
@catalyst-dev:thoughts-locator search payment research
@catalyst-dev:codebase-pattern-finder show payment patterns
```

Use sequential when each step depends on the previous:

```
@catalyst-dev:codebase-locator find auth files
# Wait for results
@catalyst-dev:codebase-analyzer analyze src/auth/handler.js
```

## Agent Patterns
Agents follow five core patterns:
| Pattern | Purpose | Tools | Model |
|---|---|---|---|
| Locator | Find files and directories without reading contents | Grep, Glob, Bash(ls) | Haiku |
| Analyzer | Read specific files and trace data flow | Read, Grep, Glob | Sonnet |
| Pattern Finder | Find reusable patterns and examples | Read, Grep, Glob | Sonnet |
| Validator | Check correctness against specifications | Read, Bash, Grep | Sonnet |
| Aggregator | Collect and summarize from multiple sources | Read, Grep, Glob | Sonnet |
## Design Principles

- Documentarians, not critics — Report what exists without suggesting improvements
- Single responsibility — Each agent answers one type of question
- Tool minimalism — Only the tools needed for the task
- Structured output — Consistent format with file:line references
- Explicit boundaries — Include “What NOT to Do” sections
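These principles show up directly in how an agent is defined. Here is a rough sketch of what a Locator-pattern agent file might look like, assuming Claude Code's standard subagent format (a markdown file with YAML frontmatter); the wording, tool list, and boundary rules below are illustrative, not Catalyst's actual source.

```
---
name: codebase-locator
description: Find files, directories, and components relevant to a question
tools: Grep, Glob, Bash
model: haiku
---

You locate where things live in the codebase. Return file paths grouped
by purpose, with file:line references where useful.

## What NOT to Do
- Do not read file contents in depth or analyze implementations
- Do not suggest improvements; report only what exists
```

Note how one file encodes all five principles: a single question type, a minimal tool set, a cheap model, a structured output format, and an explicit "What NOT to Do" section.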
## Three-Tier Model Strategy

| Tier | Model | Use Case |
|---|---|---|
| 1 | Opus | Planning, complex analysis, implementation orchestration |
| 2 | Sonnet | Code analysis, PR workflows, structured research |
| 3 | Haiku | Fast lookups, data collection, file finding |
## Agent Teams
Section titled “Agent Teams”For complex implementations spanning multiple domains, agent teams enable multiple Claude Code instances working in parallel.
### When to Use Teams vs Subagents

| Scenario | Subagents | Agent Teams |
|---|---|---|
| Parallel research gathering | Best fit | Overkill |
| Code analysis / file search | Best fit | Overkill |
| Complex multi-file implementation | Can’t nest | Best fit |
| Cross-layer features (frontend + backend + tests) | Limited | Best fit |
| Cost-sensitive operations | Best fit | Too expensive |
### Team Structure

```
Lead (Opus) — Coordinates implementation
├── Teammate 1 (Sonnet) — Frontend changes
│   └── Subagents (Haiku/Sonnet)
├── Teammate 2 (Sonnet) — Backend changes
│   └── Subagents (Haiku/Sonnet)
└── Teammate 3 (Sonnet) — Test changes
    └── Subagents (Haiku/Sonnet)
```

Each teammate is a full Claude Code session that can spawn its own subagents — two-level parallelism that subagents alone cannot achieve.
```
/implement-plan --team
/oneshot --team PROJ-123
```

### Requirements

```
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
```