Best Practices

Tips for getting the most out of intake.


Writing good requirement sources

Be specific

The LLM extracts requirements better when sources are clear and specific.

Good:

The system must allow registration with email and password.
The password must have at least 8 characters, one uppercase letter, and one number.
The system must send a confirmation email within 30 seconds.

Bad:

We need a good and secure login.

Include acceptance criteria

Acceptance criteria translate directly into verifiable checks:

## FR-01: User registration

The system must allow registration with email and password.

### Acceptance criteria
- Email must be unique in the system
- Password minimum 8 characters
- Confirmation email sent in < 30 seconds
- The endpoint returns 201 on success, 409 if the email already exists

Separate functional from non-functional

intake distinguishes between functional requirements (what the system does) and non-functional requirements (how it does it). Separate them in your sources for better extraction:

# Functional requirements
- The user can register with email
- The user can log in with OAuth2

# Non-functional requirements
- Response time < 200ms for all endpoints
- The system must support 1000 concurrent users
- 99.9% availability

Multi-source: combining formats

One of intake’s strengths is combining multiple sources. Each source contributes different information:

# user-stories.md:  business requirements (Markdown)
# jira-export.json: technical requirements + current status (Jira)
# meeting-notes.txt: informal decisions (plain text)
# wireframes.png:   visual design (image)
intake init "My feature" \
  -s user-stories.md \
  -s jira-export.json \
  -s meeting-notes.txt \
  -s wireframes.png

What each format contributes

| Format | Best for |
| --- | --- |
| Markdown | Structured requirements, formal specs, design documents |
| Jira JSON | Current project status, priorities, links between issues |
| Plain text | Quick notes, meeting decisions, raw ideas |
| YAML | Already-structured requirements with IDs and priorities |
| Confluence HTML | Existing documentation, RFCs, team decisions |
| PDF | External specs, regulatory documents, contracts |
| DOCX | Word documents from stakeholders |
| Images | Wireframes, mockups, architecture diagrams |
| URLs | Documentation in wikis, online RFCs, reference pages |
| Slack JSON | Team decisions, action items, technical conversations |
| GitHub Issues | Reported bugs, feature requests, backlog status |

Automatic deduplication

When combining sources, intake automatically deduplicates similar requirements using Jaccard similarity (threshold 0.75). If two sources say the same thing with slightly different words, only the first occurrence is kept.
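
Conceptually, the deduplication works like the following sketch (illustrative Python over word sets, not intake's actual implementation; only the 0.75 threshold comes from the docs):

```python
# Jaccard similarity over word sets, with first-occurrence-wins dedup.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def dedupe(requirements: list[str], threshold: float = 0.75) -> list[str]:
    kept: list[str] = []
    for req in requirements:
        # Keep a requirement only if it is not too similar to one already kept.
        if all(jaccard(req, k) < threshold for k in kept):
            kept.append(req)  # first occurrence wins
    return kept

reqs = [
    "The system must send a confirmation email within 30 seconds",
    "The system must send a confirmation email within 30 seconds.",
    "The user can log in with OAuth2",
]
print(dedupe(reqs))  # the near-duplicate second entry is dropped
```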

Conflict detection

intake also detects conflicts between sources. For example, if one document says “use PostgreSQL” and another says “use MongoDB”, it is reported as a conflict with a recommendation.


Choosing the generation mode

intake can auto-detect the optimal mode based on source complexity, or you can force it manually:

| Situation | Mode | Why |
| --- | --- | --- |
| Simple bug fix, 1 notes file | quick | Only generates context.md + tasks.md; fast and cheap |
| New feature of normal complexity | standard | All 6 complete files |
| System with many sources or extensive text | enterprise | Maximum detail and risks |
| Not sure | (omit --mode) | intake auto-detects |

# Auto-detection (recommended)
intake init "My feature" -s reqs.md

# Force mode
intake init "Quick fix" -s bug.txt --mode quick
intake init "Critical system" -s reqs.md -s jira.json -s confluence.html --mode enterprise

Auto-detection works as follows:

- quick: <500 words, 1 source, no structured content (jira, yaml, etc.)
- enterprise: 4+ sources OR >5000 words
- standard: everything else
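
The rules above can be sketched as a small decision function (simplified and assumed; the real heuristics may differ in detail):

```python
# Source kinds that count as "structured content" for the quick-mode rule.
STRUCTURED = {"jira", "yaml", "confluence", "github"}

def detect_mode(word_count: int, source_kinds: list[str]) -> str:
    if len(source_kinds) >= 4 or word_count > 5000:
        return "enterprise"
    if (word_count < 500 and len(source_kinds) == 1
            and not (set(source_kinds) & STRUCTURED)):
        return "quick"
    return "standard"

print(detect_mode(300, ["text"]))                              # quick
print(detect_mode(2000, ["markdown"]))                         # standard
print(detect_mode(800, ["markdown", "jira", "text", "yaml"]))  # enterprise
```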

Auto-detection can be disabled with spec.auto_mode: false in .intake.yaml.
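
In .intake.yaml that looks like:

```yaml
spec:
  auto_mode: false   # always use --mode (or the default) instead of auto-detection
```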


Choosing the right preset

| Situation | Preset | Why |
| --- | --- | --- |
| Prototyping an idea | minimal | Fast, cheap ($0.10), no extras |
| Normal team project | standard | Balance between detail and cost ($0.50) |
| Critical / regulated system | enterprise | Maximum detail, traceability, risks ($2.00) |
| First time using intake | standard | Shows all capabilities |

intake init "My feature" -s reqs.md --preset minimal

You can also start with minimal and switch to standard when needed:

# Quick first attempt
intake init "My feature" -s reqs.md --preset minimal

# Full version
intake init "My feature" -s reqs.md --preset standard

Cost management

Understanding the cost

Each intake init or intake add makes 2-3 LLM calls:

  1. Extraction — the most expensive, processes all source text
  2. Risk assessment — optional (disable with risk_assessment: false)
  3. Design — processes the extracted requirements

Reducing costs

| Strategy | How | Savings |
| --- | --- | --- |
| Use minimal preset | --preset minimal | ~80% (disables risks, lock, sources) |
| Disable risks | risk_assessment: false | ~30% (eliminates 1 of 3 LLM calls) |
| Use a cheaper model | --model gpt-3.5-turbo | Variable, depends on the model |
| Reduce temperature | temperature: 0.1 | Does not reduce cost, but improves consistency |
| Budget enforcement | max_cost_per_spec: 0.25 | Protects against surprises |
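
Combining the config-based strategies, a cost-conscious .intake.yaml fragment might look like the following (the key names come from the table above; their exact placement in the file is an assumption, so check your configuration reference):

```yaml
# Illustrative fragment; key nesting is an assumption.
risk_assessment: false     # skip the risk-assessment LLM call (~30% savings)
temperature: 0.1           # more deterministic output; does not reduce cost
max_cost_per_spec: 0.25    # stop generation past this budget (USD)
```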

Monitoring costs

# View the cost of a generated spec
intake show specs/my-feature/
# Shows: Cost: $0.0423

The cost is also recorded in spec.lock.yaml:

total_cost: 0.0423

Spec versioning

Specs as code

Generated specs are text files — ideal for versioning with git:

# Generate spec
intake init "Auth system" -s reqs.md

# Commit
git add specs/auth-system/
git commit -m "Add auth system spec v1"

Comparing versions

Use intake diff to compare two versions:

# After regenerating with new sources
intake diff specs/auth-system-v1/ specs/auth-system-v2/

Shows added, removed, and modified requirements, tasks, and checks.

Detecting source changes

The spec.lock.yaml has source hashes. You can verify if sources have changed:

intake show specs/my-feature/
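
The idea behind the hash check can be sketched as follows (hypothetical: the lock file's exact schema and hash algorithm are assumptions here; SHA-256 over the file bytes is used for illustration):

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash a source file's bytes, as a lock file might record it."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def changed_sources(recorded: dict[str, str]) -> list[str]:
    """Return source paths whose current hash differs from the recorded one."""
    return [path for path, digest in recorded.items()
            if file_sha256(path) != digest]
```

If `changed_sources` returns a non-empty list, the spec was generated from sources that have since been edited and is a candidate for regeneration.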

To integrate verification into CI/CD (GitHub Actions, GitLab CI, Jenkins, Azure DevOps), see CI/CD Integration.


Security

Redacting sensitive information

If your sources contain sensitive information, use security.redact_patterns:

security:
  redact_patterns:
    - "sk-[a-zA-Z0-9]{20,}"       # API keys
    - "\\b\\d{4}-\\d{4}-\\d{4}\\b" # Card numbers
    - "password:\\s*\\S+"            # Passwords in configs
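
The effect of patterns like these can be illustrated in a few lines of Python (this is not intake's internal code, and the replacement marker is an assumption):

```python
import re

# The same three patterns as in the YAML above, as raw Python regexes.
patterns = [
    r"sk-[a-zA-Z0-9]{20,}",        # API keys
    r"\b\d{4}-\d{4}-\d{4}\b",      # Card numbers
    r"password:\s*\S+",            # Passwords in configs
]

def redact(text: str) -> str:
    for pattern in patterns:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

print(redact("token sk-abcdefghij0123456789XY and password: hunter2"))
```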

Excluding sensitive files

By default, intake never includes these files in the output:

security:
  redact_files:
    - "*.env"
    - "*.pem"
    - "*.key"

Add more patterns for your project:

security:
  redact_files:
    - "*.env"
    - "*.pem"
    - "*.key"
    - "credentials.*"
    - "secrets.*"

For a complete security guide (threat model, air-gapped mode, compliance), see Security.


Project organization

my-project/
├── .intake.yaml              # intake configuration
├── specs/                    # Generated specs
│   ├── auth-system/          #   Spec 1
│   │   ├── requirements.md
│   │   ├── design.md
│   │   ├── tasks.md
│   │   ├── acceptance.yaml
│   │   ├── context.md
│   │   ├── sources.md
│   │   └── spec.lock.yaml
│   └── payments/             #   Spec 2
│       └── ...
├── docs/                     # Requirement sources
│   ├── requirements.md
│   ├── jira-export.json
│   └── meeting-notes.txt
├── src/                      # Source code
└── tests/                    # Tests

For containerization and team deployment, see Deployment.

One spec per feature

Generate one spec for each independent feature or component:

intake init "Auth system" -s docs/auth-reqs.md
intake init "Payments" -s docs/payment-stories.md -s docs/jira-payments.json
intake init "Notifications" -s docs/notif-ideas.txt

This allows:

- Verifying each feature independently
- Comparing versions of a specific feature
- Assigning features to different teams or agents

Task tracking

After generating a spec, you can use intake task to track implementation progress directly in tasks.md:

# View the status of all tasks
intake task list specs/my-feature/

# Mark a task as in progress
intake task update specs/my-feature/ 1 in_progress

# Mark as completed with a note
intake task update specs/my-feature/ 1 done --note "Tests passing"

# Filter by status
intake task list specs/my-feature/ --status pending --status blocked

Available states: pending, in_progress, done, blocked

The state is persisted directly in tasks.md, so it is versioned with git along with the rest of the spec.


Using URLs as sources

You can pass URLs directly as sources without downloading manually:

# Internal wiki
intake init "API review" -s https://wiki.company.com/rfc/auth

# Public documentation
intake init "Integration" -s https://docs.example.com/api/v2

intake downloads the page, converts the HTML to Markdown, and processes it like any other source. It auto-detects whether the content comes from Confluence, Jira, or GitHub based on patterns in the URL.
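
That URL-based detection can be pictured as a simple classifier (hypothetical sketch; the actual patterns intake matches are not documented here):

```python
def detect_source_type(url: str) -> str:
    """Guess a source type from URL patterns; falls back to plain HTML."""
    if "atlassian.net/wiki" in url or "/confluence/" in url:
        return "confluence"
    if "atlassian.net/browse" in url or "/jira/" in url:
        return "jira"
    if "github.com" in url:
        return "github"
    return "html"

print(detect_source_type("https://docs.example.com/api/v2"))  # html
```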


Recommended workflow

1. Gather requirements (any format, files or URLs)
           |
2. intake init "Feature" -s source1 -s source2
   (optionally: --mode quick|standard|enterprise)
           |
3. Review the generated spec
   - requirements.md: complete requirements?
   - tasks.md: reasonable tasks?
   - acceptance.yaml: verifiable checks?
           |
4. Iterate if necessary
   - intake add specs/feature/ -s new-source.md --regenerate
           |
5. Implement (manually or with an AI agent)
   - intake export specs/feature/ -f architect
   - intake task update specs/feature/ 1 in_progress
   - intake watch specs/feature/ -p .  (continuous verification)
           |
6. Track progress
   - intake task list specs/feature/
   - intake task update specs/feature/ 1 done --note "Implemented"
           |
7. Verify
   - intake verify specs/feature/ -p .
           |
8. Iterate until all checks pass

For workflow patterns by team size (individual, team, enterprise), see Workflows.


Using the MCP server

If you work with MCP-compatible AI agents (Claude Code, Claude Desktop, etc.), the intake MCP server lets the agent access specs directly:

# Start the server
intake mcp serve --specs-dir ./specs

The agent can then use tools like intake_verify, intake_get_tasks, intake_update_task without needing to export first. The implement_next_task prompt gives the agent the complete context to start implementing.

See MCP Server for configuration and usage.


Continuous verification with watch

During development, use intake watch to automatically re-verify every time you save a file:

intake watch specs/my-feature/ -p .

This is especially useful when:

  • You are implementing multiple tasks from a spec
  • You want immediate feedback on whether checks pass
  • You are pair programming with an AI agent

You can filter by tags to run only a subset of checks:

# Only tests and security
intake watch specs/my-feature/ -p . -t tests -t security

See Watch Mode for configuration and details.