Configuration

intake works without configuration; all it needs is an LLM API key. To customize behavior, add an .intake.yaml file at the root of your project.


Loading priority

Configuration is loaded in layers. Each layer overrides the previous one:

CLI flags  >  .intake.yaml  >  preset  >  defaults
  1. Defaults: default values in the code
  2. Preset: if --preset is used, a predefined set is applied
  3. .intake.yaml: project configuration file
  4. CLI flags: command-line options always win
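The layering amounts to a merge in which later layers win. A minimal Python sketch with hypothetical names (a shallow merge; real nested config would need a deep merge):

```python
# Sketch of layered config resolution (illustrative, not intake's actual code).
# Later layers override earlier ones: defaults < preset < .intake.yaml < CLI flags.

def resolve_config(defaults, preset=None, yaml_file=None, cli_flags=None):
    config = dict(defaults)
    for layer in (preset, yaml_file, cli_flags):
        if layer:
            config.update(layer)  # later layers overwrite earlier keys
    return config

defaults = {"model": "claude-sonnet-4", "temperature": 0.2}
preset = {"temperature": 0.1}
cli = {"model": "gpt-4o"}

print(resolve_config(defaults, preset=preset, cli_flags=cli))
# {'model': 'gpt-4o', 'temperature': 0.1}
```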

.intake.yaml file

Create an .intake.yaml file at the root of your project. Full example:

# LLM model configuration
llm:
  model: claude-sonnet-4         # Any model supported by LiteLLM
  api_key_env: ANTHROPIC_API_KEY  # Environment variable with the API key
  max_cost_per_spec: 0.50         # Maximum budget per spec (USD)
  temperature: 0.2                # 0.0 = deterministic, 1.0 = creative
  max_retries: 3                  # Retries on failure
  timeout: 120                    # Timeout per LLM call (seconds)

# Project configuration
project:
  name: my-project                # Name (auto-detected if empty)
  stack: []                       # Technology stack (auto-detected if empty)
  language: en                    # Language of generated content
  conventions: {}                 # Custom conventions (key: value)

# Spec configuration
spec:
  output_dir: ./specs             # Where to save generated specs
  requirements_format: ears       # ears | user-stories | bdd | free
  design_depth: moderate          # minimal | moderate | detailed
  task_granularity: medium        # coarse | medium | fine
  include_sources: true           # Include source traceability
  version_specs: true             # Create versioned directories
  generate_lock: true             # Generate spec.lock.yaml
  risk_assessment: true           # Include risk assessment
  auto_mode: true                 # Auto-detect quick/standard/enterprise

# Verification configuration
verification:
  auto_generate_tests: true       # Generate acceptance checks
  test_output_dir: ./tests/generated
  checks: []                      # Additional custom checks
  timeout_per_check: 120          # Timeout per check (seconds)

# Export configuration
export:
  default_format: generic         # architect | claude-code | cursor | kiro | generic
  architect_include_guardrails: true
  architect_pipeline_template: standard
  claude_code_generate_claude_md: true

# Connectors (preparation for Phase 2)
connectors:
  jira:
    url: ""                       # Jira base URL
    email: ""                     # Authentication email
    api_token_env: JIRA_API_TOKEN # Environment variable with the API token
  confluence:
    url: ""                       # Confluence base URL
    email: ""                     # Authentication email
    api_token_env: CONFLUENCE_API_TOKEN  # Environment variable with the API token
  github:
    token_env: GITHUB_TOKEN       # Environment variable with the token

# Security
security:
  redact_patterns: []             # Regex patterns to redact from output
  redact_files:                   # Files to never include
    - "*.env"
    - "*.pem"
    - "*.key"

Complete field reference

llm section

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| model | string | claude-sonnet-4 | LLM model. Any model that LiteLLM supports. |
| api_key_env | string | ANTHROPIC_API_KEY | Name of the environment variable containing the API key. |
| max_cost_per_spec | float | 0.50 | Maximum budget per spec in USD. If exceeded, analysis stops. |
| temperature | float | 0.2 | Model temperature. Lower = more deterministic. |
| max_retries | int | 3 | Number of retries on LLM failures. |
| timeout | int | 120 | Timeout per LLM call in seconds. |

Supported models:

| Provider | Examples | Environment variable |
|----------|----------|----------------------|
| Anthropic | claude-sonnet-4, claude-opus-4, claude-haiku-4-5 | ANTHROPIC_API_KEY |
| OpenAI | gpt-4o, gpt-4, gpt-3.5-turbo | OPENAI_API_KEY |
| Google | gemini/gemini-pro, gemini/gemini-flash | GEMINI_API_KEY |
| Local | ollama/llama3, ollama/mistral | (no key needed) |
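For example, to use a local Ollama model, which needs no API key, the configuration should only need the model name (ollama/llama3 here is a placeholder for whatever model you have pulled):

```yaml
llm:
  model: ollama/llama3    # any locally available Ollama model
```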

project section

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| name | string | "" | Project name. If empty, it is generated from the init description. |
| stack | list[string] | [] | Technology stack. If empty, it is auto-detected from project files. |
| language | string | en | Language for generated content (e.g. en, es, fr). |
| conventions | dict | {} | Project conventions as key-value pairs. |

Stack auto-detection looks for 28+ marker files in the project directory:

| File | Detected stack |
|------|----------------|
| package.json | javascript, node |
| tsconfig.json | typescript |
| pyproject.toml | python |
| Cargo.toml | rust |
| go.mod | go |
| pom.xml | java, maven |
| Dockerfile | docker |
| next.config.js | nextjs |

It also inspects the contents of pyproject.toml and package.json to detect frameworks (fastapi, django, react, vue, etc.).
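The marker-file lookup can be sketched like this (an illustration, not intake's actual implementation; the mapping shows a subset of the 28+ markers):

```python
# Illustrative stack auto-detection from marker files.
from pathlib import Path

MARKERS = {
    "package.json": ["javascript", "node"],
    "tsconfig.json": ["typescript"],
    "pyproject.toml": ["python"],
    "Cargo.toml": ["rust"],
    "go.mod": ["go"],
    "pom.xml": ["java", "maven"],
    "Dockerfile": ["docker"],
    "next.config.js": ["nextjs"],
}

def detect_stack(root="."):
    """Return the technologies whose marker files exist under root."""
    stack = []
    for marker, techs in MARKERS.items():
        if (Path(root) / marker).exists():
            stack.extend(t for t in techs if t not in stack)
    return stack
```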

spec section

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| output_dir | string | ./specs | Output directory for specs. |
| requirements_format | string | ears | Requirements format. Options: ears, user-stories, bdd, free. |
| design_depth | string | moderate | Level of design detail. Options: minimal, moderate, detailed. |
| task_granularity | string | medium | Task granularity. Options: coarse, medium, fine. |
| include_sources | bool | true | Include sources.md with requirement-to-source traceability. |
| version_specs | bool | true | Create versioned subdirectories for specs. |
| generate_lock | bool | true | Generate spec.lock.yaml with hashes and metadata. |
| risk_assessment | bool | true | Run risk assessment (additional LLM phase). |
| auto_mode | bool | true | Auto-detect generation mode (quick/standard/enterprise) based on source complexity. Ignored if --mode is used in the CLI. |

Generation modes:

| Mode | Auto-detection criteria | Generated files |
|------|-------------------------|-----------------|
| quick | <500 words, 1 source, no structure | context.md + tasks.md |
| standard | Everything that is not quick or enterprise | All 6 complete spec files |
| enterprise | 4+ sources OR >5000 words | All 6 files + detailed risks |
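Based on the criteria above, the auto-detection could be approximated like this (a sketch; intake's real heuristics may differ):

```python
# Approximation of mode auto-detection from the published criteria.
def detect_mode(word_count, num_sources, has_structure):
    if num_sources >= 4 or word_count > 5000:
        return "enterprise"
    if word_count < 500 and num_sources == 1 and not has_structure:
        return "quick"
    return "standard"  # everything that is not quick or enterprise
```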

Requirements formats:

| Format | Description | Best for |
|--------|-------------|----------|
| ears | Easy Approach to Requirements Syntax. Structured format with conditions. | Formal specifications |
| user-stories | "As a [role], I want [action] so that [benefit]". | Agile teams |
| bdd | Given/When/Then. Behavior-driven development. | Acceptance tests |
| free | Free format. No imposed structure. | Quick prototypes |
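To make the formats concrete, here is the same (hypothetical) requirement expressed in each one:

```text
ears:          WHEN the user submits an invalid email, the system SHALL display a validation error.
user-stories:  As a visitor, I want feedback on invalid emails so that I can correct my input.
bdd:           Given a signup form, When I submit an invalid email, Then a validation error is shown.
free:          Show an error if the email is invalid.
```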

Design levels:

| Level | Description |
|-------|-------------|
| minimal | Only main components and critical decisions. |
| moderate | Components, files, technical decisions, and dependencies. |
| detailed | All of the above plus interaction diagrams, edge cases, and performance considerations. |

Task granularity:

| Level | Description |
|-------|-------------|
| coarse | Few, large tasks. Each task covers a complete component. |
| medium | Balance between granularity and quantity. |
| fine | Small, atomic tasks. Each task is ~15-30 minutes of work. |

verification section

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| auto_generate_tests | bool | true | Automatically generate acceptance checks from requirements. |
| test_output_dir | string | ./tests/generated | Directory for generated tests. |
| checks | list[string] | [] | Additional custom checks. |
| timeout_per_check | int | 120 | Maximum timeout per individual check in seconds. |

export section

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| default_format | string | generic | Default export format. Options: architect, claude-code, cursor, kiro, generic. |
| architect_include_guardrails | bool | true | Include guardrails in architect pipelines. |
| architect_pipeline_template | string | standard | Pipeline template for architect. |
| claude_code_generate_claude_md | bool | true | Generate CLAUDE.md when exporting for Claude Code. |

connectors section (preparation)

Configuration for direct API connectors. Connectors are not yet implemented (they will arrive in a future version), but the configuration infrastructure is already in place.

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| jira.url | string | "" | Base URL of the Jira instance. |
| jira.email | string | "" | Email for Jira authentication. |
| jira.api_token_env | string | "JIRA_API_TOKEN" | Environment variable with the Jira API token. |
| confluence.url | string | "" | Base URL of the Confluence instance. |
| confluence.email | string | "" | Email for Confluence authentication. |
| confluence.api_token_env | string | "CONFLUENCE_API_TOKEN" | Environment variable with the Confluence API token. |
| github.token_env | string | "GITHUB_TOKEN" | Environment variable with the GitHub token. |

Note: currently, when a URI like jira://PROJ-123 is passed to -s, intake shows a warning indicating that the connector is not available. In the meantime, export the data from the web interface and use JSON files.

security section

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| redact_patterns | list[string] | [] | Regex patterns that will be removed from generated content. |
| redact_files | list[string] | ["*.env", "*.pem", "*.key"] | Glob patterns of files that will never be included. |
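A rough sketch of how these two settings could be applied, using hypothetical helper names (not intake's actual code):

```python
# redact_patterns: regexes scrubbed from generated text.
# redact_files: globs of files that are never read at all.
import fnmatch
import re

def redact_text(text, patterns):
    """Replace every match of each regex pattern with [REDACTED]."""
    for pattern in patterns:
        text = re.sub(pattern, "[REDACTED]", text)
    return text

def is_excluded(filename, globs=("*.env", "*.pem", "*.key")):
    """True if the filename matches any exclusion glob."""
    return any(fnmatch.fnmatch(filename, g) for g in globs)

print(redact_text("token=abc123secret", [r"token=\w+"]))  # [REDACTED]
print(is_excluded("prod.env"))                            # True
```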

Presets

Presets are predefined configurations for common use cases. They are applied with --preset:

intake init "My feature" -s reqs.md --preset minimal

Comparison

| Field | minimal | standard | enterprise |
|-------|---------|----------|------------|
| Use case | Quick prototype | Normal teams | Regulated / critical |
| max_cost_per_spec | $0.10 | $0.50 | $2.00 |
| temperature | 0.3 | 0.2 | 0.1 |
| requirements_format | free | ears | ears |
| design_depth | minimal | moderate | detailed |
| task_granularity | coarse | medium | fine |
| include_sources | false | true | true |
| risk_assessment | false | true | true |
| generate_lock | false | true | true |

When to use each preset

  • minimal: Quick prototyping, exploratory ideas, solo developer. Low cost, minimal output.
  • standard: The default option. Good balance between detail and cost for teams of 2-5 people.
  • enterprise: For large teams, regulated projects, or when complete traceability and exhaustive risk assessment are needed.

Environment variables

intake looks for these environment variables for LLM provider authentication:

| Variable | Provider | Example |
|----------|----------|---------|
| ANTHROPIC_API_KEY | Anthropic (Claude) | sk-ant-api03-... |
| OPENAI_API_KEY | OpenAI (GPT) | sk-... |

Set the variable according to your provider:

# Anthropic
export ANTHROPIC_API_KEY=sk-ant-api03-your-key-here

# OpenAI
export OPENAI_API_KEY=sk-your-key-here

If you use a different provider, configure llm.api_key_env in .intake.yaml:

llm:
  model: gemini/gemini-pro
  api_key_env: GEMINI_API_KEY
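The indirection is simple: the config stores the name of the variable, not the key itself, and the key is read from the environment at run time. A sketch with a hypothetical helper:

```python
# Illustrative api_key_env resolution: look up the named variable at run time.
import os

def resolve_api_key(api_key_env="ANTHROPIC_API_KEY"):
    key = os.environ.get(api_key_env)
    if not key:
        raise RuntimeError(f"Environment variable {api_key_env} is not set")
    return key
```

Keeping only the variable name in .intake.yaml means the file can be committed without ever containing a secret.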

Generate config automatically

If you don’t have an .intake.yaml, intake uses sensible defaults. To create a basic configuration file:

intake doctor --fix

This creates a minimal .intake.yaml that you can customize:

# intake configuration
llm:
  model: claude-sonnet-4
  # max_cost_per_spec: 0.50
project:
  name: ""
  language: en
  # stack: []
spec:
  output_dir: ./specs