Testing — Complete Coverage Summary

Document updated 2026-02-24. Reflects the current status of all tests. Version: v1.0.0.

Requirement: to run the tests and quality tooling, install the dev extra:

pip install architect-ai-cli[dev]

This includes: pytest, pytest-cov, pytest-asyncio, black, ruff, mypy.

Overall results

Integration scripts (scripts/)

| File | Tests | Status | Requires API key |
|---|---|---|---|
| test_phase1.py | 6 | Passed | No |
| test_phase2.py | 7 | Passed | No |
| test_phase3.py | 5 | Passed | No |
| test_phase4.py | 3 | Passed | No |
| test_phase5.py | 5 | Passed | No |
| test_phase6.py | 4 + 1 skip | Passed | No (1 skip) |
| test_phase7.py | 11 | Passed | No |
| test_phase8.py | 7 | Passed | No |
| test_phase9.py | 24 | Passed | No |
| test_phase10.py | 35 | Passed | No |
| test_phase11.py | 9 | Passed | No |
| test_phase12.py | 39 | Passed | No |
| test_phase13.py | 54 | Passed | No |
| test_phase14.py | 6 | Passed | No |
| test_v3_m1.py | 38 | Passed | No |
| test_v3_m2.py | 22 | Passed | No |
| test_v3_m3.py | 34 | Passed | No |
| test_v3_m4.py | 44 | Passed | No |
| test_v3_m5.py | 41 | Passed | No |
| test_v3_m6.py | 23 | Passed | No |
| test_phase15.py | 29 | Passed | No |
| test_phase16.py | 24 | Passed | No |
| test_phase17.py | 31 | Passed | No |
| test_phase18.py | 32 | Passed | No |
| test_phase_b.py | ~104 checks | Passed | No |
| test_phase_c_e2e.py | 31 | Passed | No |
| test_integration.py | 54 (47+7) | 47 passed, 7 expected failures | 7 require a key |
| test_config_loader.py | 37 | Passed | No |
| test_mcp_internals.py | 47 | Passed | No |
| test_streaming.py | 33 | Passed | No |
| test_parallel_execution.py | 29 | Passed | No |
| TOTAL scripts | ~848 | Passed | 7 expected failures without a key |

pytest unit tests (tests/)

| Directory | Tests | What it covers |
|---|---|---|
| tests/test_hooks/ | 29 | HookExecutor, HooksRegistry, HookEvent |
| tests/test_guardrails/ | 24 | GuardrailsEngine, quality gates, code rules |
| tests/test_skills/ | 31 | SkillsLoader, SkillInstaller |
| tests/test_memory/ | 32 | ProceduralMemory, correction patterns |
| tests/test_sessions/ | 22 | SessionManager, SessionState, generate_session_id |
| tests/test_reports/ | 20 | ExecutionReport, ReportGenerator, collect_git_diff |
| tests/test_dryrun/ | 23 | DryRunTracker, PlannedAction, WRITE_TOOLS/READ_TOOLS |
| tests/test_ralph/ | 90 | RalphLoop, RalphConfig, LoopIteration, RalphLoopResult |
| tests/test_pipelines/ | 83 | PipelineRunner, PipelineConfig, PipelineStep, variables, conditions |
| tests/test_checkpoints/ | 48 | CheckpointManager, Checkpoint, create/list/rollback |
| tests/test_reviewer/ | 47 | AutoReviewer, ReviewResult, build_fix_prompt, get_recent_diff |
| tests/test_parallel/ | 43 | ParallelRunner, ParallelConfig, WorkerResult, worktrees |
| tests/test_dispatch/ | 36 | DispatchSubagentTool, DispatchSubagentArgs, types, tools |
| tests/test_health/ | 28 | CodeHealthAnalyzer, HealthSnapshot, HealthDelta, FunctionMetric |
| tests/test_competitive/ | 19 | CompetitiveEval, CompetitiveConfig, CompetitiveResult, ranking |
| tests/test_telemetry/ | 20 (9 skip) | ArchitectTracer, NoopTracer, NoopSpan, create_tracer, SERVICE_VERSION |
| tests/test_presets/ | 37 | PresetManager, AVAILABLE_PRESETS, apply, list_presets |
| tests/test_bugfixes/ | 41 | Validation of BUG-3 through BUG-7 (code_rules, dispatch, telemetry, health, parallel) |
| TOTAL pytest | 687 | Phases A + B + C + D + Bugfixes |

The 7 failing tests in test_integration.py make real calls to the OpenAI API (sections 1 and 2). They fail with AuthenticationError because no OPENAI_API_KEY is configured. This is the expected behavior in CI without credentials.
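Gating live-API tests like these in CI can be sketched with a standard skip decorator. The sketch below is a hypothetical illustration using stdlib unittest; the project's scripts are standalone and may gate differently.

```python
import os
import unittest

# Hypothetical gate: skip the whole class unless OPENAI_API_KEY is set.
requires_api_key = unittest.skipUnless(
    os.environ.get("OPENAI_API_KEY"),
    "real OpenAI calls require OPENAI_API_KEY",
)

@requires_api_key
class LiveLLMTests(unittest.TestCase):
    def test_basic_completion(self):
        pass  # would hit the real API here
```

Without the key the tests are reported as skipped rather than failed, which keeps CI green while still exercising them locally when credentials are available.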


Coverage by module

src/architect/tools/ — Local tools

| Source file | Test file(s) | What is tested |
|---|---|---|
| filesystem.py | test_phase1, test_phase9, test_v3_m6, test_integration | read_file, write_file, edit_file, delete_file, list_files — real operations, path traversal, dry-run, write modes |
| patch.py | test_phase9, test_v3_m6 | apply_patch — single-hunk, multi-hunk, pure insertion, format errors, diff output |
| search.py | test_phase10, test_v3_m6 | search_code (regex), grep (literal), find_files (glob) — case insensitivity, patterns, context |
| commands.py | test_phase13 | run_command — blocklist (layer 1), allowed_only (layer 2), timeout + truncation (layer 3), directory sandboxing (layer 4), extra patterns, extra safe commands, sensitivity classification |
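The four-layer check order for run_command can be sketched roughly as follows. The pattern lists and the function name here are illustrative assumptions, not the real commands.py API; only the layering idea comes from the table above.

```python
import re
import shlex

BLOCKLIST = [r"\brm\s+-rf\b", r"\bsudo\b"]   # layer 1: always denied (assumed patterns)
ALLOWED = {"pytest", "ruff", "git"}          # layer 2: allowlist used in allowed_only mode

def check_command(cmd: str, allowed_only: bool = False) -> str:
    # Layer 1: hard blocklist, checked first regardless of mode.
    for pattern in BLOCKLIST:
        if re.search(pattern, cmd):
            return "blocked"
    # Layer 2: in allowed_only mode, only whitelisted binaries may run.
    if allowed_only and shlex.split(cmd)[0] not in ALLOWED:
        return "blocked"
    # Layers 3-4 (timeout/truncation, directory sandboxing) would apply at
    # execution time, not at this static-check stage.
    return "allowed"
```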

src/architect/core/ — Agent loop

| Source file | Test file(s) | What is tested |
|---|---|---|
| loop.py | test_v3_m1, test_parallel_execution | AgentLoop.run(), _check_safety_nets (5 conditions), _graceful_close (4 StopReasons), _should_parallelize, _execute_tool_calls_batch (sequential vs parallel, order preserved) |
| state.py | test_v3_m1, test_parallel_execution | StopReason (7 members), AgentState, StepResult, _CLOSE_INSTRUCTIONS (4 keys), ToolCallResult |
| context.py | test_v3_m2, test_phase11 | ContextManager — _estimate_tokens, _is_above_threshold, is_critically_full, manage(), _summarize_steps, _format_steps_for_summary, _count_tool_exchanges, truncate_tool_result, enforce_window, maybe_compress |
| hooks.py | test_v3_m4, test_phase15, test_parallel_execution | HookExecutor — 10 lifecycle events (HookEvent enum), HookDecision (ALLOW/BLOCK/MODIFY), exit code protocol, env vars, async hooks, matcher/file_patterns filtering, HooksRegistry, backward-compatible run_post_edit; legacy PostEditHooks |
| evaluator.py | test_phase12 | SelfEvaluator — basic mode, full mode, result evaluation |
| mixed_mode.py | test_phase3, test_v3_m3 | MixedModeRunner — no longer the default, backward compat |
| shutdown.py | test_phase7 | GracefulShutdown — initial state, reset, should_stop, AgentLoop integration |
| timeout.py | test_phase7 | StepTimeout — no timeout, clean exit, handler restoration, raises |
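As a rough illustration of the kind of accounting a context manager performs, a chars-per-token heuristic plus a threshold check might look like the sketch below. Both the ~4 chars/token ratio and the names are assumptions, not the project's _estimate_tokens implementation.

```python
def estimate_tokens(text: str) -> int:
    # Common rough approximation: about 4 characters per token.
    return max(1, len(text) // 4)

def is_above_threshold(used_tokens: int, window: int, threshold: float = 0.8) -> bool:
    # True once usage crosses the fraction of the window that triggers
    # summarization/compression.
    return used_tokens / window >= threshold
```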

src/architect/llm/ — LLM adapter

| Source file | Test file(s) | What is tested |
|---|---|---|
| adapter.py | test_streaming, test_phase2, test_phase7, test_integration | completion_stream (fully mocked), _parse_arguments, _try_parse_text_tool_calls, _prepare_messages_with_caching, _normalize_response, StreamChunk/LLMResponse/ToolCall models, retry logic |
| cache.py | test_phase14 | LocalLLMCache — deterministic SHA-256 keys, TTL, hit/miss |
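The behavior tested for LocalLLMCache — a deterministic SHA-256 key plus TTL expiry — can be sketched as below. This is a minimal sketch under assumed names, not the real implementation.

```python
import hashlib
import json
import time

def cache_key(model: str, messages: list) -> str:
    # sort_keys makes the serialization canonical, so the same model +
    # messages always hash to the same key.
    payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class TTLCache:
    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._store: dict = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # miss
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]
            return None  # expired
        return value     # hit

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())
```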

src/architect/mcp/ — MCP (Model Context Protocol)

| Source file | Test file(s) | What is tested |
|---|---|---|
| client.py | test_mcp_internals, test_phase4 | MCPClient init (headers, token, URL), _parse_sse (8 scenarios), _parse_response (JSON/SSE/fallback), _resolve_token (4 sources), _next_id (sequence), _ensure_initialized (mocked handshake) |
| adapter.py | test_mcp_internals, test_phase4 | MCPToolAdapter — name prefixing, schema generation, dynamic args_model, required/optional fields, type mapping, _extract_content (4 formats), execute (success/errors) |
| discovery.py | test_phase4 | MCPDiscovery — server discovery |
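A minimal version of the SSE parsing exercised by the _parse_sse tests might look like the following simplified sketch; the real parser covers 8 scenarios (multi-line data, comments, fallbacks) that this sketch deliberately omits.

```python
def parse_sse(body: str) -> list:
    # Events in Server-Sent Events are separated by blank lines; each event's
    # payload lives in lines prefixed with "data:".
    events = []
    for block in body.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(line[len("data:"):].strip())
    return events
```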

src/architect/config/ — Configuration

| Source file | Test file(s) | What is tested |
|---|---|---|
| schema.py | test_config_loader, test_v3_m4, test_phase13, test_phase14 | AppConfig, AgentConfig, ContextConfig, MCPServerConfig, HookConfig, HooksConfig, LoggingConfig, CommandsConfig — Pydantic validation, extra='forbid', defaults |
| loader.py | test_config_loader | deep_merge (8 tests), load_yaml_config (5), load_env_overrides (6), apply_cli_overrides (10), load_config pipeline (5), Pydantic validation in the pipeline (3) |
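The recursive merge semantics exercised by the deep_merge tests can be illustrated with a minimal sketch (an assumption about the behavior, not the project's exact function):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on scalar conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested dicts
        else:
            merged[key] = value                           # scalars/lists: override wins
    return merged
```

This is the usual merge order for layered config (defaults ← YAML ← env ← CLI): each layer is folded in as the override.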

src/architect/execution/ — Execution engine

| Source file | Test file(s) | What is tested |
|---|---|---|
| engine.py | test_phase1, test_v3_m4, test_parallel_execution | ExecutionEngine — execute, dry-run, run_post_edit_hooks, hooks integration |
| policies.py | test_phase1, test_parallel_execution | ConfirmationPolicy — yolo, confirm-all, confirm-sensitive |
| validators.py | test_phase1, test_v3_m6 | validate_path — path traversal prevention |

src/architect/costs/ — Cost tracking

| Source file | Test file(s) | What is tested |
|---|---|---|
| tracker.py | test_phase14, test_phase11 | CostTracker — record, summary, format_summary_line |
| prices.py | test_phase14 | PriceLoader — per-model prices, default_prices.json |
| __init__.py | test_phase14 | BudgetExceededError — budget exceeded |

src/architect/agents/ — Agents and prompts

| Source file | Test file(s) | What is tested |
|---|---|---|
| prompts.py | test_v3_m3 | BUILD_PROMPT (5 phases: ANALIZAR→PLANIFICAR→EJECUTAR→VERIFICAR→CORREGIR), PLAN_PROMPT, REVIEW_PROMPT, DEFAULT_PROMPTS |
| registry.py | test_v3_m3, test_phase3 | DEFAULT_AGENTS (4 agents), get_agent (YAML + defaults merge), list_available_agents, resolve_agents_from_yaml, AgentNotFoundError, CLI overrides |

src/architect/indexer/ — Repository indexer

| Source file | Test file(s) | What is tested |
|---|---|---|
| tree.py | test_phase10 | RepoIndexer — basic, excludes, file_info, languages |
| cache.py | test_phase10 | IndexCache — set/get, TTL expiration |

src/architect/logging/ — Logging system

| Source file | Test file(s) | What is tested |
|---|---|---|
| levels.py | test_v3_m5 | HUMAN level (25, between INFO and WARNING) |
| human.py | test_v3_m5 | HumanFormatter.format_event, HumanLog methods, HumanLogHandler filtering |
| setup.py | test_v3_m5, test_phase5 | configure_logging, dual pipeline (JSON file + human-readable stderr), quiet mode, verbose levels |

src/architect/cli.py — CLI (Click)

| Test file(s) | What is tested |
|---|---|
| test_phase6, test_phase8, test_v3_m3 | JSON output format, exit codes, stdout/stderr separation, CLI help, agents command, validate-config, full init without an LLM, dry-run without an API key, build as the default command |

v4 Phase A — Hooks, Guardrails, Skills, Memory

| Source file | Test file(s) | What is tested |
|---|---|---|
| core/hooks.py | test_phase15 (29 tests) | HookEvent (10 values), HookDecision (3 values), HookResult, HookConfig, HooksRegistry (registration, get_hooks, has_hooks), HookExecutor (_build_env, execute_hook, run_event with matcher/file_patterns, backward-compatible run_post_edit), exit code protocol (0=ALLOW, 2=BLOCK, other=Error), async hooks, timeout |
| core/guardrails.py | test_phase16 (24 tests) | GuardrailsEngine — check_file_access (protected_files globs), check_command (blocked_commands regex), check_edit_limits (max_files/lines), check_code_rules (severity warn/block), record_command/record_edit, should_force_test, run_quality_gates (subprocess, timeout, required vs optional), state tracking |
| skills/loader.py | test_phase17 (31 tests) | SkillsLoader — load_project_context (.architect.md, AGENTS.md, CLAUDE.md), discover_skills (local + installed), _parse_skill (YAML frontmatter), get_relevant_skills (glob matching), build_system_context; SkillInfo dataclass |
| skills/installer.py | test_phase17 | SkillInstaller — install_from_github (sparse checkout), create_local (SKILL.md template), list_installed, uninstall |
| skills/memory.py | test_phase18 (32 tests) | ProceduralMemory — 6 CORRECTION_PATTERNS (direct, negation, clarification, should_be, wrong_approach, absolute_rule), detect_correction, add_correction (dedup), add_pattern, _load/_append_to_file, get_context, analyze_session_learnings |
| config/schema.py | test_phase15-18, test_config_loader | HookItemConfig, HooksConfig (10 events + post_edit compat), GuardrailsConfig, QualityGateConfig, CodeRuleConfig, SkillsConfig, MemoryConfig — Pydantic validation, defaults, extra='forbid' |
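The hook exit-code protocol (0=ALLOW, 2=BLOCK, anything else=Error) maps naturally to a small helper. The enum and function names in this sketch are assumptions chosen to mirror the names in the table, not the project's code.

```python
from enum import Enum

class HookDecision(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ERROR = "error"

def decision_from_exit_code(code: int) -> HookDecision:
    # Protocol described above: 0 allows the action, 2 blocks it,
    # any other exit code is treated as a hook error.
    if code == 0:
        return HookDecision.ALLOW
    if code == 2:
        return HookDecision.BLOCK
    return HookDecision.ERROR
```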

v4 Phase B — Sessions, Reports, Dry Run, CI/CD Flags

| Source file | Test file(s) | What is tested |
|---|---|---|
| features/sessions.py | test_phase_b (B1, 8 tests), tests/test_sessions/ (22 tests) | SessionManager — save/load/list/cleanup/delete, SessionState round-trip, generate_session_id (format + uniqueness), message truncation (>50 → last 30 kept), corrupt JSON → None, newest-first ordering, special characters, StopReason round-trip |
| features/report.py | test_phase_b (B2, 8 tests), tests/test_reports/ (20 tests) | ExecutionReport, ReportGenerator — to_json (parseable + all keys), to_markdown (tables + sections), to_github_pr_comment (collapsible `<details>`), status icons (OK/WARN/FAIL), zero values, empty collections, long paths, collect_git_diff |
| features/dryrun.py | test_phase_b (B4, 6 tests), tests/test_dryrun/ (23 tests) | DryRunTracker — record_action, get_plan_summary, action_count, disjoint WRITE_TOOLS/READ_TOOLS, _summarize_action (5 code paths), interleaved reads and writes, complex tool_input / truncation |
| cli.py (B3 flags) | test_phase_b (B3, 5 tests) | CLI flags: --json, --dry-run, --report, --report-file, --session, --confirm-mode, --context-git-diff, --exit-code-on-partial; commands: architect sessions, architect cleanup, architect resume NONEXISTENT → exit 3; exit code constants (0, 1, 2, 3, 4, 5, 130) |

Base plan v4 Phase C — Ralph Loop, Parallel, Pipelines, Checkpoints, Auto-Review

| Source file | Test file(s) | What is tested |
|---|---|---|
| features/ralph.py | tests/test_ralph/ (90 tests) | RalphLoop — full iteration, clean context per iteration, safety nets (max_iterations, max_cost, max_time), _run_checks (subprocess, exit codes), _build_iteration_prompt (with failing checks and their outputs), RalphConfig dataclass, LoopIteration, RalphLoopResult, stop_reason (5 values), worktree isolation, agent_factory pattern |
| features/pipelines.py | tests/test_pipelines/ (83 tests) | PipelineRunner — sequential execution, _substitute_variables ({{name}}), _check_condition (shell exit code), _run_checks, _create_checkpoint, from_step resume, dry_run mode, PipelineConfig/PipelineStep dataclasses, PipelineStepResult, output_var capture, conditional steps, YAML parsing |
| features/parallel.py | tests/test_parallel/ (43 tests) | ParallelRunner — _create_worktrees, _run_worker (subprocess), cleanup_worktrees, round-robin of tasks and models, WorkerResult dataclass, ParallelConfig, WORKTREE_PREFIX, ProcessPoolExecutor, per-worker error handling |
| features/checkpoints.py | tests/test_checkpoints/ (48 tests) | CheckpointManager — create (git add + commit), list_checkpoints (git log --grep, format %H\|%s\|%at), rollback (git reset --hard), get_latest, has_changes_since, Checkpoint dataclass (frozen), short_hash, CHECKPOINT_PREFIX, no changes → None |
| agents/reviewer.py | tests/test_reviewer/ (47 tests) | AutoReviewer — review_changes (clean context, agent_factory), build_fix_prompt, get_recent_diff (subprocess git diff), ReviewResult dataclass, REVIEW_SYSTEM_PROMPT, "no issues" detection (case-insensitive), error handling (LLM failure → ReviewResult with error), AutoReviewConfig |
| cli.py (C commands) | test_phase_c_e2e.py (31 tests) | CLI: architect loop, architect pipeline, architect parallel, architect parallel-cleanup; ralph + checks integration, pipeline + variables + conditions, parallel + worktrees, checkpoints + list + rollback, auto-review flow |
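The {{name}} variable substitution tested for PipelineRunner can be sketched as below; the function name and the leave-unknowns-untouched behavior are assumptions for illustration.

```python
import re

def substitute_variables(text: str, variables: dict) -> str:
    # Replace {{name}} placeholders; names with no binding are left as-is.
    def repl(match):
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", repl, text)
```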

Base plan v4 Phase D — Dispatch, Health, Eval, Telemetry, Presets

| Source file | Test file(s) | What is tested |
|---|---|---|
| tools/dispatch.py | tests/test_dispatch/ (36 tests) | DispatchSubagentTool — DispatchSubagentArgs validation, VALID_SUBAGENT_TYPES (explore/test/review), SUBAGENT_ALLOWED_TOOLS per type, SUBAGENT_MAX_STEPS=15, SUBAGENT_SUMMARY_MAX_CHARS=1000, execute with a mocked agent_factory, error handling |
| core/health.py | tests/test_health/ (28 tests) | CodeHealthAnalyzer — take_before/after_snapshot, compute_delta, FunctionMetric (frozen dataclass), HealthSnapshot fields, HealthDelta.to_report() markdown, LONG_FUNCTION_THRESHOLD (50), DUPLICATE_BLOCK_SIZE (6), AST analysis without radon |
| features/competitive.py | tests/test_competitive/ (19 tests) | CompetitiveEval — CompetitiveConfig, CompetitiveResult, run() with a mocked ParallelRunner, _run_checks_in_worktree, _rank_results (composite score), generate_report markdown |
| telemetry/otel.py | tests/test_telemetry/ (20 tests, 9 skip) | ArchitectTracer — start_session context manager, trace_llm_call, trace_tool, NoopTracer/NoopSpan, create_tracer factory (enabled/disabled), SERVICE_NAME/SERVICE_VERSION constants. 9 tests are skipped if OpenTelemetry is not installed |
| config/presets.py | tests/test_presets/ (37 tests) | PresetManager — AVAILABLE_PRESETS (5), apply() generates .architect.md + config.yaml, list_presets(), overwrite behavior, preset content validation |
| (bugfixes) | tests/test_bugfixes/ (41 tests) | BUG-3: code_rules pre-execution (11), BUG-4: dispatch wiring (5), BUG-5: telemetry wiring (8), BUG-6: health wiring (6), BUG-7: parallel config propagation (11) |

Integration tests (test_integration.py)

60 assertions exercising end-to-end flows across multiple modules:

| Section | Tests | Status | Notes |
|---|---|---|---|
| 0. Prerequisites | 4 | Passed | Imports, version, tools, config |
| 1. LLM proxy — direct calls | 4 | Requires API key | Basic completion, with tools, multiple tools, usage |
| 2. Streaming — real-time responses | 3 | Requires API key | Basic streaming, tool calls, usage info |
| 3. MCP — real servers | 3 | Passed | Client init, mocked handshake, mocked tool call |
| 4. CLI end-to-end | 5 | Passed | Help, version, agents list, validate-config, dry-run |
| 5. YAML config — complex configurations | 6 | Passed | Full YAML, merge, env vars, defaults |
| 6. Safety nets — watchdogs | 4 | Passed | Timeout, shutdown, max_steps, context full |
| 7. CLI + MCP — full flow | 3 | Passed | Config with MCP, mocked discovery, tools adapter |
| 8. Post-edit hooks | 5 | Passed | run_for_tool, matching, truncation, disabled |
| 9. Local tools | 8 | Passed | read/write/edit/delete/list/search/grep/find |
| 10. Context manager | 6 | Passed | estimate_tokens, threshold, manage, summarize |
| 11. Cost tracker | 3 | Passed | Basic tracking, budget exceeded, format line |

What is NOT tested (known gaps)

These areas have no automated coverage but are hard to test without real infrastructure:

| Area | Reason |
|---|---|
| Real LLM (integration sections 1-2) | Requires OPENAI_API_KEY. Works with a key; verified manually |
| Real MCP server (live HTTP) | Requires a running MCP server. test_phase4 tests against mocks; test_mcp_internals covers the internals exhaustively |
| Full agent loop (LLM → Tools → LLM) | Requires an API key for the complete cycle. The individual parts are tested separately |
| Real streaming over the network | test_streaming.py tests against fully mocked generators; real streaming requires an API key |
| Real SIGINT/SIGTERM | test_phase7 tests GracefulShutdown in isolation; real signals in a live process are flaky in CI |

All internal functions, parsing, validation, security, and decision logic are covered without any external credentials.


QA — v0.16.1

After implementing v4 Phase A, a full QA pass was performed:

  1. All 25 test scripts were run (597 original tests + 116 new ones)
  2. 5 bugs were found and fixed:
    • CostTracker.format_summary_line() — AttributeError from a misreferenced field
    • PriceLoader._load_prices() — dict access with [] instead of get() on nested keys
    • HUMAN log level — level registered twice via logging.addLevelName()
    • HumanFormatter._summarize_args() — ValueError from .index() on strings without a separator
    • CommandTool — incorrect reference to args.timeout instead of args.timeout_seconds
  3. 5 test scripts were updated to use EXPECTED_VERSION = "0.16.1"
  4. Final result: 713 tests passing, 7 expected failures (require an API key)

QA — v0.17.0

After implementing v4 Phase B:

  1. scripts/test_phase_b.py was created, with ~35 tests and ~104 checks
  2. pytest unit tests were created: tests/test_sessions/ (22), tests/test_reports/ (20), tests/test_dryrun/ (23)
  3. 4 bugs were found and fixed (QA3):
    • GuardrailsEngine.check_command() — output redirection should not be blocked
    • ReportGenerator.to_markdown() — timeline duration was not computed
    • Hardcoded version in tests — now read dynamically from __init__.py
    • _execute_tool_calls_batch — parallel execution timeout in CI
  4. Final result: ~817+ tests passing (scripts) + ~181 pytest tests (unit)

QA — v0.18.0 (base plan v4 Phase C)

After implementing Phase C:

  1. pytest unit tests were created: tests/test_ralph/ (90), tests/test_pipelines/ (83), tests/test_checkpoints/ (48), tests/test_reviewer/ (47), tests/test_parallel/ (43)
  2. scripts/test_phase_c_e2e.py was created, with 31 E2E tests (C1-C5 + combined)
  3. 3 bugs were found and fixed (QA4):
    • BUG-1: RalphLoop ran iterations with shared context — fixed to create a FRESH agent per iteration via agent_factory
    • BUG-2: ParallelRunner._create_worktrees() did not isolate workers properly — fixed to use git worktree with dedicated branches
    • BUG-3: CheckpointManager.list_checkpoints() misparsed the git log output — fixed with the pipe-separated format %H|%s|%at
  4. Final result: ~848 tests passing (scripts) + 504 pytest tests (unit) + 31 E2E (Phase C)
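The pipe-separated git log format involved in BUG-3 (`%H|%s|%at`) can be parsed as in the sketch below; the Checkpoint fields and helper name are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Checkpoint:
    commit_hash: str
    subject: str
    timestamp: int

    @property
    def short_hash(self) -> str:
        return self.commit_hash[:7]

def parse_log_line(line: str) -> Checkpoint:
    # Expects one line of `git log --format='%H|%s|%at'` output. The subject
    # may itself contain '|', so split the hash off the left and the
    # timestamp off the right rather than splitting on every pipe.
    commit_hash, rest = line.split("|", 1)
    subject, timestamp = rest.rsplit("|", 1)
    return Checkpoint(commit_hash, subject, int(timestamp))
```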

QA — v0.19.0 / v1.0.0 (base plan v4 Phase D)

After implementing Phase D:

  1. pytest unit tests were created: tests/test_dispatch/ (36), tests/test_health/ (28), tests/test_competitive/ (19), tests/test_telemetry/ (20, 9 skip), tests/test_presets/ (37)
  2. 7 bugs were found and fixed (QA-D):
    • BUG-1 (CRITICAL): @cli.command → @main.command for eval and init — broke the CLI module import
    • BUG-2 (MEDIUM): inconsistent version across pyproject.toml, __init__.py, and cli.py
    • BUG-3 (HIGH): code_rules with severity: block did not prevent writes — the check ran AFTER the write. Fix: moved to pre-execution
    • BUG-4 (MEDIUM): the dispatch_subagent tool existed but was never registered in the CLI run command
    • BUG-5 (MEDIUM): TelemetryConfig was parsed but create_tracer() was never called
    • BUG-6 (MEDIUM): HealthConfig was parsed but CodeHealthAnalyzer was never invoked
    • BUG-7 (MEDIUM): parallel workers did not propagate --config or --api-base
  3. 41 bug-validation tests were created in tests/test_bugfixes/test_bugfixes.py
  4. Final result: 687 pytest passed, 9 skipped, 0 failures + 31 E2E + ~848 scripts

How to run

# All tests (no API key needed)
for f in scripts/test_*.py; do python3.12 "$f"; done

# A single test
python3.12 scripts/test_phase13.py

# With an API key (for the full integration tests)
OPENAI_API_KEY=sk-... python3.12 scripts/test_integration.py

All scripts are standalone: they do not require pytest, use internal ok()/fail()/section() helpers, and exit with code 0 (all OK) or 1 (failures).
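A minimal sketch of what such helpers could look like — only the names follow the convention above; the implementation is an assumption:

```python
_failures = 0

def section(title: str) -> None:
    # Print a visual separator for a group of related checks.
    print(f"\n=== {title} ===")

def ok(label: str) -> None:
    print(f"  PASS: {label}")

def fail(label: str) -> None:
    # Record the failure so the script can exit non-zero at the end.
    global _failures
    _failures += 1
    print(f"  FAIL: {label}")

def exit_code() -> int:
    # 0 when every check passed, 1 when any check failed.
    return 1 if _failures else 0
```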