Compliance and regulatory frameworks

Why compliance in AI-assisted development

Using AI agents in software development introduces specific regulatory risks: code whose provenance is not tracked, changes that bypass human review, and agent activity that leaves no audit trail.

licit evaluates these aspects against established regulatory frameworks.


EU AI Act (Regulation EU 2024/1689)

Scope

The EU AI Act is the first comprehensive regulatory framework for artificial intelligence. It entered into force in August 2024, with gradual enforcement through August 2027.

licit evaluates the articles relevant to development teams using AI agents:

Evaluated articles

| Article | Name | What licit evaluates |
|---|---|---|
| Art. 9(1) | Risk management system | Guardrails, quality gates, budget limits, security scanning |
| Art. 10(1) | Data and data governance | Deployer perspective — document provider practices |
| Art. 12(1) | Record keeping — automatic logging | Git history, audit trail, provenance tracking, OTel |
| Art. 13(1) | Transparency | Annex IV, config changelog, requirements traceability |
| Art. 14(1) | Human oversight | Dry-run, human review gate, quality gates, budget limits |
| Art. 14(4)(a) | Oversight — understand capabilities | Same evidence as Art. 14(1) |
| Art. 14(4)(d) | Oversight — ability to intervene | Dry-run + rollback |
| Art. 26(1) | Deployer — compliant use | Agent configs present |
| Art. 26(5) | Deployer — monitoring | Same evidence as Art. 12(1) |
| Art. 27(1) | FRIA | FRIA document completed |
| Annex IV | Technical documentation | Annex IV document generated |

Evaluator scoring

Each article has a dedicated evaluation method with numeric scoring. The score is converted to a status using _score_to_status(score, compliant_at, partial_at):

| Article | Indicators (score) | Compliant at | Partial at |
|---|---|---|---|
| Art. 9 | Guardrails +1, quality gates +1, budget +1, scanning +1 | 3+ | 1+ |
| Art. 10 | Always PARTIAL (deployer does not train) | n/a | n/a |
| Art. 12 | Git +1, audit trail +2, provenance +1, OTel +1 | 3+ | 1+ |
| Art. 13 | Annex IV +2, changelog +1, traceability +1 | 2+ | 1+ |
| Art. 14 | Dry-run +1, review gate +2, quality gates +1, budget +1 | 3+ | 1+ |

The evaluator generates actionable recommendations with concrete licit commands (e.g., “Run: licit trace — to start tracking code provenance”).

FRIA — Fundamental Rights Impact Assessment

The FRIA (Fundamental Rights Impact Assessment) is mandatory for high-risk AI systems under Art. 27. licit generates an interactive FRIA in 5 steps with 20 questions and auto-detection of 8 fields:

  1. System Description (5 questions): Purpose, AI technology, models, scope, human review.
  2. Fundamental Rights Identification (4 questions): Personal data, employment, safety, discrimination.
  3. Impact Assessment (3 questions): Risk level, maximum impact, detection speed.
  4. Mitigation Measures (5 questions): Guardrails, scanning, testing, audit trail, additional measures.
  5. Monitoring & Review (3 questions): Review frequency, responsible party, incident process.

Auto-detection: For fields like system_purpose, guardrails, security_scanning, testing, and audit_trail, licit infers the answer from the project’s ProjectContext and EvidenceBundle.
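A minimal sketch of this auto-detection step, assuming a simplified `EvidenceBundle` whose field names are inferred from the evidence mapping later in this document (the real class may differ):

```python
from dataclasses import dataclass

@dataclass
class EvidenceBundle:
    """Simplified stand-in for licit's evidence bundle (assumed fields)."""
    has_guardrails: bool = False
    has_audit_trail: bool = False
    has_human_review_gate: bool = False

def autodetect_fria_fields(ev: EvidenceBundle) -> dict:
    """Pre-fill FRIA answers from collected evidence instead of asking."""
    answers = {}
    if ev.has_guardrails:
        answers["guardrails"] = "Guardrails configured in agent configs"
    if ev.has_audit_trail:
        answers["audit_trail"] = "Audit trail present under .licit/"
    if ev.has_human_review_gate:
        answers["human_review"] = "Human review gate enforced in CI/CD"
    return answers  # fields not detected fall back to interactive questions
```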

Command:

licit fria            # New interactive questionnaire
licit fria --update   # Update existing FRIA

The generated FRIA document is stored under .licit/ alongside licit's other compliance data (see Evidence sources below).

Annex IV — Technical Documentation

Annex IV defines the technical documentation required for high-risk AI systems. licit generates this documentation by auto-populating 27 template variables extracted from the project's ProjectContext and EvidenceBundle.

6 auto-generated sections:

  1. General Description — Purpose, AI components, languages, frameworks
  2. Development Process — Version control, provenance, agent configs
  3. Monitoring & Control — CI/CD, audit trail, changelog
  4. Risk Management — Guardrails, quality gates, budget, oversight, FRIA
  5. Testing & Validation — Test framework, security scanning
  6. Changes & Lifecycle — Tracking mechanisms

Each section without evidence generates an actionable recommendation (e.g., “Run licit trace to begin tracking code provenance”).

Command:

licit annex-iv --organization "My Company" --product "My Product"

OWASP Agentic Top 10 (2025)

Scope

The OWASP Top 10 for Agentic AI Security identifies the 10 main security risks in applications that use AI agents. licit evaluates the project’s posture against each risk with numeric scoring.

Status: Implemented since v0.5.0. Run with licit verify --framework owasp.

Evaluated risks

| ID | Risk | What licit evaluates |
|---|---|---|
| ASI01 | Excessive Agency | Guardrails, quality gates, budget limits, agent configs |
| ASI02 | Prompt Injection | vigil scanning, guardrails, human review gate |
| ASI03 | Supply Chain Vulnerabilities | SCA tools (Snyk/Semgrep/CodeQL/Trivy), changelog, config versioning |
| ASI04 | Insufficient Logging & Monitoring | Git history, audit trail, provenance, OTel |
| ASI05 | Improper Output Handling | Human review gate, quality gates, test suite |
| ASI06 | Lack of Human Oversight | Human review gate, dry-run, quality gates, rollback |
| ASI07 | Insufficient Sandboxing | Guardrails (blocked commands, protected files), CI/CD, agent configs |
| ASI08 | Unbounded Resource Consumption | Budget limits, quality gates |
| ASI09 | Poor Error Handling | Test suite, CI/CD, rollback capability |
| ASI10 | Sensitive Data Exposure | Protected file guardrails, security scanning, agent scope |

Evaluator scoring

Each risk has a dedicated evaluation method with numeric scoring. The score is converted to a status using _score_to_status(score, compliant_at, partial_at):

| Risk | Indicators (score) | Compliant at | Partial at |
|---|---|---|---|
| ASI01 | Guardrails +1, quality gates +1, budget +1, agent configs +1 | 3+ | 1+ |
| ASI02 | vigil +2, guardrails +1, human review +1 | 3+ | 1+ |
| ASI03 | SCA tools +2, changelog +1, agent configs +1 | 3+ | 1+ |
| ASI04 | Git +1, audit trail +2, provenance +1, OTel +1 | 3+ | 1+ |
| ASI05 | Human review +2, quality gates +1, test suite +1 | 3+ | 1+ |
| ASI06 | Human review +2, dry-run +1, quality gates +1, rollback +1 | 3+ | 1+ |
| ASI07 | Guardrails +2, CI/CD +1, agent configs +1 | 3+ | 1+ |
| ASI08 | Budget limits +2, quality gates +1 | 2+ | 1+ |
| ASI09 | Test suite +1, CI/CD +1, rollback +1 | 2+ | 1+ |
| ASI10 | Guardrails +1, security scanning +2, agent configs +1 | 3+ | 1+ |

ASI08 and ASI09 use compliant_at=2 because they have fewer available signals. The evaluator generates actionable recommendations with concrete tools (e.g., “Add AI-specific security scanning: vigil detects prompt injection patterns”).

Design note: The evaluator measures the presence of security tools, not their findings. A project with vigil installed but 50 critical findings gets the same score as one with 0 findings. Findings are relevant for the gap analyzer (Phase 6).
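One such per-risk evaluation method, illustrated for ASI08 using the weights from the scoring table above (budget limits +2 as the direct control, quality gates +1, compliant at 2+, partial at 1+). The evidence attribute names follow the evidence mapping below but are assumptions about licit's internals:

```python
from types import SimpleNamespace

def evaluate_asi08(ev) -> tuple:
    """Score ASI08 (Unbounded Resource Consumption) from evidence flags."""
    score = 0
    if getattr(ev, "has_budget_limits", False):
        score += 2   # direct control over resource consumption
    if getattr(ev, "has_quality_gates", False):
        score += 1
    if score >= 2:
        return score, "compliant"
    if score >= 1:
        return score, "partial"
    return score, "non-compliant"

# Budget limits alone already reach the compliant threshold:
example = evaluate_asi08(SimpleNamespace(has_budget_limits=True))
```

Note that, consistent with the design note above, only the presence of the controls is scored, never their findings.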

Evidence mapping

Each OWASP risk maps to evidence collectible from ProjectContext and EvidenceBundle:

ASI01 (Excessive Agency)
  ├── ev.has_guardrails + ev.guardrail_count
  ├── ev.has_quality_gates + ev.quality_gate_count
  ├── ev.has_budget_limits
  └── ctx.agent_configs

ASI02 (Prompt Injection)
  ├── ctx.security.has_vigil (+2 — AI-specific scanning)
  ├── ev.has_guardrails
  └── ev.has_human_review_gate

ASI04 (Logging & Monitoring)
  ├── ctx.git_initialized + ctx.total_commits
  ├── ev.has_audit_trail + ev.audit_entry_count (+2)
  ├── ev.has_provenance + ev.provenance_stats
  └── ev.has_otel

ASI06 (Human Oversight)
  ├── ev.has_human_review_gate (+2 — critical control)
  ├── ev.has_dry_run
  ├── ev.has_quality_gates
  └── ev.has_rollback

ASI08 (Unbounded Resources)
  ├── ev.has_budget_limits (+2 — direct control)
  └── ev.has_quality_gates

Command:

licit verify --framework owasp   # Evaluate OWASP only
licit verify --framework all     # Evaluate EU AI Act + OWASP

How licit evaluates compliance

Evaluation process

1. Detect     → ProjectDetector analyzes the project
2. Collect    → EvidenceCollector gathers evidence
3. Evaluate   → Evaluators apply framework requirements
4. Classify   → Each requirement: compliant / partial / non-compliant / n/a
5. Report     → Report with evidence, gaps, and recommendations
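The five steps above can be sketched as a small orchestration function. The component names (ProjectDetector, EvidenceCollector, evaluators) come from the text; the interfaces and the `Result` shape here are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Result:
    requirement: str
    status: str                  # compliant / partial / non-compliant / n/a
    recommendation: str = ""

def run_pipeline(detect, collect, evaluators) -> dict:
    ctx = detect()                               # 1. Detect the project
    evidence = collect(ctx)                      # 2. Collect evidence
    results = []
    for evaluate in evaluators:                  # 3. Evaluate each framework,
        results.extend(evaluate(ctx, evidence))  #    4. classifying each requirement
    return {                                     # 5. Report with gaps
        "results": results,
        "gaps": [r for r in results if r.status not in ("compliant", "n/a")],
    }
```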

Evidence sources

| Source | What it provides | Status |
|---|---|---|
| Git history | Code provenance, contributors, frequency | Functional (v0.2.0) |
| Session logs | AI agent session logs (Claude Code) | Functional (v0.2.0) |
| Agent config changelog | Changes in agent configs with severity | Functional (v0.3.0) |
| Agent configs | Guardrails, models used, code rules | Functional (v0.1.0) |
| CI/CD configs | Human review gates, security steps | Functional (v0.1.0) |
| Architect reports | Audit trail, execution quality | Phase 7 |
| SARIF files | Security findings (vulnerabilities) | Phase 7 |
| .licit/ data | FRIA, Annex IV, changelog, provenance store | Functional (v0.4.0+) |

Provenance evidence (licit trace) directly feeds the record-keeping (Art. 12) and transparency (Art. 13) articles of the EU AI Act. The config changelog (licit changelog) feeds the transparency (Art. 13) and deployer obligations (Art. 26) articles. Both also feed the oversight and data-exposure controls (ASI06, ASI10) of the OWASP Agentic Top 10.

Compliance levels

| Status | Meaning | Required action |
|---|---|---|
| compliant | Requirement fully met | None |
| partial | Requirement partially met | Improve evidence or controls |
| non-compliant | Requirement not met | Implement missing controls |
| n/a | Not applicable to the project | None |
| not-evaluated | Not yet evaluated | Run evaluation |
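One possible encoding of these status levels; how licit represents them internally is not specified in this document:

```python
from enum import Enum

class ComplianceStatus(Enum):
    COMPLIANT = "compliant"
    PARTIAL = "partial"
    NON_COMPLIANT = "non-compliant"
    NOT_APPLICABLE = "n/a"
    NOT_EVALUATED = "not-evaluated"

    @property
    def needs_action(self) -> bool:
        """Only compliant and n/a require no action (per the table above)."""
        return self not in (
            ComplianceStatus.COMPLIANT,
            ComplianceStatus.NOT_APPLICABLE,
        )
```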

Compliance reports

Available formats

| Format | Recommended use |
|---|---|
| Markdown | Human review, PRs, documentation |
| JSON | Integration with other tools, dashboards |
| HTML | Presentation to stakeholders, audits |
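Consuming the JSON format from a script or dashboard might look like the following. The schema shown here (a `summary` object with per-framework counts) is an assumption for illustration; inspect a generated report for the real structure.

```python
import json

raw = """
{
  "summary": {
    "eu_ai_act": {"compliant": 13, "total": 18},
    "owasp_agentic": {"compliant": 6, "total": 10}
  }
}
"""
report = json.loads(raw)

for framework, s in report["summary"].items():
    pct = 100 * s["compliant"] / s["total"]
    print(f"{framework}: {pct:.0f}% ({s['compliant']}/{s['total']})")
```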

Report structure

# Compliance Report — My Project
Generated: 2026-03-10

## Summary
- EU AI Act: 72% compliant (13/18 controls)
- OWASP Agentic: 60% compliant (6/10 controls)

## EU AI Act
### Article 9 — Risk Management
Status: PARTIAL
Evidence: FRIA exists but incomplete
Recommendation: Complete FRIA sections 3-5

### Article 14 — Human Oversight
Status: COMPLIANT
Evidence: GitHub Actions requires approval for deployment
...

## Gaps
| Priority | Requirement | Gap | Effort |
|---|---|---|---|
| 1 | ART-9-1 | No risk assessment | Medium |
| 2 | ASI-01 | No guardrails | Low |

CI/CD Gate

licit can act as a compliance gate in CI/CD pipelines:

# .github/workflows/compliance.yml
name: Compliance Check
on: [push, pull_request]

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # Required for git analysis

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install licit
        run: pip install licit-ai-cli

      - name: Run compliance check
        run: licit verify
        # Exit 0 = pass, Exit 1 = fail, Exit 2 = warnings

Exit codes:

| Code | Result | Pipeline |
|---|---|---|
| 0 | All critical requirements met | Pass |
| 1 | Some critical requirement not met | Fail |
| 2 | Some requirement partially met | Warning (configurable) |
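A wrapper script can implement the "configurable" handling of exit code 2, downgrading partial compliance to a warning. The `gate_result` function and its `strict` flag are illustrative, not a built-in licit option:

```python
def gate_result(code: int, strict: bool = False) -> bool:
    """Map a licit exit code to a pipeline pass/fail decision."""
    if code == 0:
        return True          # all critical requirements met
    if code == 2 and not strict:
        return True          # partial compliance: warn but let the pipeline pass
    return False             # exit code 1 (or strict mode): fail the pipeline

# In CI, something like:
#   import subprocess, sys
#   code = subprocess.run(["licit", "verify"]).returncode
#   sys.exit(0 if gate_result(code) else 1)
```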

Future frameworks (V1+)

licit is designed to support additional frameworks:

| Framework | Status | Description |
|---|---|---|
| NIST AI RMF | Planned (V1) | NIST Risk Management Framework |
| ISO/IEC 42001 | Planned (V1) | AI management system |
| SOC 2 AI | Under consideration | AI-specific SOC 2 controls |
| IEEE 7000 | Under consideration | Ethical system design |

The frameworks/ architecture allows adding new frameworks by implementing an evaluator with the corresponding Protocol interface.
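A sketch of what that Protocol interface might look like; the method name and signature are assumptions, not licit's actual API:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class FrameworkEvaluator(Protocol):
    def evaluate(self, ctx, evidence) -> list:
        """Return one classified result per framework requirement."""
        ...

# A future NIST AI RMF evaluator would then just implement this shape
# (placeholder body for illustration):
class NistAiRmfEvaluator:
    def evaluate(self, ctx, evidence) -> list:
        return []
```

Structural typing means the new evaluator needs no inheritance from a base class; any object with a matching `evaluate` method satisfies the Protocol.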