Why Compliance in AI-Assisted Development

The use of AI agents in software development introduces specific regulatory risks. licit evaluates these risks against established regulatory frameworks.


EU AI Act (Regulation EU 2024/1689)

Scope

The EU AI Act is the first comprehensive regulatory framework for artificial intelligence. It entered into force in August 2024, with gradual application through August 2027.

licit evaluates the articles relevant to development teams using AI agents:

Evaluated Articles

| Article | Name | What licit evaluates |
|---|---|---|
| Art. 9(1) | Risk management system | Guardrails, quality gates, budget limits, security scanning |
| Art. 10(1) | Data and data governance | Deployer perspective — document provider practices |
| Art. 12(1) | Record keeping — automatic logging | Git history, audit trail, provenance tracking, OTel |
| Art. 13(1) | Transparency | Annex IV, agent config changelog, requirements traceability |
| Art. 14(1) | Human oversight | Dry-run, human review gate, quality gates, budget limits |
| Art. 14(4)(a) | Oversight — understand capabilities | Same evidence as Art. 14(1) |
| Art. 14(4)(d) | Oversight — ability to intervene | Dry-run + rollback |
| Art. 26(1) | Deployer — compliant use | Agent configs present |
| Art. 26(5) | Deployer — monitoring | Same evidence as Art. 12(1) |
| Art. 27(1) | FRIA | FRIA document completed |
| Annex IV | Technical documentation | Annex IV document generated |

Evaluator Scoring

Each article has a dedicated evaluation method with numerical scoring. The score is converted to a status by _score_to_status(score, compliant_at, partial_at):

| Article | Indicators (score) | Compliant at | Partial at |
|---|---|---|---|
| Art. 9 | Guardrails +1, quality gates +1, budget +1, scanning +1 | 3+ | 1+ |
| Art. 10 | Always PARTIAL (deployer does not train) | — | — |
| Art. 12 | Git +1, audit trail +2, provenance +1, OTel +1 | 3+ | 1+ |
| Art. 13 | Annex IV +2, changelog +1, traceability +1 | 2+ | 1+ |
| Art. 14 | Dry-run +1, review gate +2, quality gates +1, budget +1 | 3+ | 1+ |
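The threshold conversion above can be sketched as follows. This is an illustrative reconstruction of the documented behavior, not licit's actual _score_to_status implementation:

```python
# Hypothetical sketch of the score-to-status conversion described above;
# licit's real _score_to_status may differ in details.
def score_to_status(score: int, compliant_at: int, partial_at: int) -> str:
    """Map a summed indicator score to a compliance status."""
    if score >= compliant_at:
        return "compliant"
    if score >= partial_at:
        return "partial"
    return "non-compliant"

# Art. 9 example: guardrails +1 and quality gates +1 present, budget and
# scanning missing -> score 2, thresholds compliant_at=3, partial_at=1
score_to_status(2, 3, 1)  # -> "partial"
```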

The evaluator generates actionable recommendations with concrete licit commands (e.g.: “Run: licit trace — to start tracking code provenance”).

FRIA — Fundamental Rights Impact Assessment

The FRIA (Fundamental Rights Impact Assessment) is mandatory for high-risk AI systems under Art. 27. licit generates an interactive FRIA in 5 steps with 20 questions and auto-detection of 8 fields:

  1. System Description (5 questions): Purpose, AI technology, models, scope, human review.
  2. Fundamental Rights Identification (4 questions): Personal data, employment, safety, discrimination.
  3. Impact Assessment (3 questions): Risk level, maximum impact, detection speed.
  4. Mitigation Measures (5 questions): Guardrails, scanning, testing, audit trail, additional measures.
  5. Monitoring & Review (3 questions): Review frequency, responsible party, incident process.

Auto-detection: For fields like system_purpose, guardrails, security_scanning, testing, and audit_trail, licit infers the answer from the project’s ProjectContext and EvidenceBundle.
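The inference step can be sketched roughly as below. The field names match those listed above, but the dict shapes standing in for ProjectContext and EvidenceBundle, and the generated answer strings, are assumptions rather than licit's actual API:

```python
# Illustrative sketch of FRIA auto-detection; context/evidence key names
# and answer wording are assumptions, not licit's real data model.
def autodetect_fria_fields(context: dict, evidence: dict) -> dict:
    """Pre-fill FRIA answers from detected project facts."""
    answers = {}
    if context.get("description"):
        answers["system_purpose"] = context["description"]
    if evidence.get("has_guardrails"):
        answers["guardrails"] = "Guardrails configured in agent config"
    if evidence.get("security_tools"):
        answers["security_scanning"] = ", ".join(evidence["security_tools"])
    if context.get("test_framework"):
        answers["testing"] = "Automated tests via " + context["test_framework"]
    if evidence.get("has_audit_trail"):
        answers["audit_trail"] = "Audit trail present under .licit/"
    return answers  # remaining questions are asked interactively
```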

Command:

licit fria            # New interactive questionnaire
licit fria --update   # Update existing FRIA

Generated files:

Annex IV — Technical Documentation

Annex IV defines the technical documentation required for AI systems. licit generates this documentation by auto-populating it from 27 template variables extracted from:

6 auto-generated sections:

  1. General Description — Purpose, AI components, languages, frameworks
  2. Development Process — Version control, provenance, agent configs
  3. Monitoring & Control — CI/CD, audit trail, changelog
  4. Risk Management — Guardrails, quality gates, budget, oversight, FRIA
  5. Testing & Validation — Test framework, security scanning
  6. Changes & Lifecycle — Tracking mechanisms

Each section without evidence generates an actionable recommendation (e.g.: “Run licit trace to begin tracking code provenance”).

Command:

licit annex-iv --organization "My Company" --product "My Product"

OWASP Agentic Top 10

Scope

The OWASP Agentic Top 10 identifies the ten most critical security risks in applications built on AI agents. licit evaluates the project’s posture against each risk.

Evaluated Risks

| ID | Risk | What licit evaluates |
|---|---|---|
| ASI-01 | Excessive Agency | Guardrails, protected files, blocked commands |
| ASI-02 | Uncontrolled Autonomy | Budget limits, dry-run, human approval |
| ASI-03 | Supply Chain Vulnerabilities | Security tools (Semgrep, Snyk, etc.) |
| ASI-04 | Improper Output Handling | Output validation, quality gates |
| ASI-05 | Insecure Communication | Connector configuration, data protection |
| ASI-06 | Insufficient Monitoring | Audit trail, logging, OpenTelemetry |
| ASI-07 | Identity and Access Mismanagement | Agent permissions, access scope |
| ASI-08 | Inadequate Sandboxing | Execution isolation, rollback capability |
| ASI-09 | Prompt Injection | Input validation, guardrail configuration |
| ASI-10 | Insufficient Logging | Structured logs, session traceability |

Evidence Mapping

Each OWASP risk maps to collectible evidence:

ASI-01 (Excessive Agency)
  ├── has_guardrails → Are guardrails configured?
  ├── guardrail_count → How many controls exist?
  └── has_human_review_gate → Is there human review?

ASI-02 (Uncontrolled Autonomy)
  ├── has_budget_limits → Are there budget limits?
  ├── has_dry_run → Does a dry-run mode exist?
  └── has_rollback → Is there rollback capability?

ASI-06 (Insufficient Monitoring)
  ├── has_audit_trail → Is there an audit trail?
  ├── audit_entry_count → How many entries?
  └── has_otel → Is there OpenTelemetry instrumentation?
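A mapping like the one above lends itself to a simple coverage computation. The key names below mirror the indicators shown, but the data structure is an assumption about licit's internal model, not its actual code:

```python
# Sketch of the risk-to-evidence mapping; keys mirror the indicators listed
# above and are assumptions about licit's internals.
EVIDENCE_MAP = {
    "ASI-01": ("has_guardrails", "guardrail_count", "has_human_review_gate"),
    "ASI-02": ("has_budget_limits", "has_dry_run", "has_rollback"),
    "ASI-06": ("has_audit_trail", "audit_entry_count", "has_otel"),
}

def evidence_coverage(risk_id: str, evidence: dict) -> float:
    """Fraction of a risk's indicators with truthy collected evidence."""
    keys = EVIDENCE_MAP[risk_id]
    return sum(1 for k in keys if evidence.get(k)) / len(keys)
```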

How licit Evaluates Compliance

Evaluation Process

1. Detect    → ProjectDetector analyzes the project
2. Collect   → EvidenceCollector gathers evidence
3. Evaluate  → Evaluators apply framework requirements
4. Classify  → Each requirement: compliant / partial / non-compliant / n/a
5. Report    → Report with evidence, gaps, and recommendations
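Stages 4 and 5 can be sketched as turning classified requirements into gap entries like those in the report's Gaps table. The function and field names here are illustrative, not licit's API:

```python
# Sketch of classify -> report: collect partial and non-compliant
# requirements into gap rows, worst status first. Names are assumptions.
def build_gaps(statuses: dict) -> list:
    """List requirements that still need action, non-compliant first."""
    gaps = [{"requirement": r, "status": s}
            for r, s in statuses.items()
            if s in ("non-compliant", "partial")]
    # "non-compliant" sorts before "partial" alphabetically
    return sorted(gaps, key=lambda g: g["status"])
```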

Evidence Sources

| Source | What it provides | Status |
|---|---|---|
| Git history | Code provenance, contributors, frequency | Functional (v0.2.0) |
| Session logs | AI agent session logs (Claude Code) | Functional (v0.2.0) |
| Agent config changelog | Changes in agent configs with severity | Functional (v0.3.0) |
| Agent configs | Guardrails, models used, code rules | Functional (v0.1.0) |
| CI/CD configs | Human review gates, security steps | Functional (v0.1.0) |
| Architect reports | Audit trail, execution quality | Phase 7 |
| SARIF files | Security findings (vulnerabilities) | Phase 7 |
| .licit/ data | FRIA, Annex IV, changelog, provenance store | Functional (v0.4.0 — all generators operational) |

Provenance evidence (licit trace) directly feeds the transparency (Art. 13) and record-keeping (Art. 12) articles of the EU AI Act. The agent config changelog (licit changelog) feeds the transparency (Art. 13) and deployer obligations (Art. 26) articles. Both feed the monitoring controls (ASI-06, ASI-10) of the OWASP Agentic Top 10.

Compliance Levels

| Status | Meaning | Action required |
|---|---|---|
| compliant | Requirement fully met | None |
| partial | Requirement partially met | Improve evidence or controls |
| non-compliant | Requirement not met | Implement missing controls |
| n/a | Does not apply to the project | None |
| not-evaluated | Not yet evaluated | Run evaluation |

Compliance Reports

Available Formats

| Format | Recommended use |
|---|---|
| Markdown | Human review, PRs, documentation |
| JSON | Integration with other tools, dashboards |
| HTML | Presentation to stakeholders, audits |

Report Structure

# Compliance Report — My Project
Generated: 2026-03-10

## Summary
- EU AI Act: 72% compliant (13/18 controls)
- OWASP Agentic: 60% compliant (6/10 controls)

## EU AI Act
### Article 9 — Risk Management
Status: PARTIAL
Evidence: FRIA exists but incomplete
Recommendation: Complete FRIA sections 3-5

### Article 14 — Human Oversight
Status: COMPLIANT
Evidence: GitHub Actions requires approval for deployment
...

## Gaps
| Priority | Requirement | Gap | Effort |
|---|---|---|---|
| 1 | ART-9-1 | No risk assessment | Medium |
| 2 | ASI-01 | No guardrails | Low |

CI/CD Gate

licit can act as a compliance gate in CI/CD pipelines:

# .github/workflows/compliance.yml
name: Compliance Check
on: [push, pull_request]

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # Required for git analysis

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install licit
        run: pip install licit-ai-cli

      - name: Run compliance check
        run: licit verify
        # Exit 0 = pass, Exit 1 = fail, Exit 2 = warnings

Exit codes:

| Code | Result | Pipeline |
|---|---|---|
| 0 | All critical requirements met | Pass |
| 1 | Some critical requirement not met | Fail |
| 2 | Some requirement partially met | Warning (configurable) |
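A step that downgrades exit code 2 to a warning instead of failing the job could look like the sketch below. The licit_verify function is a stub simulating partial compliance; in a real pipeline you would call licit verify directly:

```shell
# Sketch of configurable exit-code handling. 'licit_verify' is a stand-in
# stub that simulates exit code 2 (partial compliance); replace the stub
# with the real 'licit verify' command in an actual pipeline.
licit_verify() { return 2; }

licit_verify
status=$?
case "$status" in
  0) echo "compliance: pass" ;;
  1) echo "compliance: fail" >&2; exit 1 ;;
  2) echo "compliance: warning (partial)" ;;  # do not fail the job
esac
```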

Future Frameworks (V1+)

licit is designed to support additional frameworks:

| Framework | Status | Description |
|---|---|---|
| NIST AI RMF | Planned (V1) | NIST AI Risk Management Framework |
| ISO/IEC 42001 | Planned (V1) | AI management system |
| SOC 2 AI | Under consideration | AI-specific SOC 2 controls |
| IEEE 7000 | Under consideration | Ethical system design |

The frameworks/ architecture allows adding new frameworks by implementing an evaluator with the corresponding Protocol interface.
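A new framework could plug in along these lines. The Protocol name, the evaluate signature, and the sample evaluator are assumptions about the interface, not licit's actual code:

```python
from typing import Protocol

# Sketch of the evaluator Protocol implied above; names and signatures
# are illustrative assumptions, not licit's real frameworks/ interface.
class FrameworkEvaluator(Protocol):
    name: str

    def evaluate(self, evidence: dict) -> dict:
        """Map requirement IDs to compliance statuses."""
        ...

class NistAiRmfEvaluator:
    """Hypothetical future evaluator satisfying the Protocol."""
    name = "NIST AI RMF"

    def evaluate(self, evidence: dict) -> dict:
        # Placeholder logic for illustration only
        return {"GOVERN-1.1": "not-evaluated"}

def run_framework(evaluator: FrameworkEvaluator, evidence: dict) -> dict:
    # Structural typing: any class with .name and .evaluate() is accepted,
    # so no inheritance from a base class is required.
    return evaluator.evaluate(evidence)
```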