Why Compliance in AI-Assisted Development

The use of AI agents in software development introduces specific regulatory risks. licit evaluates a project's posture against established regulatory frameworks.


EU AI Act (Regulation EU 2024/1689)

Scope

The EU AI Act is the first comprehensive regulatory framework for artificial intelligence. It entered into force in August 2024, and its obligations phase in gradually through August 2027.

licit evaluates the articles relevant to development teams using AI agents:

Evaluated Articles

| Article | Name | What licit evaluates |
|---|---|---|
| Art. 9 | Risk Management System | Existence of FRIA, documented risk analysis |
| Art. 10 | Data and Data Governance | Training data traceability and provenance |
| Art. 11 | Technical Documentation | Existence of Annex IV documentation |
| Art. 13 | Transparency | AI use disclosure, provenance tracking |
| Art. 14 | Human Oversight | Human review gates in CI/CD, guardrails |
| Art. 15 | Accuracy, Robustness, and Security | Testing, security tools, SARIF findings |
| Art. 17 | Quality Management System | Quality gates, auditing, documented processes |
| Art. 26 | Obligations of Deployers | Compliant use, monitoring, activity logging |
| Art. 27 | FRIA | Fundamental Rights Impact Assessment |

FRIA — Fundamental Rights Impact Assessment

The FRIA (Fundamental Rights Impact Assessment) is mandatory for high-risk AI systems under Art. 27. licit generates an interactive FRIA in 5 steps:

  1. System description: What it does, what it is used for, who the users are.
  2. Identification of affected rights: Which fundamental rights could be impacted.
  3. Risk assessment: Probability and impact of each risk.
  4. Mitigation measures: What controls are implemented.
  5. Conclusions and recommendations: Final assessment.

Command:

licit fria

Annex IV — Technical Documentation

Annex IV defines the technical documentation required for AI systems. licit generates this documentation, auto-populating it from the collected project evidence.

Command:

licit annex-iv --organization "My Company" --product "My Product"

OWASP Agentic Top 10

Scope

The OWASP Agentic Top 10 identifies the 10 main security risks in applications that use AI agents. licit evaluates the project’s posture against each risk.

Evaluated Risks

| ID | Risk | What licit evaluates |
|---|---|---|
| ASI-01 | Excessive Agency | Guardrails, protected files, blocked commands |
| ASI-02 | Uncontrolled Autonomy | Budget limits, dry-run, human approval |
| ASI-03 | Supply Chain Vulnerabilities | Security tools (Semgrep, Snyk, etc.) |
| ASI-04 | Improper Output Handling | Output validation, quality gates |
| ASI-05 | Insecure Communication | Connector configuration, data protection |
| ASI-06 | Insufficient Monitoring | Audit trail, logging, OpenTelemetry |
| ASI-07 | Identity and Access Mismanagement | Agent permissions, access scope |
| ASI-08 | Inadequate Sandboxing | Execution isolation, rollback capability |
| ASI-09 | Prompt Injection | Input validation, guardrail configuration |
| ASI-10 | Insufficient Logging | Structured logs, session traceability |

Mapping to Evidence

Each OWASP risk maps to collectible evidence:

ASI-01 (Excessive Agency)
  ├── has_guardrails → Are guardrails configured?
  ├── guardrail_count → How many controls exist?
  └── has_human_review_gate → Is there human review?

ASI-02 (Uncontrolled Autonomy)
  ├── has_budget_limits → Are there budget limits?
  ├── has_dry_run → Does dry-run mode exist?
  └── has_rollback → Is there rollback capability?

ASI-06 (Insufficient Monitoring)
  ├── has_audit_trail → Is there an audit trail?
  ├── audit_entry_count → How many entries?
  └── has_otel → Is there OpenTelemetry instrumentation?
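
The mapping above can be represented as a simple lookup table. A minimal sketch — the risk IDs and evidence keys come from the tree above, but the dictionary and helper are illustrative, not licit's internal structure:

```python
# Illustrative mapping of OWASP Agentic risks to collectible evidence keys.
# The keys mirror the tree above; the data structure itself is hypothetical.
EVIDENCE_MAP = {
    "ASI-01": ["has_guardrails", "guardrail_count", "has_human_review_gate"],
    "ASI-02": ["has_budget_limits", "has_dry_run", "has_rollback"],
    "ASI-06": ["has_audit_trail", "audit_entry_count", "has_otel"],
}

def evidence_for(risk_id: str) -> list[str]:
    """Return the evidence keys that support a given risk, or [] if unmapped."""
    return EVIDENCE_MAP.get(risk_id, [])
```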

How licit Evaluates Compliance

Evaluation Process

1. Detect     → ProjectDetector analyzes the project
2. Collect    → EvidenceCollector gathers evidence
3. Evaluate   → Evaluators apply framework requirements
4. Classify   → Each requirement: compliant / partial / non-compliant / n/a
5. Report     → Report with evidence, gaps, and recommendations
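
The evaluate and classify steps can be sketched as a small pipeline. Everything below — the class name, the toy classification rule, the evidence dict — is an illustrative assumption, not licit's actual API:

```python
from dataclasses import dataclass

@dataclass
class RequirementResult:
    """Hypothetical result record; field names are illustrative only."""
    requirement_id: str
    status: str  # compliant / partial / non-compliant

def classify(required: list[str], evidence: dict[str, bool]) -> str:
    """Toy rule: all evidence present -> compliant, some -> partial, none -> non-compliant."""
    hits = sum(bool(evidence.get(key)) for key in required)
    if hits == len(required):
        return "compliant"
    return "partial" if hits else "non-compliant"

def evaluate(requirements: dict[str, list[str]], evidence: dict[str, bool]) -> list[RequirementResult]:
    """Steps 3-4: apply each framework requirement and classify it."""
    return [RequirementResult(rid, classify(keys, evidence))
            for rid, keys in requirements.items()]
```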

Evidence Sources

| Source | What it provides | Status |
|---|---|---|
| Git history | Code provenance, contributors, frequency | Functional (v0.2.0) |
| Session logs | AI agent session logs (Claude Code) | Functional (v0.2.0) |
| Agent configs | Guardrails, models used, code rules | Functional (v0.1.0) |
| CI/CD configs | Human review gates, security steps | Functional (v0.1.0) |
| Architect reports | Audit trail, execution quality | Phase 7 |
| SARIF files | Security findings (vulnerabilities) | Phase 7 |
| .licit/ data | FRIA, Annex IV, changelog, provenance store | Partial (provenance functional) |

Provenance evidence (licit trace) directly feeds the EU AI Act's transparency (Art. 13) and data traceability (Art. 10) articles, as well as the OWASP Agentic Top 10 monitoring controls (ASI-06, ASI-10).

Compliance Levels

| Status | Meaning | Required Action |
|---|---|---|
| compliant | Requirement fully met | None |
| partial | Requirement partially met | Improve evidence or controls |
| non-compliant | Requirement not met | Implement missing controls |
| n/a | Not applicable to the project | None |
| not-evaluated | Not yet evaluated | Run evaluation |
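
Report summary percentages (e.g. "72% compliant (13/18 controls)") follow from these statuses. A minimal sketch of that arithmetic — the counting rule used here (only compliant counts, with n/a and not-evaluated excluded from the total) is an assumption:

```python
def compliance_ratio(statuses: list[str]) -> tuple[int, int, int]:
    """Return (compliant, evaluated, percent); n/a and not-evaluated are excluded."""
    evaluated = [s for s in statuses if s not in ("n/a", "not-evaluated")]
    compliant = sum(s == "compliant" for s in evaluated)
    percent = round(100 * compliant / len(evaluated)) if evaluated else 0
    return compliant, len(evaluated), percent
```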

Compliance Reports

Available Formats

| Format | Recommended Use |
|---|---|
| Markdown | Human review, PRs, documentation |
| JSON | Integration with other tools, dashboards |
| HTML | Presentation to stakeholders, audits |
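
The JSON format lends itself to dashboards; a sketch of consuming such a report. The payload shape below is hypothetical — check the schema your licit version actually emits:

```python
import json

# Hypothetical report payload; real field names may differ.
raw = """{
  "frameworks": {
    "eu-ai-act": {"compliant": 13, "total": 18},
    "owasp-agentic": {"compliant": 6, "total": 10}
  }
}"""

report = json.loads(raw)
for name, stats in report["frameworks"].items():
    pct = round(100 * stats["compliant"] / stats["total"])
    print(f"{name}: {pct}% compliant ({stats['compliant']}/{stats['total']})")
```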

Report Structure

# Compliance Report — My Project
Generated: 2026-03-10

## Summary
- EU AI Act: 72% compliant (13/18 controls)
- OWASP Agentic: 60% compliant (6/10 controls)

## EU AI Act
### Article 9 — Risk Management
Status: PARTIAL
Evidence: FRIA exists but incomplete
Recommendation: Complete FRIA sections 3-5

### Article 14 — Human Oversight
Status: COMPLIANT
Evidence: GitHub Actions requires approval for deployment
...

## Gaps
| Priority | Requirement | Gap | Effort |
|---|---|---|---|
| 1 | ART-9-1 | No risk assessment | Medium |
| 2 | ASI-01 | No guardrails | Low |

CI/CD Gate

licit can act as a compliance gate in CI/CD pipelines:

# .github/workflows/compliance.yml
name: Compliance Check
on: [push, pull_request]

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # Required for git analysis

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install licit
        run: pip install licit-ai-cli

      - name: Run compliance check
        run: licit verify
        # Exit 0 = pass, Exit 1 = fail, Exit 2 = warnings

Exit codes:

| Code | Result | Pipeline |
|---|---|---|
| 0 | All critical requirements met | Pass |
| 1 | Some critical requirement not met | Fail |
| 2 | Some requirement partially met | Warning (configurable) |
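
To make exit code 2 a non-blocking warning in the workflow above, the verify step can branch on the status instead of failing outright. A self-contained sketch — the `handle_gate` helper is illustrative, and `::warning::` is GitHub Actions' annotation syntax:

```shell
#!/bin/sh
# handle_gate STATUS: downgrade exit code 2 (partial compliance) to a warning.
handle_gate() {
  case "$1" in
    0) echo "compliance: pass" ;;
    2) echo "::warning::some requirements only partially met"; return 0 ;;
    *) echo "compliance: fail"; return "$1" ;;
  esac
}

# In CI this would be:  licit verify; handle_gate $?
handle_gate 2    # emits the warning annotation and returns 0
```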

Future Frameworks (V1+)

licit is designed to support additional frameworks:

| Framework | Status | Description |
|---|---|---|
| NIST AI RMF | Planned (V1) | NIST AI Risk Management Framework |
| ISO/IEC 42001 | Planned (V1) | AI Management System |
| SOC 2 AI | Under consideration | AI-specific SOC 2 controls |
| IEEE 7000 | Under consideration | Ethical system design |

The frameworks/ architecture allows adding new frameworks by implementing an evaluator with the corresponding Protocol interface.
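
A new evaluator would then satisfy a `typing.Protocol` along these lines. The method name, signature, and the `NistAiRmfEvaluator` example are assumptions for illustration — match the real interface defined in `frameworks/`:

```python
from typing import Protocol

class FrameworkEvaluator(Protocol):
    """Structural interface a framework evaluator is expected to satisfy (illustrative)."""
    framework_id: str

    def evaluate(self, evidence: dict) -> list[dict]:
        """Return one result dict per requirement (id, status, evidence, ...)."""
        ...

class NistAiRmfEvaluator:
    """Hypothetical future evaluator, shown only to illustrate the pattern;
    it satisfies FrameworkEvaluator structurally, with no inheritance needed."""
    framework_id = "nist-ai-rmf"

    def evaluate(self, evidence: dict) -> list[dict]:
        status = "compliant" if evidence.get("has_risk_register") else "non-compliant"
        return [{"id": "GOVERN-1", "status": status}]
```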