Report interpretation

Available formats

licit report                              # Markdown (default)
licit report --format json -o report.json # Structured JSON
licit report --format html -o report.html # Self-contained HTML

All three formats contain the same information; only the presentation changes.


Report structure

Every report has three levels:

1. Overall Summary    →  Aggregated statistics across all frameworks
2. Per-framework      →  Summary + detail for each evaluated framework
3. Per-requirement    →  Status, evidence, and recommendations per requirement

1. Overall summary

  Overall: [####................] 19.0%
  4/21 controls compliant

Field             Meaning
Compliance rate   compliant / (compliant + partial + non_compliant) * 100; excludes N/A and not-evaluated requirements
Total controls    EU AI Act (11) + OWASP Agentic (10) = 21 controls
Compliant         Requirements with sufficient evidence
Partial           Partial evidence; improvements possible
Non-compliant     No evidence; action required
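
As a worked check, the rate in the example summary can be reproduced with a one-liner. The counts come from the summary above; awk is used only for the arithmetic:

```shell
# 4 compliant controls out of 21 evaluated.
# The denominator counts compliant + partial + non_compliant;
# N/A and not-evaluated requirements are excluded.
compliant=4
evaluated=21   # compliant + partial + non_compliant
rate=$(awk -v c="$compliant" -v t="$evaluated" 'BEGIN { printf "%.1f", c / t * 100 }')
echo "${rate}%"   # 19.0%
```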

How to interpret the rate

Range     Interpretation                                      Action
80-100%   Strong compliance posture                           Maintain and monitor
50-79%    Partial compliance, manageable gaps                 Close priority gaps with licit gaps
20-49%    Weak compliance                                     Urgent remediation plan
0-19%     Minimal compliance; typical of a newly initialized  Run licit fria, licit annex-iv, licit trace
          project

A new project without FRIA or Annex IV will start at ~5-20%. This is normal. Each licit command you run raises the rate.


2. Per-framework section

EU AI Act

  eu-ai-act (2024/1689)
    [##..................] 9.1%
    1 compliant | 4 partial | 6 non-compliant

The 11 evaluated articles cover the obligations of the deployer (the entity using the AI system), not those of the provider (the entity building it):

Article    What it evaluates      How to raise the score
Art. 9     Risk management        Configure guardrails in architect, add vigil/semgrep
Art. 10    Data governance        Always PARTIAL (deployer doesn't train); document provider practices
Art. 12    Automatic logging      licit trace for provenance, enable audit trail
Art. 13    Transparency           licit annex-iv + licit changelog
Art. 14    Human oversight        Configure PR reviews, architect dry-run
Art. 26    Deployer obligations   Have agent configs (CLAUDE.md, .cursorrules)
Art. 27    FRIA                   licit fria
Annex IV   Technical docs         licit annex-iv

OWASP Agentic Top 10

  owasp-agentic (2025)
    [....................] 0.0%
    0 compliant | 5 partial | 5 non-compliant

The 10 risks evaluate the security posture for AI agents:

Risk    What it evaluates       How to raise the score
ASI01   Excessive permissions   Guardrails, quality gates, budget limits
ASI02   Prompt injection        vigil scanning, input guardrails
ASI03   Supply chain            Snyk/Semgrep/CodeQL, config changelog
ASI04   Insufficient logging    licit trace, audit trail, OTel
ASI05   Unvalidated output      Human review gates, quality gates, test suite
ASI06   No human oversight      PR reviews, dry-run, rollback
ASI07   Weak sandboxing         Guardrails, CI/CD isolation
ASI08   Unlimited consumption   Budget limits in architect
ASI09   Poor error handling     Test suite, CI/CD, rollback
ASI10   Data exposure           Protected files, security scanning

3. Per-requirement detail

Each requirement detail includes its status, reference, evidence, and recommendations, rendered per format:

In Markdown

### [FAIL] ART-27-1: Fundamental Rights Impact Assessment (FRIA)

- **Status**: non-compliant
- **Reference**: Article 27(1)
- **Evidence**: No FRIA document found

**Recommendations:**
- Run: licit fria -- to complete the Fundamental Rights Impact Assessment

In JSON

{
  "id": "ART-27-1",
  "name": "Fundamental Rights Impact Assessment (FRIA)",
  "status": "non-compliant",
  "evidence": "No FRIA document found",
  "recommendations": [
    "Run: licit fria -- to complete the Fundamental Rights Impact Assessment"
  ]
}
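
The JSON format lends itself to scripting. A hedged sketch, assuming the report's top level carries a requirements array (the array name is an assumption; the id and status fields match the example object above):

```shell
# Illustrative report with the assumed top-level "requirements" array.
cat > sample-report.json <<'EOF'
{
  "requirements": [
    { "id": "ART-27-1", "status": "non-compliant" },
    { "id": "ART-12-1", "status": "compliant" }
  ]
}
EOF
# Pull the IDs of requirements that still need work.
jq -r '.requirements[] | select(.status == "non-compliant") | .id' sample-report.json
# ART-27-1
```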

In HTML

Status with color badge: green (compliant), amber (partial), red (non-compliant), gray (n/a).


Gap analysis

licit gaps

Gaps are a subset of the report: they only show non-compliant and partial requirements, sorted by severity.

How to read a gap

  1. [X] [ART-27-1] Fundamental Rights Impact Assessment (FRIA)
     Missing: Before putting an AI system into use...
     -> Run: licit fria -- to complete the FRIA
     Tools: licit fria

Element      Meaning
[X]          Non-compliant (high priority); [!] = partial
[ART-27-1]   Requirement ID
Missing:     No evidence; Incomplete: = partial evidence
->           Specific recommendation
Tools:       Specific tools that help

Remediation strategy

  1. [X] (non-compliant) first — these are the ones that would fail licit verify in CI/CD
  2. Within [X], low effort first — quick wins
  3. [!] (partial) after — they improve the rate but don’t block the pipeline
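
This ordering can be sketched against a JSON report with jq. The requirements array name is an assumption; the id and status fields follow the per-requirement JSON example:

```shell
# Illustrative report with the assumed top-level "requirements" array.
cat > sample-report.json <<'EOF'
{
  "requirements": [
    { "id": "ART-13-1", "status": "partial" },
    { "id": "ART-27-1", "status": "non-compliant" },
    { "id": "ART-12-1", "status": "compliant" }
  ]
}
EOF
# Keep only gaps, then sort: non-compliant first (false sorts before true).
jq -r '[.requirements[] | select(.status != "compliant")]
       | sort_by(.status != "non-compliant")
       | .[].id' sample-report.json
# ART-27-1
# ART-13-1
```

Extending the sort key with an effort field would implement the low-effort-first tiebreak, but no such field is confirmed in the JSON schema.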

Estimated effort

Each gap has an implicit effort by category:

Effort   Typical time   Example
low      <1 hour        Run licit trace, licit annex-iv, licit changelog
medium   1-4 hours      Complete licit fria, configure guardrails, add PR reviews
high     1-3 days       Configure vigil/semgrep, implement sandboxing, configure budget limits

Report configuration

In .licit.yaml:

reports:
  output_dir: .licit/reports        # Where reports are saved
  default_format: markdown          # Default format
  include_evidence: true            # Include Evidence field in each requirement
  include_recommendations: true     # Include recommendations

Without evidence

With include_evidence: false, reports omit the evidence line. Useful for executive reports that only need the status.

Without recommendations

With include_recommendations: false, recommendations are omitted. Useful if you already know them and just want the status snapshot.


Comparing reports over time

Generate JSON reports periodically and compare:

# Week 1
licit report --format json -o report-w1.json

# Week 2
licit report --format json -o report-w2.json

# Compare manually
diff <(jq '.overall' report-w1.json) <(jq '.overall' report-w2.json)

Improvement example:

-  "compliance_rate": 4.8
+  "compliance_rate": 33.3
-  "non_compliant": 11
+  "non_compliant": 5

In future versions, licit diff will automate this comparison.
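
Until then, a minimal stand-in can extract and compare the rate directly. The .overall.compliance_rate path follows the diff excerpt above; the sample files here are illustrative:

```shell
# Illustrative reports standing in for two weekly snapshots.
cat > report-w1.json <<'EOF'
{ "overall": { "compliance_rate": 4.8, "non_compliant": 11 } }
EOF
cat > report-w2.json <<'EOF'
{ "overall": { "compliance_rate": 33.3, "non_compliant": 5 } }
EOF
# Pull the overall rate from each report and print the change.
old=$(jq -r '.overall.compliance_rate' report-w1.json)
new=$(jq -r '.overall.compliance_rate' report-w2.json)
echo "compliance_rate: $old -> $new"   # compliance_rate: 4.8 -> 33.3
```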