Enterprise Guide

Guide for organizations evaluating or adopting licit as part of their AI governance strategy.


Who this guide is for


Value proposition

The problem

Organizations using AI agents to generate code face three gaps:

  1. Traceability: They cannot distinguish human code from AI code at scale. This creates intellectual property risks, legal liability, and quality management challenges.

  2. Regulation: The EU AI Act requires specific documentation (FRIA, Annex IV), risk management systems, and human oversight. Generating this documentation manually is costly and error-prone.

  3. Agent security: AI agents operate with elevated permissions and can introduce vulnerabilities that traditional security tools don’t cover (OWASP Agentic Top 10).

How licit solves it

| Capability | Enterprise benefit |
|---|---|
| Provenance tracking | Auditable traceability of AI vs human code |
| FRIA generator | Automated Art. 27 regulatory documentation |
| Annex IV generator | Technical documentation auto-populated from metadata |
| EU AI Act evaluator | Article-by-article evaluation with evidence |
| OWASP evaluator | Security posture against 10 agentic risks |
| Gap analyzer | Prioritized gaps with actionable recommendations |
| CI/CD gate | Compliance integrated into the development pipeline |
| Config changelog | Audit trail of agent configuration changes |

Key differentiators

  1. Standalone: No SaaS, databases, or infrastructure required. Everything is local files.
  2. Developer-first: CLI that integrates into existing git/CI/CD workflows.
  3. Language-agnostic: Python, JS/TS, Go, Rust, Java.
  4. Open source (MIT): No vendor lock-in, auditable, extensible.
  5. Multi-framework: EU AI Act + OWASP in one run, with NIST/ISO on the roadmap.

Adoption model

Phase 1: Pilot (1-2 weeks)

Goal: Validate licit on a representative project.

```shell
# A developer installs and tests
pip install licit-ai-cli
cd pilot-project/
licit init
licit trace --stats
licit report --format html -o compliance.html
licit gaps
```

Deliverable: HTML compliance report + gap analysis of the pilot project.

Phase 2: Team (2-4 weeks)

Goal: Integrate into a team’s CI/CD workflow.

  1. Add licit verify to the PR pipeline
  2. Complete the FRIA (licit fria)
  3. Generate Annex IV (licit annex-iv)
  4. Enable connectors if using architect/vigil
  5. Version .licit.yaml and reports
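
Step 1 above can be sketched as a GitHub Actions job (the guide notes that CI/CD templates are included; this hand-written fragment is illustrative only — the workflow name, file path, and action versions are assumptions, and only the `pip install` and `licit verify` commands come from this guide):

```yaml
# .github/workflows/compliance.yml (illustrative path and names)
name: ai-compliance
on: pull_request
jobs:
  licit-verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full git history, required for provenance tracking
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install licit-ai-cli
      - run: licit verify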

Phase 3: Organization (1-3 months)

Goal: Standardize AI compliance across the organization.

  1. Define standard .licit.yaml per project type
  2. Set up dashboards (parsing JSON reports)
  3. Integrate into internal audit process
  4. Designate compliance leads per team
  5. Establish review cadence (monthly/quarterly)
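
For step 2, a dashboard feed can be as simple as aggregating the JSON reports from each project. A minimal sketch follows; the field names (`project`, `compliance_rate`) are assumptions about the report shape, so check them against a report generated with `licit report --format json` before relying on them:

```python
# Aggregate licit JSON compliance reports across projects for a dashboard.
# Field names ("project", "compliance_rate") are assumed, not confirmed.
import json
from pathlib import Path

def load_reports(report_dir: str) -> list[dict]:
    """Load every *.json compliance report found under report_dir."""
    return [json.loads(p.read_text()) for p in Path(report_dir).glob("*.json")]

def summarize(reports: list[dict]) -> dict[str, float]:
    """Map each project name to its overall compliance rate."""
    return {r["project"]: r["compliance_rate"] for r in reports}

# Inline sample with the assumed shape, standing in for real reports:
sample = [
    {"project": "billing-api", "compliance_rate": 0.82},
    {"project": "ml-gateway", "compliance_rate": 0.64},
]
print(summarize(sample))
```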

Technical requirements

| Requirement | Detail |
|---|---|
| Runtime | Python 3.12+ |
| Dependencies | 6 PyPI packages (click, pydantic, structlog, pyyaml, jinja2, cryptography) |
| Storage | ~50 MB per project (provenance store + reports) |
| Network | Not required. Works 100% offline/air-gapped. |
| Permissions | Read-only project access + write to .licit/ |
| CI/CD | GitHub Actions, GitLab CI, Jenkins (templates included) |
| Git | Requires git history for provenance tracking |

Regulatory frameworks covered

EU AI Act — Current coverage

| Obligation | Article | licit status |
|---|---|---|
| Risk management system | Art. 9 | Evaluated (guardrails, quality gates, scanning) |
| Data governance | Art. 10 | Evaluated (deployer perspective) |
| Automatic logging | Art. 12 | Evaluated (git, audit trail, provenance) |
| Transparency | Art. 13 | Evaluated (Annex IV, changelog) |
| Human oversight | Art. 14 | Evaluated (review gates, dry-run, rollback) |
| Deployer obligations | Art. 26 | Evaluated (agent configs, monitoring) |
| Impact assessment (FRIA) | Art. 27 | Interactive generator + --auto mode for CI/CD |
| Technical documentation | Annex IV | Auto-populated generator from metadata |

OWASP Agentic Top 10 — Current coverage

The 10 evaluated risks cover: access control, prompt injection, supply chain, logging, output handling, human oversight, sandboxing, resource consumption, error handling, and data exposure.

Framework roadmap

| Framework | licit version | Status |
|---|---|---|
| EU AI Act | V0 (current) | Implemented |
| OWASP Agentic Top 10 | V0 (current) | Implemented |
| NIST AI RMF | V1 | Planned |
| ISO/IEC 42001 | V1 | Planned |
| SOC 2 AI Controls | V2 | Under evaluation |

Security and data

What data licit generates

| Data | Sensitivity | Recommendation |
|---|---|---|
| Provenance store (JSONL) | Medium (contributor names) | Don't version in public repos |
| FRIA data (JSON) | High (rights assessment) | Don't version; store in a secure system |
| Compliance reports | Low (metadata, not code) | Version; share with audit |
| Annex IV | Low (technical documentation) | Version |
| Config changelog | Low (config changes) | Version |
| Signing key | Critical | Never version; permissions 600 |

Security model


Integration with existing tools

Security tools

| Tool | licit integration | How |
|---|---|---|
| vigil | Native connector | licit connect vigil (reads SARIF) |
| Semgrep | Via SARIF | Generate .sarif and configure sarif_path |
| Snyk | Automatic detection | ProjectDetector detects .snyk |
| CodeQL | Automatic detection | Detects .github/codeql/ |
| Trivy | Automatic detection | Detects Trivy config |

AI tools

| Tool | licit integration | How |
|---|---|---|
| Claude Code | Session reader + git heuristics | Automatic provenance tracking |
| Cursor | Git heuristics + config monitoring | .cursorrules tracking |
| GitHub Copilot | Git heuristics + config monitoring | .github/copilot-instructions.md |
| architect | Native connector | licit connect architect (reads reports/audit/config) |
| GitHub Agents | Config monitoring | AGENTS.md tracking |

GRC platforms (Governance, Risk, Compliance)

licit generates JSON reports that can feed GRC platforms:

```shell
licit report --format json -o compliance-data.json
# → Parse with your GRC platform's API
```

The JSON contains: project metadata, per-framework results, compliance rates, gap analysis.
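
As a sketch of that hand-off, the snippet below pulls high-severity gaps out of a report for ticket creation in a GRC platform. The field names (`frameworks`, `gaps`, `title`, `severity`) are assumptions about the report shape, not a documented schema; verify them against your own compliance-data.json:

```python
# Extract high-severity gaps from a licit JSON report (assumed field names).
import json

def high_priority_gaps(report: dict) -> list[str]:
    """Return the titles of gaps flagged as high severity."""
    return [g["title"] for g in report.get("gaps", []) if g.get("severity") == "high"]

# Inline sample with the assumed shape, standing in for compliance-data.json:
report = {
    "frameworks": {"eu_ai_act": 0.78, "owasp_agentic": 0.90},
    "gaps": [
        {"title": "No FRIA on record", "severity": "high"},
        {"title": "Changelog not versioned", "severity": "low"},
    ],
}
print(high_priority_gaps(report))
```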


Enterprise FAQ

Does licit replace an audit?

No. licit automates the collection of technical evidence and generates regulatory documentation. Final compliance decisions must be reviewed by qualified professionals. licit is a tool for the auditor, not a substitute for the auditor.

Is the licit report legally binding?

No. licit reports are supporting technical evidence. For legal obligations under the EU AI Act, formal legal review of the FRIA and technical documentation is required.

Does it work in air-gapped environments?

Yes. licit requires no internet connection at any time. It only needs Python 3.12+ and its six dependencies installed beforehand.

Does it support monorepos?

licit analyzes a single root directory. For monorepos, run licit init in each project subdirectory, or once at the repository root, depending on the granularity you need.

What is the CI/CD execution cost?

licit verify typically takes 2-5 seconds on medium projects (100-500 commits). licit trace can take 10-30 seconds on large repos (10,000+ commits). It requires no external services or API calls.

How do you handle intellectual property of AI-generated code?

licit takes no legal position on IP. It does record which code was generated by AI (and by which model), which is the evidence any IP analysis by your legal team will need.


Support and community