FAQ — Frequently asked questions

Installation and setup

What Python version do I need?

Python 3.12 or higher. licit uses StrEnum and other features that require 3.12+.

python3.12 --version
# If you don't have it:
# Ubuntu/Debian: sudo apt install python3.12
# macOS: brew install python@3.12
# Windows: download from python.org

“pip install” uses the wrong Python version

If you are on a system with multiple Python versions, make sure you use the correct pip:

# Incorrect (may use Python 3.10 or another version)
pip install licit-ai-cli

# Correct
python3.12 -m pip install licit-ai-cli

Error: ModuleNotFoundError: No module named 'licit'

The installation did not complete correctly. Reinstall (from the repository root, for a development install):

python3.12 -m pip install -e ".[dev]"

Verify that the entry point works:

python3.12 -m licit --version

Configuration

Where does the .licit.yaml file go?

In the root of your project, alongside pyproject.toml or package.json:

my-project/
├── .licit.yaml      ← here
├── pyproject.toml
├── src/
└── tests/

Can I use a different name for the config file?

Yes, use the --config option:

licit --config my-config.yaml status

What happens if .licit.yaml has syntax errors?

licit logs a warning and uses the default values. It does not fail with an error.

$ licit --verbose status
# Warning: Failed to parse .licit.yaml: ...
# Using default configuration

Should I commit .licit.yaml?

Yes. It is the team’s shared configuration and should be committed.

Do not commit .licit/provenance.jsonl or .licit/fria-data.json (sensitive data).
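
A matching .gitignore fragment might look like this (assuming the default .licit/ layout described in this FAQ; .licit.yaml itself stays committed):

```gitignore
# Local evidence stores: do not commit (sensitive data)
.licit/provenance.jsonl
.licit/fria-data.json
```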


Commands

Which commands are functional?

All 10 commands are functional as of v0.6.0:

| Command   | Phase | Current status                            |
|-----------|-------|-------------------------------------------|
| init      | 1     | Functional (v0.1.0)                       |
| status    | 1     | Functional (v0.1.0)                       |
| connect   | 1     | Functional (v0.1.0)                       |
| trace     | 2     | Functional (v0.2.0)                       |
| changelog | 3     | Functional (v0.3.0)                       |
| fria      | 4     | Functional (v0.4.0)                       |
| annex-iv  | 4     | Functional (v0.4.0)                       |
| verify    | 4-6   | Functional (v0.5.0 — EU AI Act + OWASP)   |
| report    | 6     | Functional (v0.6.0 — Markdown, JSON, HTML) |
| gaps      | 6     | Functional (v0.6.0 — with tool suggestions) |

licit init does not detect my language/framework

ProjectDetector looks for specific files:

| Language   | Files searched                   |
|------------|----------------------------------|
| Python     | pyproject.toml, requirements.txt |
| JavaScript | package.json                     |
| TypeScript | tsconfig.json                    |
| Go         | go.mod                           |
| Rust       | Cargo.toml                       |
| Java       | pom.xml, build.gradle            |

| Framework | How it is detected                   |
|-----------|--------------------------------------|
| FastAPI   | fastapi in pyproject.toml dependencies |
| Flask     | flask in pyproject.toml dependencies |
| Django    | django in pyproject.toml dependencies |
| React     | react in package.json dependencies   |
| Next.js   | next in package.json dependencies    |
| Express   | express in package.json dependencies |

If your language or framework is not supported, open an issue.

licit status shows “not collected” for provenance

Run licit trace to analyze the git history and generate provenance data. After running trace, licit status will show the provenance statistics.

licit trace --stats     # Analyze and show statistics
licit status            # Now shows provenance data

Testing and development

Tests fail with structlog errors

Make sure tests/conftest.py configures structlog correctly:

import logging
import structlog

structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(logging.CRITICAL),
    logger_factory=structlog.WriteLoggerFactory(),
    cache_logger_on_first_use=False,
)

The most common error is ValueError: I/O operation on closed file when Click’s CliRunner closes stderr and structlog tries to write to it. The solution is to use WriteLoggerFactory() (not PrintLoggerFactory(file=sys.stderr)).

mypy shows errors on future module imports

Imports of future phase modules (like licit.reports.unified) use # type: ignore[import-not-found]:

from licit.reports.unified import (  # type: ignore[import-not-found]
    UnifiedReportGenerator,
)

The type: ignore comment must be on the from line, not on the imported name lines. If ruff reformats the import to multiline, verify that the comment stays on the correct line.

Note: Modules from Phases 2-5 (provenance, changelog, eu_ai_act, owasp_agentic) are already implemented and imported directly without type: ignore. Only reports/ (Phase 6) uses lazy stubs.

ruff reports UP042 on my enums

Use StrEnum instead of (str, Enum):

from enum import Enum

# Flagged by ruff UP042:
class MyEnum(str, Enum):  # ← error
    VALUE = "value"

# Correct:
from enum import StrEnum

class MyEnum(StrEnum):    # ← correct
    VALUE = "value"

How do I run a single test?

# By name
python3.12 -m pytest tests/test_cli.py::TestCLIHelp::test_help -q

# By pattern (keyword)
python3.12 -m pytest tests/ -q -k "test_init"

# By file
python3.12 -m pytest tests/test_core/test_project.py -q

Compliance

Does licit replace a lawyer/compliance consultant?

No. licit is an assistance tool that automates evidence collection and generates reports. Final compliance decisions should be reviewed by qualified professionals.

Is the licit report sufficient for an EU AI Act audit?

The licit report is a starting point. It provides structured technical evidence that can complement compliance documentation. For a formal audit, you will need:

  1. Legal review of the FRIA
  2. Additional organizational documentation
  3. Evidence of risk management processes
  4. Team training records

What if my project is not “high risk” under the EU AI Act?

If your AI system does not fall into the high-risk category, many EU AI Act requirements do not apply. licit allows marking requirements as n/a (not applicable). However, it is good practice to comply with the transparency (Art. 13) and human oversight (Art. 14) requirements regardless of the risk classification.

Is the OWASP Agentic Top 10 mandatory?

It is not regulation; it is a security best practices framework. However, following the OWASP Agentic Top 10 recommendations significantly reduces security risks when using AI agents in development.


Security

Does licit send data to any server?

No. licit operates 100% locally. There is no telemetry, analytics, or communication with external servers.

Can I use licit in an air-gapped environment?

Yes. licit does not require an internet connection to function. You only need to install the dependencies beforehand.

Is it safe to commit the generated reports?

Reports in .licit/reports/ are generally safe to commit. They contain compliance evaluations, not sensitive data. However, review the content before pushing to a public repo.

The files you should not commit:

  - .licit/provenance.jsonl (sensitive data)
  - .licit/fria-data.json (sensitive data)


Known issues (v0.6.0)

| Issue | Status | Workaround |
|-------|--------|------------|
| Does not detect Go/Rust/Java frameworks | Limitation | Detects the language but not specific frameworks |
| Provenance heuristics may produce false positives | Limitation | Adjust confidence_threshold in config |
| Session reader only supports Claude Code | Limitation | More readers in future phases |
| Pipe \| in organization name breaks Markdown table in Annex IV | Limitation | Avoid pipe in organization names |
| Markdown differ only supports ATX headings (#) | Limitation | Setext headings (===/---) are not detected |
| FRIA run_interactive() requires a terminal | Limitation | Cannot run in batch mode; use --update with pre-generated data |
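
For the false-positive workaround, the config knob might look like this (the key name comes from the table above; its exact placement in the .licit.yaml schema is an assumption):

```yaml
# .licit.yaml — raise the threshold so fewer low-confidence
# provenance matches are reported
confidence_threshold: 0.9
```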

Glossary

| Term | Definition |
|------|------------|
| Provenance | Origin and authorship of code (human vs AI) |
| FRIA | Fundamental Rights Impact Assessment (Art. 27 EU AI Act) |
| Annex IV | Technical documentation required by the EU AI Act |
| SARIF | Static Analysis Results Interchange Format — standard format for security findings |
| SBOM | Software Bill of Materials — component inventory |
| Guardrail | Control that limits the behavior of an AI agent |
| Human review gate | Checkpoint that requires human approval |
| Attestation | Cryptographic verification of data integrity |
| Compliance rate | Percentage of requirements met vs total evaluable |
| Gap | Difference between the current state and a compliance requirement |