Guide to completing the FRIA

What is the FRIA?

The FRIA (Fundamental Rights Impact Assessment) is an assessment mandated by Article 27 of the EU AI Act for certain deployers of high-risk AI systems. Before putting such a system into use, the deployer must evaluate its impact on fundamental rights.

licit generates the FRIA through a 5-step interactive questionnaire with 16 questions. Several answers are auto-detected from the project configuration.

licit fria             # Start a new FRIA
licit fria --update    # Update an existing FRIA

The 5 steps

Step 1 — System description

Objective: Document what the AI system is, what it does, and how it is deployed.

1.1 — What is the main purpose of this AI system?

Describe what the system does in one or two concrete sentences.

| Response type | Example |
|---|---|
| Good | "Autonomous code generation and file modification in CI/CD pipelines using Claude Code" |
| Good | "Interactive code assistant for developers using Cursor with Claude Sonnet 4" |
| Bad | "Use AI" |
| Bad | "Development" |

Auto-detection: licit infers the purpose from detected agent configs (CLAUDE.md, .cursorrules, etc.).
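As an illustration, the purpose is usually stated near the top of an agent config. The CLAUDE.md below is a hypothetical sketch of the kind of file licit reads; the exact parsing rules are not documented here:

```markdown
# CLAUDE.md — hypothetical example of an agent config

## Purpose
Autonomous refactoring and test generation for the billing service.

## Constraints
- Never modify .env or deployment manifests.
- All changes go through a pull request.
```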

1.2 — What type of AI technology is used?

| Option | When to select |
|---|---|
| LLM for code generation | The agent generates code but a human reviews and executes it |
| AI coding assistant (interactive) | The developer works alongside the agent (Cursor, Copilot) |
| Autonomous AI agent (headless) | The agent operates without human intervention (Claude Code in CI, architect) |
| Multi-agent system | Multiple agents collaborate (architect + vigil, or custom) |

Auto-detection: licit detects whether an architect config (headless) is present, or only interactive configs (Cursor, Copilot).

1.3 — What AI models/providers are used?

List the specific models. Regulators want to know which models are in use.

| Response type | Example |
|---|---|
| Good | "Claude Sonnet 4 (Anthropic) for code generation, GPT-4.1 (OpenAI) for review" |
| Good | "Claude Opus 4 (Anthropic) via Claude Code" |
| Bad | "AI" |

Auto-detection: licit reads the architect config to detect the configured model.

1.4 — How many people/systems are affected?

This determines the impact scope:

| Option | Regulatory implication |
|---|---|
| Internal team (<50) | Low risk — impact limited to developers |
| Internal org (50-500) | Medium risk — produced software affects the organization |
| External users (500-10K) | High risk — end users depend on the produced software |
| Large-scale (10K+) | Very high risk — justifies exhaustive mitigation measures |

1.5 — Is human review required?

| Option | What it implies for compliance |
|---|---|
| Yes, all | Strong Art. 14 compliance. Document the review process |
| Partially | Document what is reviewed and what is not, and why |
| No | High risk. You must justify why this is acceptable and what alternative mitigations exist |

Auto-detection: licit checks whether CI/CD with GitHub Actions is present (which implies PR reviews) or an architect config with dry-run enabled.
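As a hedged illustration, the kind of workflow this detection looks for might resemble the following minimal sketch. The file name and steps are assumptions for the example, not licit requirements:

```yaml
# .github/workflows/ci.yml — hypothetical minimal pipeline
name: ci
on:
  pull_request:   # changes reach main only via reviewed PRs

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest   # quality gate: tests must pass before merge
```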


Step 2 — Fundamental rights identification

Objective: Identify which fundamental rights could be affected by the AI system.

2.1 — Does the system process personal data?

Consider whether the source code or configurations contain:

| Option | When |
|---|---|
| Yes | The code handles user data (forms, databases, APIs with PII) |
| No | Infrastructure code, libraries, internal tools without user data |
| Possibly | It is unclear — the AI agent could generate code that processes data |

2.2 — Could it affect employment or working conditions?

| Option | When |
|---|---|
| No — only generates code | The agent's output is code reviewed by humans |
| Possibly — productivity metrics | If AI code metrics are used to evaluate developer performance |
| Yes — hiring decisions | If the system influences HR decisions |

2.3 — Could vulnerabilities affect users’ rights?

| Option | When |
|---|---|
| Low risk — internal tools | The produced software is for internal use only |
| Medium risk — user-facing | The software has users but does not handle critical data |
| High risk — financial/health/identity | The software handles money, health, or personal identity |

2.4 — Could it introduce discriminatory behavior?

| Option | When |
|---|---|
| No — backend/infra | The code is purely technical |
| Possibly | The code interacts with decisions that affect people (recommendations, filters) |
| Yes | The code implements decision algorithms (scoring, classification, selection) |

Step 3 — Impact assessment

Objective: Assess the likelihood and severity of impact on the identified rights.

3.1 — Overall risk level

| Option | Criterion |
|---|---|
| Minimal | Development tool with full human oversight |
| Limited | Some automation but with review gates |
| High | Autonomous operation with limited oversight |
| Unacceptable | Fully autonomous without safeguards — not acceptable under the EU AI Act |

If you select “Unacceptable”, you must implement safeguards before proceeding.

3.2 — Maximum potential impact

Describe the worst realistic scenario. Regulators want to see that you have thought about this.

| Response type | Example |
|---|---|
| Good | "Security vulnerability in generated code could expose data of 10K users. Estimated financial impact: 50K-200K EUR. Detection time: <24h via CI/CD" |
| Bad | "Nothing bad can happen" |

3.3 — Detection and reversal speed

| Option | Implication |
|---|---|
| Immediately — automated tests | Strong. Document your test suite and coverage |
| Hours — CI/CD | Acceptable. Document your pipeline |
| Days — manual review | Weak. Consider automating |
| Unknown | Unacceptable. Implement detection before continuing |

Step 4 — Mitigation measures

Objective: Document existing and planned measures to mitigate risks.

4.1 — Guardrails

Auto-detection: licit reads the architect config to detect guardrails.

Document what restrictions the AI agent has:

| Measure | Example |
|---|---|
| Protected files | README.md, .env, Dockerfile — the agent cannot modify them |
| Blocked commands | `rm -rf /`, `DROP TABLE`, `curl …` |
| Budget limits | Maximum $5 USD per execution |
| Quality gates | Tests must pass before commit |
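The exact schema of the architect config is not shown in this guide. As a sketch only, guardrails like those above could be declared along these lines — every key name here is hypothetical:

```yaml
# hypothetical architect guardrails — key names are illustrative only
guardrails:
  protected_files: [README.md, .env, Dockerfile]
  blocked_commands: ["rm -rf /", "DROP TABLE"]
  budget_usd_per_run: 5
  quality_gates:
    - tests_pass   # block commits until the test suite is green
```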

4.2 — Security scanning

Auto-detection: licit detects vigil, semgrep, snyk, codeql, trivy.

4.3 — Testing

Auto-detection: licit detects pytest, jest, vitest, go test.

4.4 — Audit trail

Auto-detection: licit checks whether .licit/provenance.jsonl and .architect/reports/ exist.
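For intuition, an append-only provenance log typically stores one JSON object per line. The record below is a hypothetical sketch of an entry in .licit/provenance.jsonl; the real field names may differ:

```json
{"timestamp": "2025-01-15T10:32:00Z", "agent": "architect", "model": "claude-sonnet-4", "action": "edit", "file": "src/billing.py", "commit": "abc1234", "reviewed_by": "jdoe"}
```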

4.5 — Additional measures

Free-form field for measures not covered by the previous questions.


Step 5 — Monitoring and review

Objective: Define continuous monitoring and periodic review processes.

5.1 — Review frequency

| Option | When it is appropriate |
|---|---|
| Quarterly | High risk, frequent changes to the system |
| Semi-annually | Medium risk, stable system |
| Annually | Low risk, no significant changes |
| On significant changes | When the model, scope, or guardrails change |

Recommendation: combine “On significant changes” with a minimum frequency (at least annually).

5.2 — Person responsible for compliance

Designate a specific person with name and role. Regulators want a point of contact.

5.3 — Incident process

Describe what happens when AI-generated code causes a problem:

  1. How is it detected?
  2. Who is notified?
  3. How is it reverted?
  4. How is it documented?

After completing the FRIA

Generated files

| File | Content |
|---|---|
| .licit/fria-data.json | Raw answers (JSON). Do not version — may contain sensitive data |
| .licit/fria-report.md | Formatted Markdown report. Do version — this is the regulatory document |
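Since fria-data.json should stay out of version control while fria-report.md should be committed, a matching .gitignore fragment could look like this:

```gitignore
# keep raw FRIA answers out of version control (may contain sensitive data)
.licit/fria-data.json
# the report (.licit/fria-report.md) stays versioned — do not ignore it
```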

Updating an existing FRIA

licit fria --update

Pre-loads previous answers and allows you to modify them.

When to update