Legal and regulatory framework
Legal context of the regulatory frameworks evaluated by licit, with references to official texts.
EU AI Act — Regulation (EU) 2024/1689
Context
The European Union’s Artificial Intelligence Act (Regulation (EU) 2024/1689) is the first comprehensive AI regulation worldwide. It establishes harmonized rules for the development, placing on the market, and use of AI systems in the European market.
Enforcement timeline
| Date | Milestone |
|---|---|
| August 2024 | Entry into force |
| February 2025 | Prohibited AI practices (Chapter II) |
| August 2025 | General-purpose AI model (GPAI) obligations |
| August 2026 | Majority of obligations, including high-risk systems |
| August 2027 | Remaining obligations, including high-risk systems embedded in Annex I regulated products |
Scope for development teams using AI
licit focuses on deployer obligations (Art. 26-27) and on the transparency and technical documentation requirements applicable to teams using AI agents to generate code.
Is your team in scope? Yes, if:
- You use AI agents (Claude Code, Cursor, Copilot, etc.) to generate code
- The produced software is deployed in the EU or affects EU citizens
- Your AI system falls under any Annex III category (high risk)
Articles evaluated by licit
Art. 9 — Risk management system
“High-risk AI systems shall be subject to a risk management system […] consisting of a continuous iterative process.”
What licit evaluates: Presence of guardrails, quality gates, budget limits, and security scanning tools.
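One way to check for these Art. 9 signals is to look for well-known configuration files in the repository. The sketch below is illustrative only: the file lists are hypothetical examples of what "guardrails" and "security scanning" evidence might look like, not licit's actual heuristics.

```python
from pathlib import Path

# Hypothetical signal files per category; licit's real detection rules
# are not reproduced here.
RISK_SIGNALS = {
    "security_scanning": [".github/workflows/codeql.yml", ".semgrep.yml"],
    "quality_gates": ["sonar-project.properties", ".pre-commit-config.yaml"],
}

def detect_risk_controls(repo_root: str) -> dict[str, bool]:
    """Return which Art. 9 signal categories have at least one matching file."""
    root = Path(repo_root)
    return {
        category: any((root / rel).is_file() for rel in paths)
        for category, paths in RISK_SIGNALS.items()
    }
```

A file-presence check like this is cheap and deterministic, which is why heuristic scanners commonly start there before inspecting file contents.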
Art. 10 — Data and data governance
“Training, validation and testing data sets shall be subject to appropriate data governance and management practices.”
What licit evaluates: Deployer perspective — documents that the model provider manages training data.
Art. 12 — Record keeping
“High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (logs).”
What licit evaluates: Git history, audit trail (architect), provenance tracking, OpenTelemetry.
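Git history can serve as the automatic event log Art. 12 asks for. A minimal sketch of turning `git log --pretty=format:%H|%an|%aI|%s` output into structured audit entries (the record fields and parsing are illustrative, not licit's actual format):

```python
from dataclasses import dataclass

@dataclass
class AuditEntry:
    commit: str
    author: str
    date: str
    subject: str

def parse_git_log(raw: str) -> list[AuditEntry]:
    """Parse `git log --pretty=format:%H|%an|%aI|%s` output.

    split("|", 3) limits the split so subjects containing "|" stay intact.
    """
    entries = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        commit, author, date, subject = line.split("|", 3)
        entries.append(AuditEntry(commit, author, date, subject))
    return entries
```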
Art. 13 — Transparency
“High-risk AI systems shall be designed and developed in such a way that their operation is sufficiently transparent.”
What licit evaluates: Generated Annex IV documentation, agent config changelog, requirements traceability.
Art. 14 — Human oversight
“High-risk AI systems shall be designed and developed in such a way that they can be effectively overseen by natural persons.”
What licit evaluates: Human review gates, dry-run, quality gates, rollback, budget limits.
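The oversight pattern Art. 14 describes can be sketched as a gate that sits between an agent's proposed changes and their application: dry-run shows the plan without applying it, and a human approval callback must return true before anything is written. This is a generic sketch of the pattern, not licit's implementation.

```python
from typing import Callable

def gated_apply(changes: list[str],
                apply: Callable[[str], None],
                approve: Callable[[list[str]], bool],
                dry_run: bool = True) -> bool:
    """Apply agent-proposed changes only after explicit human approval.

    Returns True only if the changes were actually applied.
    """
    if dry_run:
        return False          # plan only; caller inspects `changes`
    if not approve(changes):  # human review gate (Art. 14)
        return False
    for change in changes:
        apply(change)
    return True
```

Defaulting `dry_run` to `True` makes the safe behavior the default: an agent integration must opt in to applying changes.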
Art. 26 — Deployer obligations
“Deployers shall use high-risk AI systems in accordance with the instructions of use.”
What licit evaluates: Presence of agent configurations, operations monitoring.
Art. 27 — Fundamental rights impact assessment
“Before putting a high-risk AI system into service, deployers shall carry out an assessment of the impact on fundamental rights.”
What licit generates: Complete FRIA with a 5-step interactive questionnaire, 16 questions, and auto-detection of 8 fields.
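Auto-detection of FRIA answers from technical signals can be illustrated as below. The field names, the dependency hints, and the mapping are all hypothetical, chosen only to show the shape of "heuristic suggestion from project metadata"; licit's actual 8 auto-detected fields and signals may differ.

```python
# Hypothetical signal set: payment/auth dependencies hint that the
# project handles personal data.
PII_HINTS = {"stripe", "auth0", "django-allauth"}

def autodetect_fields(dependencies: set[str], has_eu_deploy: bool) -> dict[str, bool]:
    """Suggest FRIA answers from technical signals (suggestions, not determinations)."""
    return {
        "processes_personal_data": bool(dependencies & PII_HINTS),
        "deployed_in_eu": has_eu_deploy,
    }
```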
Annex IV — Technical documentation
“The technical documentation shall contain […] a general description of the AI system, its intended purpose, development process, testing, and performance.”
What licit generates: Document with 6 auto-populated sections from project metadata.
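Generating such a document from metadata can be sketched as a template over a fixed section list. The section titles below paraphrase Annex IV topics and the metadata keys are invented for illustration; the six sections licit actually emits may be named differently.

```python
# Illustrative Annex IV-style sections: (title, metadata key).
SECTIONS = [
    ("General description", "description"),
    ("Intended purpose", "purpose"),
    ("Development process", "development"),
    ("Testing", "testing"),
    ("Performance", "performance"),
    ("Risk management", "risks"),
]

def render_annex_iv(metadata: dict[str, str]) -> str:
    """Render a technical-documentation skeleton, flagging missing sections."""
    lines = ["Annex IV — Technical documentation", ""]
    for title, key in SECTIONS:
        lines.append(f"## {title}")
        lines.append(metadata.get(key, "TODO: not auto-detected"))
        lines.append("")
    return "\n".join(lines)
```

Emitting an explicit `TODO` for every non-populated section keeps the gaps visible to the human who must complete the document.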
Official text
OWASP Agentic Top 10 (2025)
Context
The OWASP Top 10 for Agentic AI Security, published by the OWASP Foundation in 2025, identifies the top 10 security risks specific to applications using AI agents.
It is not a regulation but a widely adopted industry best-practice security framework, much as the OWASP Top 10 (web) is the standard reference for web security.
The 10 risks
| ID | Risk | Description | Relevance for development |
|---|---|---|---|
| ASI01 | Excessive Agency | The agent has more permissions than necessary | Agents that can write to any file |
| ASI02 | Prompt Injection | Malicious inputs that manipulate behavior | Source code with payloads in comments |
| ASI03 | Supply Chain | Vulnerable or compromised dependencies | Agents that install packages without verification |
| ASI04 | Insufficient Logging | Lack of agent action recording | No audit trail of what the agent did |
| ASI05 | Output Handling | Unvalidated output used downstream | AI-generated code without review reaching prod |
| ASI06 | No Human Oversight | Lack of human supervision | Agents that push directly to main |
| ASI07 | Insufficient Sandboxing | Agent without proper isolation | Access to the entire filesystem and network |
| ASI08 | Resource Consumption | No spending/token limits | Agents that spend tokens without a budget cap |
| ASI09 | Poor Error Handling | Errors that expose state or bypass controls | Agent that crashes leaving corrupted files |
| ASI10 | Data Exposure | Sensitive data leakage | Agent that logs credentials or PII |
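Several of these risks reduce to enforcing hard limits in code. As one example, a minimal sketch of an ASI08 mitigation, a token budget that refuses further consumption once a cap is reached (class and method names are my own, not from any specific library):

```python
class BudgetExceeded(RuntimeError):
    """Raised when an agent would exceed its token allowance."""

class TokenBudget:
    """Hard cap on tokens an agent may consume (mitigates ASI08)."""

    def __init__(self, limit: int):
        self.limit = limit
        self.used = 0

    def charge(self, tokens: int) -> None:
        # Reject the charge *before* spending, so the cap is never crossed.
        if self.used + tokens > self.limit:
            raise BudgetExceeded(f"{self.used + tokens} tokens > limit {self.limit}")
        self.used += tokens
```

Raising an exception rather than silently truncating forces the caller to handle the limit explicitly, which also helps with ASI09 (errors that bypass controls).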
Official text
Future frameworks
NIST AI RMF (AI 100-1) — Planned V1
The NIST AI Risk Management Framework defines 4 core functions:
- Govern: Establish governance policies and processes
- Map: Contextualize AI system risks
- Measure: Assess and monitor risks
- Manage: Prioritize and treat risks
Reference: NIST AI RMF (AI 100-1)
ISO/IEC 42001:2023 — Planned V1
International standard that specifies requirements for an AI management system (AIMS). It defines:
- Clauses 4-10: Context, leadership, planning, support, operation, evaluation, improvement
- Annex A: ~35 AI-specific controls
- Annex B: Implementation guidance
Reference: ISO/IEC 42001:2023
Legal limitations of licit
- licit is not legal advice. Reports are supporting technical evidence, not legal opinions.
- licit does not classify risk. Classifying a system as “high risk” (Annex III) requires legal analysis.
- licit does not replace the DPO. If your system processes personal data, you need a Data Protection Officer regardless of licit.
- Compliance percentages are indicative. An “80% compliant” score does not mean legal compliance — a single unfulfilled article can have regulatory consequences.
- Auto-detection is heuristic-based. Auto-detected answers in the FRIA are suggestions based on technical signals, not legal determinations.