Legal and regulatory framework

This page summarizes the legal context of the regulatory frameworks that licit evaluates, with references to the official texts.


EU AI Act — Regulation (EU) 2024/1689

Context

The European Union’s Artificial Intelligence Act is the world’s first comprehensive AI regulation. It establishes harmonized rules for the development, placing on the market, and use of AI systems in the European market.

Application timeline

| Date | Milestone |
| --- | --- |
| August 2024 | Entry into force |
| February 2025 | Prohibited AI practices (Chapter II) |
| August 2025 | General-purpose AI model (GPAI) obligations |
| August 2026 | Majority of obligations, including high-risk systems |
| August 2027 | Full application |

Scope for AI development teams

licit focuses on deployer obligations (Art. 26-27) and on transparency and technical documentation requirements that apply to teams using AI agents to generate code.

Is your team in scope? Yes, if:

Articles evaluated by licit

Art. 9 — Risk management system

“High-risk AI systems shall be subject to a risk management system […] consisting of a continuous iterative process.”

What licit evaluates: Presence of guardrails, quality gates, budget limits, and security scanning tools.
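To illustrate the kind of guardrail this check looks for, here is a minimal, hypothetical budget limiter for an agent run. The `BudgetGuard` class and its method names are assumptions made for this sketch; they are not part of licit or of any specific agent framework:

```python
from dataclasses import dataclass


class BudgetExceeded(RuntimeError):
    """Raised when an agent run would exceed its token budget."""


@dataclass
class BudgetGuard:
    # Hypothetical guardrail: caps per-run token spend for a coding agent.
    max_tokens: int
    used: int = 0

    def record_usage(self, tokens: int) -> None:
        # Refuse further work once the cap would be exceeded.
        if self.used + tokens > self.max_tokens:
            raise BudgetExceeded(f"budget of {self.max_tokens} tokens exceeded")
        self.used += tokens


guard = BudgetGuard(max_tokens=10_000)
guard.record_usage(4_000)
guard.record_usage(5_000)
# A further record_usage(2_000) would raise BudgetExceeded.
```

The point of the pattern is that the limit is enforced in code before the spend happens, which is the kind of continuous, iterative control Art. 9 describes.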

Art. 10 — Data and data governance

“Training, validation, and testing data sets shall be subject to appropriate data governance and management practices.”

What licit evaluates: From the deployer’s perspective, it documents that training-data governance is handled by the model provider.

Art. 12 — Record keeping

“High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (logs).”

What licit evaluates: Git history, audit trail (architect), provenance tracking, OpenTelemetry.
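The record-keeping signals above can be pictured as an append-only, hash-chained event log of agent actions. This is a hedged sketch: the field names and the chaining scheme are assumptions for illustration, not licit’s log format or the OpenTelemetry data model:

```python
import datetime
import hashlib

# Append-only event log: each entry records what the agent did and links to
# the previous entry's hash, so tampering with history is detectable.

def log_event(log: list, action: str, target: str) -> dict:
    prev = log[-1]["hash"] if log else ""
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "target": target,
        "prev": prev,
    }
    entry["hash"] = hashlib.sha256(
        (prev + action + target + entry["ts"]).encode()
    ).hexdigest()
    log.append(entry)
    return entry


log: list = []
log_event(log, "edit_file", "src/main.py")
log_event(log, "run_tests", "pytest")
```

A Git history provides the same property for code changes, which is why it counts as evidence here.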

Art. 13 — Transparency

“High-risk AI systems shall be designed and developed in such a way that their operation is sufficiently transparent.”

What licit evaluates: Generated Annex IV documentation, agent config changelog, requirements traceability.

Art. 14 — Human oversight

“High-risk AI systems shall be designed and developed in such a way that they can be effectively overseen by natural persons.”

What licit evaluates: Human review gates, dry-run, quality gates, rollback, budget limits.
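The dry-run plus approval pattern named above can be sketched as follows. Function and parameter names are hypothetical, chosen only to show the control flow:

```python
# Human oversight gate (in the spirit of Art. 14): changes are previewed in
# dry-run mode and only applied after explicit human approval.

def apply_changes(changes: list[str], approved: bool, dry_run: bool = True) -> list[str]:
    if dry_run:
        print("Planned changes:")
        for change in changes:
            print(f"  - {change}")
        return []  # nothing is applied in dry-run mode
    if not approved:
        raise PermissionError("human approval required before applying changes")
    return changes  # the applied set


planned = ["update Dockerfile", "bump dependency"]
apply_changes(planned, approved=False)  # dry-run: preview only, applies nothing
applied = apply_changes(planned, approved=True, dry_run=False)
```

Defaulting to `dry_run=True` makes the safe path the default path: applying changes requires both an explicit opt-out of dry-run and an explicit approval.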

Art. 26 — Deployer obligations

“Deployers shall use high-risk AI systems in accordance with the instructions of use.”

What licit evaluates: Presence of agent configurations, operations monitoring.

Art. 27 — Fundamental rights impact assessment

“Before putting a high-risk AI system into service, deployers shall carry out an assessment of the impact on fundamental rights.”

What licit generates: A complete FRIA via an interactive 5-step questionnaire (16 questions, with auto-detection of 8 fields).
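Auto-detection of this kind can be pictured as matching technical signals in the repository against questionnaire fields. Everything in this sketch is an assumption: the signal lists, the field names, and the matching rule are invented for illustration and are not licit’s actual heuristics:

```python
# Heuristic FRIA field auto-detection: repository contents suggest (but never
# determine) answers to questionnaire fields. All signals below are
# illustrative placeholders.

SIGNALS = {
    "processes_personal_data": ["users.db", "gdpr", "pii"],
    "has_human_oversight": ["codeowners", "review_required"],
}


def autodetect(repo_files: list[str]) -> dict[str, bool]:
    files = [f.lower() for f in repo_files]
    return {
        field: any(sig in f for sig in signals for f in files)
        for field, signals in SIGNALS.items()
    }


answers = autodetect(["src/users.db", "CODEOWNERS"])
# Suggestions only; a human must confirm each field in the questionnaire.
```

This is also why the limitations at the end of this page stress that auto-detected answers are technical suggestions, never legal determinations.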

Annex IV — Technical documentation

“The technical documentation shall contain […] a general description of the AI system, its intended purpose, development process, testing, and performance.”

What licit generates: Document with 6 sections auto-populated from project metadata.
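Auto-population from metadata can be sketched as rendering a fixed section skeleton and filling in whatever the project provides. The first five section titles follow the Annex IV description quoted above; the sixth (“Risk management measures”) and the function itself are assumptions for this sketch, not licit’s actual output format:

```python
# Illustrative Annex IV-style skeleton, auto-populated from project metadata.

SECTIONS = [
    "General description",
    "Intended purpose",
    "Development process",
    "Testing",
    "Performance",
    "Risk management measures",  # assumed sixth section, not from the quote
]


def render_annex_iv(metadata: dict) -> str:
    lines = ["# Technical documentation (Annex IV)"]
    for section in SECTIONS:
        key = section.lower().replace(" ", "_")
        lines.append(f"## {section}")
        # Sections without metadata stay as explicit gaps to be completed.
        lines.append(metadata.get(key, "_To be completed._"))
    return "\n\n".join(lines)


doc = render_annex_iv({"general_description": "Code-generation agent for CI."})
```

Keeping unfilled sections as visible placeholders, rather than omitting them, makes it obvious which parts of the documentation still need human input.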

Official text


OWASP Agentic Top 10 (2025)

Context

The OWASP Top 10 for Agentic AI Security identifies the ten most critical security risks specific to applications built on AI agents. It was published by the OWASP Foundation in 2025.

It is not a regulation but a widely adopted industry framework of security best practices, analogous to the role the OWASP Top 10 (web) plays as the standard reference for web security.

The 10 risks

| ID | Risk | Description | Development relevance |
| --- | --- | --- | --- |
| ASI01 | Excessive Agency | The agent has more permissions than necessary | Agents that can write to any file |
| ASI02 | Prompt Injection | Malicious inputs that manipulate behavior | Source code with payloads in comments |
| ASI03 | Supply Chain | Vulnerable or compromised dependencies | Agents that install packages without verification |
| ASI04 | Insufficient Logging | Lack of agent action logging | No audit trail of what the agent did |
| ASI05 | Output Handling | Unvalidated output used downstream | Generated code without review reaching production |
| ASI06 | No Human Oversight | Lack of human supervision | Agents that push directly to main |
| ASI07 | Insufficient Sandboxing | Agent without adequate isolation | Access to the entire filesystem and network |
| ASI08 | Resource Consumption | No spending/token limits | Agents without budgets spending uncontrolled |
| ASI09 | Poor Error Handling | Errors that expose state or bypass controls | Agent that crashes leaving corrupted files |
| ASI10 | Data Exposure | Sensitive data leakage | Agent that logs credentials or PII |
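A concrete mitigation for ASI01 (Excessive Agency) and ASI07 (Insufficient Sandboxing) is a filesystem allowlist: the agent may only write inside an approved workspace root. This is a minimal sketch under that assumption; the workspace path is illustrative:

```python
from pathlib import Path

# Workspace allowlist: writes are confined to an approved root directory.
ALLOWED_ROOT = Path("/workspace/project").resolve()


def is_write_allowed(target: str) -> bool:
    # Resolve ".." segments and symlink-free path components before checking
    # containment, so traversal tricks cannot escape the workspace.
    resolved = Path(ALLOWED_ROOT, target).resolve()
    return resolved.is_relative_to(ALLOWED_ROOT)


print(is_write_allowed("src/main.py"))        # True: inside the workspace
print(is_write_allowed("../../etc/passwd"))   # False: escapes the workspace
```

Checking the resolved path, rather than the raw string, is what defeats `..`-based traversal; `Path.is_relative_to` requires Python 3.9 or later.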

Official text


Future frameworks

NIST AI RMF (AI 100-1) — Planned V1

The NIST AI Risk Management Framework defines 4 core functions:

  1. Govern: Establish governance policies and processes
  2. Map: Contextualize AI system risks
  3. Measure: Evaluate and monitor risks
  4. Manage: Prioritize and treat risks

Reference: NIST AI RMF (AI 100-1)

ISO/IEC 42001:2023 — Planned V1

International standard specifying requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS).

Reference: ISO/IEC 42001:2023


  1. licit is not legal advice. Reports are supporting technical evidence, not legal opinions.
  2. licit does not classify risk. Classifying a system as “high risk” (Annex III) requires legal analysis.
  3. licit does not replace the DPO. If your system processes personal data, you need a Data Protection Officer regardless of licit.
  4. Compliance percentages are indicative. A score of “80% compliant” does not mean legal compliance; a single non-compliant article can have regulatory consequences.
  5. Auto-detection is heuristic. Auto-detected answers in the FRIA are suggestions based on technical signals, not legal determinations.