Fundamental Rights Impact Assessment (FRIA) Template
Structured template for conducting a Fundamental Rights Impact Assessment for high-risk AI systems under EU AI Act Article 27.
This template includes both English and Spanish versions. Scroll down to find "Versión Española".
Disclaimer: This template is provided for guidance purposes only. It does not constitute legal advice. Organisations should consult qualified legal counsel to ensure compliance with applicable laws and regulations.
Template provided by VORLUX AI — vorluxai.com
Fundamental Rights Impact Assessment (FRIA)
EU AI Act — Article 27 Compliance Template
Document Reference: FRIA-[YYYY]-[NNN]
Version: _______________
Date Initiated: _______________
Date Completed: _______________
Next Review Date: _______________
Classification: ☐ Public ☐ Internal ☐ Confidential
Part A: Organisation & System Details
| Field | Entry |
|---|---|
| Organisation Name | |
| Organisation Type | ☐ Public body ☐ Private company ☐ Non-profit |
| Contact Person (DPO/AI Officer) | |
| Email | |
| AI System Name | |
| AI System Version | |
| Deployment Region(s) | |
| Intended Go-Live Date | |
| High-Risk Category (Annex III) | |
Part B: System Description
B.1 Purpose and Functionality
What does the AI system do? (Describe in plain language)
[Provide a clear, jargon-free description of what the system does, its inputs, processing logic, and outputs.]
What specific decision(s) does the system make or support?
- Decision 1: _______________
- Decision 2: _______________
- Decision 3: _______________
Is the AI system the sole decision-maker, or does a human review outputs?
☐ Fully automated (no human review) ☐ Human-in-the-loop (human can override) ☐ Human-on-the-loop (human monitors, intervenes if needed) ☐ Human-in-command (AI is advisory only)
Describe the human oversight mechanism:
B.2 Affected Populations
Who are the individuals or groups directly affected by this system?
| Population Group | Estimated Number | Vulnerability Level |
|---|---|---|
| | | ☐ Low ☐ Medium ☐ High |
| | | ☐ Low ☐ Medium ☐ High |
| | | ☐ Low ☐ Medium ☐ High |
| | | ☐ Low ☐ Medium ☐ High |
Do affected groups include vulnerable persons? ☐ Yes ☐ No ☐ Possibly
If yes, specify: ☐ Minors (under 18) ☐ Elderly persons ☐ Persons with disabilities ☐ Persons with mental health conditions ☐ Persons in economically precarious situations ☐ Asylum seekers / migrants ☐ Other: _______________
B.3 Technical Specification Summary
| Parameter | Detail |
|---|---|
| AI/ML Model Type | |
| Training Data Sources | |
| Data Freshness | |
| Known Accuracy / Error Rate | |
| External APIs or Third-party Models Used | |
| Data Processed (categories) | |
| Special Category Data Involved? | ☐ Yes ☐ No |
Part C: Rights Affected — Screening
Instructions: For each fundamental right, indicate whether the AI system affects it (even indirectly). Mark all that apply and proceed to the risk scoring in Part D for each marked right.
| # | Fundamental Right | Affected? | Potential Impact |
|---|---|---|---|
| 1 | Human dignity (Art. 1 EUCFR) | ☐ Yes ☐ No ☐ Unclear | |
| 2 | Right to life (Art. 2) | ☐ Yes ☐ No ☐ Unclear | |
| 3 | Prohibition of torture / inhuman treatment (Art. 4) | ☐ Yes ☐ No ☐ Unclear | |
| 4 | Prohibition of slavery (Art. 5) | ☐ Yes ☐ No ☐ Unclear | |
| 5 | Right to liberty and security (Art. 6) | ☐ Yes ☐ No ☐ Unclear | |
| 6 | Respect for private and family life (Art. 7) | ☐ Yes ☐ No ☐ Unclear | |
| 7 | Protection of personal data (Art. 8) | ☐ Yes ☐ No ☐ Unclear | |
| 8 | Right to marry and found a family (Art. 9) | ☐ Yes ☐ No ☐ Unclear | |
| 9 | Freedom of thought, conscience, religion (Art. 10) | ☐ Yes ☐ No ☐ Unclear | |
| 10 | Freedom of expression and information (Art. 11) | ☐ Yes ☐ No ☐ Unclear | |
| 11 | Freedom of assembly and association (Art. 12) | ☐ Yes ☐ No ☐ Unclear | |
| 12 | Freedom of the arts and sciences (Art. 13) | ☐ Yes ☐ No ☐ Unclear | |
| 13 | Right to education (Art. 14) | ☐ Yes ☐ No ☐ Unclear | |
| 14 | Freedom to choose an occupation (Art. 15) | ☐ Yes ☐ No ☐ Unclear | |
| 15 | Freedom to conduct a business (Art. 16) | ☐ Yes ☐ No ☐ Unclear | |
| 16 | Right to property (Art. 17) | ☐ Yes ☐ No ☐ Unclear | |
| 17 | Right to asylum (Art. 18) | ☐ Yes ☐ No ☐ Unclear | |
| 18 | Equality before the law (Art. 20) | ☐ Yes ☐ No ☐ Unclear | |
| 19 | Non-discrimination (Art. 21) | ☐ Yes ☐ No ☐ Unclear | |
| 20 | Cultural, religious, linguistic diversity (Art. 22) | ☐ Yes ☐ No ☐ Unclear | |
| 21 | Equality between men and women (Art. 23) | ☐ Yes ☐ No ☐ Unclear | |
| 22 | Rights of the child (Art. 24) | ☐ Yes ☐ No ☐ Unclear | |
| 23 | Rights of the elderly (Art. 25) | ☐ Yes ☐ No ☐ Unclear | |
| 24 | Integration of persons with disabilities (Art. 26) | ☐ Yes ☐ No ☐ Unclear | |
| 25 | Right to an effective remedy / fair trial (Art. 47) | ☐ Yes ☐ No ☐ Unclear | |
| 26 | Presumption of innocence (Art. 48) | ☐ Yes ☐ No ☐ Unclear | |
| 27 | Legality / proportionality of criminal offences (Art. 49) | ☐ Yes ☐ No ☐ Unclear | |
| 28 | Right not to be tried twice (Art. 50) | ☐ Yes ☐ No ☐ Unclear | |
Part D: Risk Analysis
D.1 Scoring Methodology
For each right flagged Yes or Unclear in Part C, complete the scoring table below.
Likelihood Score (L):
| Score | Meaning |
|---|---|
| 1 | Remote — impact unlikely under normal operation |
| 2 | Possible — impact could occur in some circumstances |
| 3 | Likely — impact expected under certain conditions |
| 4 | Certain — impact will occur in normal operation |
Severity Score (S):
| Score | Meaning |
|---|---|
| 1 | Negligible — minimal effect, easily reversible |
| 2 | Moderate — noticeable harm, reversible with effort |
| 3 | Significant — serious harm, difficult to reverse |
| 4 | Severe — irreversible or catastrophic harm |
Risk Score = L × S
| Score Range | Risk Level |
|---|---|
| 1–3 | Low |
| 4–8 | Medium |
| 9–12 | High |
| 13–16 | Critical |
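For teams automating the scoring above (e.g., in a risk register spreadsheet export or internal tooling), the rule and band table can be sketched in a few lines. This is a minimal illustration of the D.1 methodology only; the function and variable names are illustrative, not part of the template.

```python
def risk_level(likelihood: int, severity: int) -> tuple[int, str]:
    """Apply the D.1 rule: Risk Score = L x S, mapped to the band table
    (1-3 Low, 4-8 Medium, 9-12 High, 13-16 Critical)."""
    if not (1 <= likelihood <= 4 and 1 <= severity <= 4):
        raise ValueError("Likelihood and Severity must each be scored 1-4")
    score = likelihood * severity
    if score <= 3:
        level = "Low"
    elif score <= 8:
        level = "Medium"
    elif score <= 12:
        level = "High"
    else:
        level = "Critical"
    return score, level

# Example: a 'Likely' (3) impact with 'Significant' (3) severity
print(risk_level(3, 3))  # (9, 'High')
```

Note that because the score is a product of two 1–4 integers, only the values 1, 2, 3, 4, 6, 8, 9, 12, and 16 can actually occur; the 13–16 band is reached only by a 4 × 4 score.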
D.2 Risk Scoring Table
| Right | Likelihood (1–4) | Severity (1–4) | Risk Score | Risk Level | Notes |
|---|---|---|---|---|---|
| [e.g., Non-discrimination] | | | | | |
D.3 Detailed Risk Narratives
For each Medium, High, or Critical risk, complete the following:
Right affected: _______________ Risk Score: _______________ Risk Level: _______________
How does the AI system create or exacerbate this risk?
Which affected groups are most exposed to this risk?
What is the worst-case scenario?
Is there historical precedent or research evidence for this risk?
(Copy this block for each risk identified)
Part E: Mitigation Measures
E.1 Mitigation Catalogue
For each identified risk, document mitigation measures:
| Risk ID | Risk Description | Mitigation Measure | Type | Owner | Deadline | Status |
|---|---|---|---|---|---|---|
| R-01 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done |
| R-02 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done |
| R-03 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done |
| R-04 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done |
| R-05 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done |
E.2 Residual Risk Assessment
After mitigations, reassess each risk:
| Risk ID | Original Score | Residual Score | Residual Level | Accepted? |
|---|---|---|---|---|
| R-01 | | | | ☐ Yes ☐ Escalate |
| R-02 | | | | ☐ Yes ☐ Escalate |
| R-03 | | | | ☐ Yes ☐ Escalate |
Overall residual risk level for deployment: ☐ Acceptable ☐ Conditional ☐ Unacceptable
If conditional, list conditions for deployment:
Part F: Monitoring Plan
F.1 Ongoing Monitoring Activities
| Monitoring Activity | Frequency | Responsible Person | KPI / Threshold | Escalation Trigger |
|---|---|---|---|---|
| Model performance review | | | | |
| Bias / fairness audit | | | | |
| User complaint review | | | | |
| Data drift check | | | | |
| Incident log review | | | | |
| Regulatory update review | | | | |
F.2 Re-assessment Triggers
This FRIA must be reviewed and updated if any of the following occur:
☐ Significant change to the AI system’s purpose or functionality ☐ Significant change to the data used for training or inference ☐ Material change in the regulatory or legal environment ☐ Incident causing or nearly causing fundamental rights harm ☐ Entry into a new deployment territory or context ☐ Annual review (minimum) ☐ Other: _______________
F.3 Incident Reporting
Internal escalation contact: _______________ DPO / AI Officer contact: _______________ Regulatory reporting obligation? ☐ Yes (specify authority): _______________ ☐ No
Part G: Consultation Record
| Stakeholder / Group Consulted | Date | Method | Key Findings / Concerns |
|---|---|---|---|
| Internal legal team | | | |
| DPO | | | |
| Affected community representative | | | |
| Technical team | | | |
| External expert | | | |
Part H: Sign-off
| Role | Name | Signature | Date |
|---|---|---|---|
| AI System Owner | | | |
| Data Protection Officer | | | |
| Legal Counsel | | | |
| Executive Sponsor | | | |
Final Assessment Decision:
☐ Approved — System may proceed to deployment as described. ☐ Approved with conditions — System may proceed subject to the conditions listed in Part E. ☐ Rejected — Residual risks are unacceptable. System must not deploy until risks are reduced.
Rationale for decision:
Template provided by VORLUX AI | vorluxai.com
Version 1.0 — April 2026 | EU AI Act Article 27 compliance template
This is guidance only, not legal advice. Consult qualified legal counsel for your specific situation.
Versión Española
Aviso: Este modelo se proporciona solo con fines de orientación. No constituye asesoramiento legal. Las organizaciones deben consultar a asesores legales cualificados para garantizar el cumplimiento de las leyes y regulaciones aplicables.
Evaluación del Impacto en los Derechos Fundamentales (FRIA)
Reglamento UE sobre Inteligencia Artificial — Cumplimiento del artículo 27
Referencia del documento: FRIA-[YYYY]-[NNN]
Versión: _______________
Fecha de inicio: _______________
Fecha de finalización: _______________
Fecha de revisión siguiente: _______________
Clasificación: ☐ Público ☐ Interno ☐ Confidencial
Parte A: Detalles de la organización y el sistema
| Campo | Entrada |
|---|---|
| Nombre de la organización | |
| Tipo de organización | ☐ Entidad pública ☐ Empresa privada ☐ Sin fines de lucro |
| Persona de contacto (DPO/Oficial de Inteligencia Artificial) | |
| Correo electrónico | |
| Nombre del sistema de inteligencia artificial | |
| Versión del sistema de inteligencia artificial | |
| Regiones de despliegue | |
| Fecha prevista para el lanzamiento | |
| Categoría de alto riesgo (Anexo III) | |
Parte B: Descripción del sistema
B.1 Propósito y funcionalidad
¿Qué hace el sistema de inteligencia artificial? (Describe en lenguaje llano)
[Proporciona una descripción clara y sin jerga de lo que hace el sistema, sus entradas, lógica de procesamiento y salidas.]
¿Qué decisión(es) específica(s) toma o respalda el sistema?
- Decisión 1: _______________
- Decisión 2: _______________
- Decisión 3: _______________
¿Es el sistema de inteligencia artificial el único tomador de decisiones, o un humano revisa las salidas?
☐ Totalmente automatizado (sin revisión humana) ☐ Humano en el circuito (el humano puede anular la decisión) ☐ Humano supervisando el circuito (el humano monitorea e interviene si es necesario) ☐ Humano al mando (la inteligencia artificial es solo consultiva)
Describe el mecanismo de supervisión humana:
B.2 Poblaciones afectadas
¿Quiénes son los individuos o grupos directamente afectados por este sistema?
| Grupo poblacional | Número estimado | Nivel de vulnerabilidad |
|---|---|---|
| | | ☐ Bajo ☐ Medio ☐ Alto |
| | | ☐ Bajo ☐ Medio ☐ Alto |
| | | ☐ Bajo ☐ Medio ☐ Alto |
| | | ☐ Bajo ☐ Medio ☐ Alto |
¿Las poblaciones afectadas incluyen personas vulnerables?
☐ Sí ☐ No ☐ Posiblemente
Si es así, especificar:
☐ Menores de edad (menos de 18 años) ☐ Personas mayores ☐ Personas con discapacidad ☐ Otras