
Fundamental Rights Impact Assessment (FRIA) Template

A structured template for conducting a Fundamental Rights Impact Assessment for high-risk AI systems under Article 27 of the EU AI Act.

Bilingual / Bilingüe

This template includes both English and Spanish versions. Scroll down to find "Versión Española".

Disclaimer: This template is provided for guidance purposes only. It does not constitute legal advice. Organisations should consult qualified legal counsel to ensure compliance with applicable laws and regulations.

Template provided by VORLUX AI — vorluxai.com


Fundamental Rights Impact Assessment (FRIA)

EU AI Act — Article 27 Compliance Template

Document Reference: FRIA-[YYYY]-[NNN]
Version: _______________
Date Initiated: _______________
Date Completed: _______________
Next Review Date: _______________
Classification: ☐ Public ☐ Internal ☐ Confidential


Part A: Organisation & System Details

Field | Entry
Organisation Name | _______________
Organisation Type | ☐ Public body ☐ Private company ☐ Non-profit
Contact Person (DPO / AI Officer) | _______________
Email | _______________
AI System Name | _______________
AI System Version | _______________
Deployment Region(s) | _______________
Intended Go-Live Date | _______________
High-Risk Category (Annex III) | _______________

Part B: System Description

B.1 Purpose and Functionality

What does the AI system do? (Describe in plain language)

[Provide a clear, jargon-free description of what the system does, its inputs, processing logic, and outputs.]




What specific decision(s) does the system make or support?

  • Decision 1: _______________
  • Decision 2: _______________
  • Decision 3: _______________

Is the AI system the sole decision-maker, or does a human review outputs?

☐ Fully automated (no human review)
☐ Human-in-the-loop (human can override)
☐ Human-on-the-loop (human monitors, intervenes if needed)
☐ Human-in-command (AI is advisory only)

Describe the human oversight mechanism:



B.2 Affected Populations

Who are the individuals or groups directly affected by this system?

Population Group | Estimated Number | Vulnerability Level
_______________ | _______________ | ☐ Low ☐ Medium ☐ High
_______________ | _______________ | ☐ Low ☐ Medium ☐ High
_______________ | _______________ | ☐ Low ☐ Medium ☐ High
_______________ | _______________ | ☐ Low ☐ Medium ☐ High

Do affected groups include vulnerable persons? ☐ Yes ☐ No ☐ Possibly

If yes, specify:

☐ Minors (under 18)
☐ Elderly persons
☐ Persons with disabilities
☐ Persons with mental health conditions
☐ Persons in economically precarious situations
☐ Asylum seekers / migrants
☐ Other: _______________


B.3 Technical Specification Summary

Parameter | Detail
AI/ML Model Type | _______________
Training Data Sources | _______________
Data Freshness | _______________
Known Accuracy / Error Rate | _______________
External APIs or Third-party Models Used | _______________
Data Categories Processed | _______________
Special Category Data Involved? | ☐ Yes ☐ No

Part C: Rights Affected — Screening

Instructions: For each fundamental right, indicate whether the AI system affects it (even indirectly). Mark all that apply and proceed to the risk scoring in Part D for each marked right.

# | Fundamental Right | Affected? | Potential Impact
1 | Human dignity (Art. 1 EUCFR) | ☐ Yes ☐ No ☐ Unclear |
2 | Right to life (Art. 2) | ☐ Yes ☐ No ☐ Unclear |
3 | Prohibition of torture / inhuman treatment (Art. 4) | ☐ Yes ☐ No ☐ Unclear |
4 | Prohibition of slavery (Art. 5) | ☐ Yes ☐ No ☐ Unclear |
5 | Right to liberty and security (Art. 6) | ☐ Yes ☐ No ☐ Unclear |
6 | Respect for private and family life (Art. 7) | ☐ Yes ☐ No ☐ Unclear |
7 | Protection of personal data (Art. 8) | ☐ Yes ☐ No ☐ Unclear |
8 | Right to marry and found a family (Art. 9) | ☐ Yes ☐ No ☐ Unclear |
9 | Freedom of thought, conscience, religion (Art. 10) | ☐ Yes ☐ No ☐ Unclear |
10 | Freedom of expression and information (Art. 11) | ☐ Yes ☐ No ☐ Unclear |
11 | Freedom of assembly and association (Art. 12) | ☐ Yes ☐ No ☐ Unclear |
12 | Freedom of the arts and sciences (Art. 13) | ☐ Yes ☐ No ☐ Unclear |
13 | Right to education (Art. 14) | ☐ Yes ☐ No ☐ Unclear |
14 | Freedom to choose an occupation (Art. 15) | ☐ Yes ☐ No ☐ Unclear |
15 | Freedom to conduct a business (Art. 16) | ☐ Yes ☐ No ☐ Unclear |
16 | Right to property (Art. 17) | ☐ Yes ☐ No ☐ Unclear |
17 | Right to asylum (Art. 18) | ☐ Yes ☐ No ☐ Unclear |
18 | Equality before the law (Art. 20) | ☐ Yes ☐ No ☐ Unclear |
19 | Non-discrimination (Art. 21) | ☐ Yes ☐ No ☐ Unclear |
20 | Cultural, religious, linguistic diversity (Art. 22) | ☐ Yes ☐ No ☐ Unclear |
21 | Equality between women and men (Art. 23) | ☐ Yes ☐ No ☐ Unclear |
22 | Rights of the child (Art. 24) | ☐ Yes ☐ No ☐ Unclear |
23 | Rights of the elderly (Art. 25) | ☐ Yes ☐ No ☐ Unclear |
24 | Integration of persons with disabilities (Art. 26) | ☐ Yes ☐ No ☐ Unclear |
25 | Right to an effective remedy / fair trial (Art. 47) | ☐ Yes ☐ No ☐ Unclear |
26 | Presumption of innocence (Art. 48) | ☐ Yes ☐ No ☐ Unclear |
27 | Legality / proportionality of criminal offences (Art. 49) | ☐ Yes ☐ No ☐ Unclear |
28 | Right not to be tried twice (Art. 50) | ☐ Yes ☐ No ☐ Unclear |

Part D: Risk Analysis

D.1 Scoring Methodology

For each right flagged Yes or Unclear in Part C, complete the scoring table below.

Likelihood Score (L):

Score | Meaning
1 | Remote — impact unlikely under normal operation
2 | Possible — impact could occur in some circumstances
3 | Likely — impact expected under certain conditions
4 | Certain — impact will occur in normal operation

Severity Score (S):

Score | Meaning
1 | Negligible — minimal effect, easily reversible
2 | Moderate — noticeable harm, reversible with effort
3 | Significant — serious harm, difficult to reverse
4 | Severe — irreversible or catastrophic harm

Risk Score = L × S

Score Range | Risk Level
1–3 | Low
4–8 | Medium
9–12 | High
13–16 | Critical
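
Where many rights are tracked in a register or spreadsheet export, the scoring rule above is simple enough to automate. The Python sketch below is illustrative only; the function names are ours, and the band boundaries merely restate the tables above rather than anything prescribed by the AI Act.

```python
def risk_score(likelihood: int, severity: int) -> int:
    """Risk Score = L x S, with L and S each on the 1-4 scales above."""
    if not (1 <= likelihood <= 4 and 1 <= severity <= 4):
        raise ValueError("likelihood and severity must be between 1 and 4")
    return likelihood * severity

def risk_band(score: int) -> str:
    """Map a risk score (1-16) to the Part D.1 risk level."""
    if not 1 <= score <= 16:
        raise ValueError("score must be between 1 and 16")
    if score <= 3:
        return "Low"
    if score <= 8:
        return "Medium"
    if score <= 12:
        return "High"
    return "Critical"  # 13-16

# Example: a non-discrimination risk rated Likely (L=3) and Significant (S=3)
score = risk_score(3, 3)
print(score, risk_band(score))  # 9 High
```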

D.2 Risk Scoring Table

Right | Likelihood (1–4) | Severity (1–4) | Risk Score | Risk Level | Notes
[e.g., Non-discrimination] | | | | |

D.3 Detailed Risk Narratives

For each Medium, High, or Critical risk, complete the following:


Right affected: _______________
Risk Score: _______________
Risk Level: _______________

How does the AI system create or exacerbate this risk?


Which affected groups are most exposed to this risk?


What is the worst-case scenario?


Is there historical precedent or research evidence for this risk?


(Copy this block for each risk identified)


Part E: Mitigation Measures

E.1 Mitigation Catalogue

For each identified risk, document mitigation measures:

Risk ID | Risk Description | Mitigation Measure | Type | Owner | Deadline | Status
R-01 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done
R-02 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done
R-03 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done
R-04 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done
R-05 | | | ☐ Technical ☐ Organisational ☐ Legal | | | ☐ Planned ☐ In progress ☐ Done

E.2 Residual Risk Assessment

After mitigations, reassess each risk:

Risk ID | Original Score | Residual Score | Residual Level | Accepted?
R-01 | | | | ☐ Yes ☐ Escalate
R-02 | | | | ☐ Yes ☐ Escalate
R-03 | | | | ☐ Yes ☐ Escalate
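
The Accepted? column can be derived mechanically from the residual score. A minimal sketch, assuming (as an illustrative policy, not a requirement of the Act) that only Low or Medium residual risk may be accepted:

```python
def residual_accepted(residual_score: int) -> bool:
    """Illustrative acceptance rule: accept residual risks scoring 8 or
    below (Low/Medium bands); escalate 9 and above (High/Critical).
    Replace with your organisation's own acceptance policy."""
    return residual_score <= 8

# R-01: original score 12 (High), residual score 4 (Medium) after mitigation
print(residual_accepted(4))  # True  -> tick "Yes"
print(residual_accepted(9))  # False -> tick "Escalate"
```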

Overall residual risk level for deployment: ☐ Acceptable ☐ Conditional ☐ Unacceptable

If conditional, list conditions for deployment:



Part F: Monitoring Plan

F.1 Ongoing Monitoring Activities

Monitoring Activity | Frequency | Responsible Person | KPI / Threshold | Escalation Trigger
Model performance review | | | |
Bias / fairness audit | | | |
User complaint review | | | |
Data drift check | | | |
Incident log review | | | |
Regulatory update review | | | |
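
Teams that automate these checks often encode the table above as machine-readable configuration so that breaching a KPI threshold raises an escalation. The sketch below is one possible shape for such a configuration; every KPI name, threshold, frequency, and contact is a placeholder for illustration, not a value mandated by this template.

```python
# Placeholder monitoring plan mirroring the F.1 columns. "bound" states
# whether the KPI must stay above ("min") or below ("max") its threshold.
MONITORING_PLAN = [
    {"activity": "Model performance review", "frequency": "monthly",
     "owner": "ml-ops@example.org", "kpi": "accuracy",
     "bound": "min", "threshold": 0.90, "escalate_to": "AI Officer"},
    {"activity": "Bias / fairness audit", "frequency": "quarterly",
     "owner": "compliance@example.org", "kpi": "demographic_parity_gap",
     "bound": "max", "threshold": 0.05, "escalate_to": "DPO"},
]

def breached(entry: dict, observed: float) -> bool:
    """True if the observed KPI value crosses its escalation trigger."""
    if entry["bound"] == "min":
        return observed < entry["threshold"]
    return observed > entry["threshold"]

# Example: a quarterly audit measures a parity gap of 0.08 -> escalate
audit = MONITORING_PLAN[1]
if breached(audit, 0.08):
    print(f"Escalate '{audit['activity']}' to {audit['escalate_to']}")
```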

F.2 Re-assessment Triggers

This FRIA must be reviewed and updated if any of the following occur:

☐ Significant change to the AI system’s purpose or functionality
☐ Significant change to the data used for training or inference
☐ Material change in the regulatory or legal environment
☐ Incident causing or nearly causing fundamental rights harm
☐ Entry into a new deployment territory or context
☐ Annual review (minimum)
☐ Other: _______________

F.3 Incident Reporting

Internal escalation contact: _______________
DPO / AI Officer contact: _______________
Regulatory reporting obligation? ☐ Yes (specify authority): _______________ ☐ No


Part G: Consultation Record

Stakeholder / Group Consulted | Date | Method | Key Findings / Concerns
Internal legal team | | |
DPO | | |
Affected community representative | | |
Technical team | | |
External expert | | |

Part H: Sign-off

Role | Name | Signature | Date
AI System Owner | | |
Data Protection Officer | | |
Legal Counsel | | |
Executive Sponsor | | |

Final Assessment Decision:

☐ Approved — System may proceed to deployment as described.
☐ Approved with conditions — System may proceed subject to the conditions listed in Part E.
☐ Rejected — Residual risks are unacceptable. The system must not be deployed until risks are reduced.

Rationale for decision:




Template provided by VORLUX AI | vorluxai.com
Version 1.0 — April 2026 | EU AI Act Article 27 compliance template
This is guidance only, not legal advice. Consult qualified legal counsel for your specific situation.


Versión Española

Aviso: Esta plantilla se proporciona únicamente con fines orientativos. No constituye asesoramiento jurídico. Las organizaciones deben consultar a asesores jurídicos cualificados para garantizar el cumplimiento de las leyes y regulaciones aplicables.


Evaluación del Impacto en los Derechos Fundamentales (FRIA)

Reglamento UE sobre Inteligencia Artificial — Cumplimiento del artículo 27

Referencia del documento: FRIA-[YYYY]-[NNN]

Versión: _______________

Fecha de inicio: _______________

Fecha de finalización: _______________

Fecha de revisión siguiente: _______________

Clasificación: ☐ Público ☐ Interno ☐ Confidencial


Parte A: Detalles de la organización y el sistema

Campo | Entrada
Nombre de la organización | _______________
Tipo de organización | ☐ Entidad pública ☐ Empresa privada ☐ Sin fines de lucro
Persona de contacto (DPO / Oficial de Inteligencia Artificial) | _______________
Correo electrónico | _______________
Nombre del sistema de inteligencia artificial | _______________
Versión del sistema de inteligencia artificial | _______________
Regiones de despliegue | _______________
Fecha prevista para el lanzamiento | _______________
Categoría de alto riesgo (Anexo III) | _______________

Parte B: Descripción del sistema

B.1 Propósito y funcionalidad

¿Qué hace el sistema de inteligencia artificial? (Describe en lenguaje llano)

[Proporciona una descripción clara y sin jerga de lo que hace el sistema, sus entradas, lógica de procesamiento y salidas.]
_______________________________________________________________________________
_______________________________________________________________________________
_______________________________________________________________________________

¿Qué decisión(es) específica(s) toma o respalda el sistema?

  • Decisión 1: _______________
  • Decisión 2: _______________
  • Decisión 3: _______________

¿Toma el sistema de inteligencia artificial las decisiones por sí solo, o un humano revisa los resultados?

☐ Totalmente automatizado (sin revisión humana)
☐ Humano en el circuito (human-in-the-loop: el humano puede anular las decisiones)
☐ Humano supervisando (human-on-the-loop: el humano monitorea e interviene si es necesario)
☐ Humano al mando (human-in-command: la inteligencia artificial es solo consultiva)

Describe el mecanismo de supervisión humana:



B.2 Poblaciones afectadas

¿Quiénes son los individuos o grupos directamente afectados por este sistema?

Grupo poblacional | Número estimado | Nivel de vulnerabilidad
_______________ | _______________ | ☐ Bajo ☐ Medio ☐ Alto
_______________ | _______________ | ☐ Bajo ☐ Medio ☐ Alto
_______________ | _______________ | ☐ Bajo ☐ Medio ☐ Alto
_______________ | _______________ | ☐ Bajo ☐ Medio ☐ Alto

¿Las poblaciones afectadas incluyen personas vulnerables?

☐ Sí ☐ No ☐ Posiblemente

Si es así, especificar:

☐ Menores de edad (menores de 18 años)
☐ Personas mayores
☐ Personas con discapacidad
☐ Personas con afecciones de salud mental
☐ Personas en situación económica precaria
☐ Solicitantes de asilo / migrantes
☐ Otras: _______________

136 páginas de recursos gratuitos · 26 plantillas de compliance · 22 dispositivos certificados