
Technical Documentation Checklist (Annex IV)

A complete checklist for the technical documentation requirements of Annex IV of the EU AI Act, covering the documentation for high-risk AI systems in 15 sections.

Bilingual / Bilingüe

This template includes both English and Spanish versions. Scroll down to find "Versión Española".

Technical Documentation Checklist — EU AI Act Annex IV

Disclaimer: This is guidance only, not legal advice. Consult qualified legal counsel for your specific compliance obligations.

Template provided by VORLUX AI | vorluxai.com


How to Use This Template

This checklist maps directly to Annex IV of the EU AI Act (Regulation (EU) 2024/1689). Each section must be completed before placing a high-risk AI system on the EU market or putting it into service. Documentation must be kept up to date throughout the system’s lifecycle and made available to competent authorities on request.

| Field | Value |
| --- | --- |
| AI System Name | ___________________________ |
| Version / Build | ___________________________ |
| Provider Organisation | ___________________________ |
| Document Owner | ___________________________ |
| Document Version | ___________________________ |
| Last Updated | ___________________________ |
| Classification | High-Risk AI System (Annex III, item ____) |

Section 1 — General Description of the AI System

Art. 11 & Annex IV §1

1.1 Intended Purpose

  • Clear statement of the intended purpose of the AI system
  • Identification of the natural persons or groups of persons on whom the system is intended to be used
  • Description of the specific context(s) of use
  • Identification of users (operators/deployers) and affected persons

Intended Purpose Statement:

[Describe in plain language what this AI system does and for whom]

Target Users:

[List intended operator types and end-user categories]

Use Context:

[Describe the operational environment — sector, geography, workflow integration]

1.2 System Category & Annex III Classification

  • Confirm which Annex III category applies (tick all that apply):
| Annex III Item | Description | Applies? |
| --- | --- | --- |
| 1 | Biometric identification and categorisation | ☐ Yes ☐ No |
| 2 | Critical infrastructure management | ☐ Yes ☐ No |
| 3 | Education and vocational training | ☐ Yes ☐ No |
| 4 | Employment and workers management | ☐ Yes ☐ No |
| 5 | Access to essential services | ☐ Yes ☐ No |
| 6 | Law enforcement | ☐ Yes ☐ No |
| 7 | Migration and asylum management | ☐ Yes ☐ No |
| 8 | Administration of justice | ☐ Yes ☐ No |

Confirmed Classification: Annex III, Item ____

1.3 System Version History

| Version | Date | Summary of Changes | Author |
| --- | --- | --- | --- |
| v___ | ____-__-__ | Initial release | ___ |
| v___ | ____-__-__ | ___________________ | ___ |

Section 2 — Detailed Description of System Elements

Annex IV §2

2.1 Development Methods and Logic

  • Description of the methods used to develop the AI system
  • Description of the design specifications (objectives, classification, regression targets)
  • Description of the main design choices and trade-offs made
  • Rationale for selecting the ML approach/architecture used

Development Approach:

[Describe: supervised/unsupervised/reinforcement learning, model type, framework]

Key Design Decisions:

[Document major architectural choices and the reasoning behind them]

2.2 System Architecture

  • Overall system architecture diagram attached
  • Description of each component and its role
  • Data flow diagram showing inputs, processing steps, outputs
  • Description of integration points with external systems

Architecture Diagram Reference: [Attach or link diagram — file: ___________]

Component Inventory:

| Component | Role | Technology | Version |
| --- | --- | --- | --- |
| _________ | _________ | _________ | _________ |
| _________ | _________ | _________ | _________ |
| _________ | _________ | _________ | _________ |

2.3 Computational Resources

  • Description of compute infrastructure (cloud/on-premise/edge)
  • Hardware specifications for inference and training
  • Estimated resource consumption at scale

Infrastructure:

[Cloud provider, region, instance types; or on-premise hardware specs]

Section 3 — Monitoring, Functioning, and Control

Annex IV §3

3.1 Real-Time Monitoring Capabilities

  • Description of monitoring mechanisms built into the system
  • List of metrics monitored in production (performance, drift, errors)
  • Alert thresholds and escalation procedures defined
  • Logging mechanisms for audit trails

Monitoring Metrics:

| Metric | Threshold | Alert Action | Owner |
| --- | --- | --- | --- |
| Prediction accuracy | > ____% | _______ | _______ |
| Response latency (P99) | < ___ ms | _______ | _______ |
| Data drift index | < ____ | _______ | _______ |
| Error rate | < ____% | _______ | _______ |
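The alert thresholds in the table above can be enforced with a simple production check. A minimal Python sketch — the metric names, threshold values, and dictionary layout are illustrative placeholders, not a prescribed design:

```python
# Minimal threshold-alert sketch. Metric names and threshold values are
# illustrative; replace them with the metrics documented in Section 3.1.
THRESHOLDS = {
    "accuracy":       {"min": 0.95},  # alert if accuracy drops below 95%
    "latency_p99_ms": {"max": 250},   # alert if P99 latency exceeds 250 ms
    "error_rate":     {"max": 0.01},  # alert if error rate exceeds 1%
}

def check_metrics(metrics: dict) -> list[str]:
    """Return a list of alert messages for metrics outside their thresholds."""
    alerts = []
    for name, value in metrics.items():
        rule = THRESHOLDS.get(name)
        if rule is None:
            continue  # unmonitored metric: no rule defined
        if "min" in rule and value < rule["min"]:
            alerts.append(f"{name}={value} below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            alerts.append(f"{name}={value} above maximum {rule['max']}")
    return alerts
```

Each alert message would then feed the "Alert Action" and escalation path documented above.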

3.2 Human Control Mechanisms

  • Description of human override capabilities (Article 14 requirements)
  • Stop/pause/rollback procedures documented
  • Operator intervention interfaces described
  • Escalation path to human decision-maker defined

Override Procedure Reference: [Link to Human Oversight Guide]

3.3 Logging and Audit Trail

  • All inputs and outputs logged (where proportionate)
  • Log retention period defined: ___ days / months / years
  • Log access controls defined
  • Log integrity protection (tamper-evidence) in place
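One common way to provide the tamper-evidence required above is hash chaining: each log entry stores the hash of its predecessor, so any later modification breaks the chain. A minimal sketch in Python with illustrative field names — a sketch of the technique, not a prescribed log format:

```python
import hashlib
import json

# Tamper-evident audit trail via SHA-256 hash chaining. Each entry
# records the previous entry's hash; editing any earlier entry
# invalidates every hash that follows it.
GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event with a hash linking it to the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash
                or entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True
```

In practice the chain head would also be anchored externally (e.g. written to a separate store) so truncation of the whole log is detectable.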

Section 4 — Risk Management System

Art. 9 & Annex IV §4

4.1 Risk Identification

  • Systematic risk identification process completed
  • All known risks to health, safety, and fundamental rights documented
  • Foreseeable misuse scenarios documented
  • Risks arising from interaction with other systems identified

Risk Register:

| Risk ID | Risk Description | Likelihood (1-5) | Impact (1-5) | Risk Score | Mitigation |
| --- | --- | --- | --- | --- | --- |
| R-001 | _________ | ___ | ___ | ___ | _________ |
| R-002 | _________ | ___ | ___ | ___ | _________ |
| R-003 | _________ | ___ | ___ | ___ | _________ |
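The Risk Score column above is typically the product of likelihood and impact on the 1–5 scales. A small sketch; the band boundaries are illustrative and should be set to your organisation's documented acceptable risk levels:

```python
# Risk scoring sketch: score = likelihood x impact on 1-5 scales,
# banded into Low/Medium/High. Band boundaries are illustrative.
def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact (each 1-5) into a 1-25 score."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a 1-25 score to a qualitative band (illustrative cut-offs)."""
    if score >= 15:
        return "High"
    if score >= 8:
        return "Medium"
    return "Low"
```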

4.2 Risk Evaluation and Mitigation

  • Risk evaluation against acceptable risk levels documented
  • Risk mitigation measures identified and implemented
  • Residual risks documented and accepted by accountable person
  • Risk management process is iterative (reviewed post-change)

Residual Risk Statement:

[After mitigation, describe remaining risks and justification for acceptance]

4.3 Risk Management Review Schedule

  • Initial risk assessment completed: ____-__-__
  • Next scheduled review: ____-__-__
  • Trigger events for ad-hoc reviews defined (e.g., incidents, model updates)

Section 5 — Data Governance

Art. 10 & Annex IV §5

5.1 Training Data

  • Description of training dataset(s) — source, size, format
  • Data collection methodologies described
  • Data selection criteria documented
  • Labelling methodology and quality controls described
  • Known limitations and gaps in training data documented

Training Data Summary:

| Dataset | Source | Size | Date Range | Licence | Known Gaps |
| --- | --- | --- | --- | --- | --- |
| _______ | _______ | _______ | _______ | _______ | _______ |

5.2 Validation and Test Data

  • Validation dataset described (separate from training data)
  • Test dataset described (held-out, representative of real-world distribution)
  • Data splits documented (e.g., 70/15/15 train/val/test)
  • Geographic and demographic representativeness assessed
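A reproducible 70/15/15 split of the kind referenced above can be sketched as follows. The proportions and fixed seed are illustrative; stratified or grouped splits (often needed for representativeness) require a dedicated routine:

```python
import random

# Deterministic 70/15/15 train/validation/test split.
# Proportions and seed are illustrative, not prescribed.
def split_dataset(records: list, seed: int = 42,
                  train: float = 0.70, val: float = 0.15):
    shuffled = records[:]                 # copy; leave the input intact
    random.Random(seed).shuffle(shuffled) # seeded, hence reproducible
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])   # remainder is the test set
```

Documenting the seed alongside the split proportions makes the partition itself auditable.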

5.3 Data Quality

  • Data quality measures applied (deduplication, outlier removal, etc.)
  • Data cleaning procedures documented
  • Bias detection and mitigation steps documented
  • Data lineage traceable
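Deduplication, one of the data-quality measures listed above, can be as simple as an order-preserving pass over hashable records. A minimal sketch:

```python
# Exact-duplicate removal that preserves first-seen order.
# Assumes records are hashable (e.g. tuples of field values).
def deduplicate(records: list[tuple]) -> list[tuple]:
    seen, unique = set(), []
    for record in records:
        if record not in seen:
            seen.add(record)
            unique.append(record)
    return unique
```

Near-duplicate detection (fuzzy matching, hashing of normalised text) is a separate, harder step and should be documented in its own right.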

5.4 Personal Data and Privacy

  • Personal data in training sets identified
  • GDPR lawful basis for processing confirmed: ___________
  • Data minimisation applied
  • Anonymisation/pseudonymisation techniques documented
  • Data Protection Impact Assessment (DPIA) status: ___________

Section 6 — Technical Specifications

Annex IV §6

6.1 Input Specifications

  • Data types accepted (structured/unstructured, modalities)
  • Input format requirements (schema, encoding, size limits)
  • Valid input ranges and constraints defined
  • Handling of out-of-distribution inputs described

Input Schema:

[Define expected input structure, data types, constraints]
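Schema validation of the kind described above can be sketched as a pre-inference check. The field names, types, and ranges below are illustrative placeholders for the documented input schema:

```python
# Input-validation sketch: reject records that violate the declared
# schema before they reach the model. SCHEMA is a placeholder for the
# input specification documented in Section 6.1.
SCHEMA = {
    "age":    {"type": int,   "min": 0,   "max": 120},
    "income": {"type": float, "min": 0.0, "max": 1e9},
}

def validate_input(record: dict) -> list[str]:
    """Return a list of schema violations; an empty list means valid."""
    errors = []
    for field, rule in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        elif not (rule["min"] <= value <= rule["max"]):
            errors.append(f"{field}: out of range")
    return errors
```

Rejected records should be logged (Section 3.3) so that out-of-distribution input handling is auditable.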

6.2 Output Specifications

  • Output format described (classification labels, scores, structured data, text, etc.)
  • Confidence/uncertainty estimates provided: ☐ Yes ☐ No
  • Output interpretation guidance documented
  • Output post-processing steps described

Output Schema:

[Define output structure, value ranges, interpretation guidelines]

6.3 Model Specifications

  • Model type and architecture documented (e.g., transformer, random forest, CNN)
  • Number of parameters: ___________
  • Model file formats and versions documented
  • Dependencies and third-party libraries listed with versions

Dependency Manifest:

| Library/Model | Version | Licence | Purpose |
| --- | --- | --- | --- |
| ________ | ________ | ________ | ________ |
| ________ | ________ | ________ | ________ |

Section 7 — Quality Management

Art. 17 & Annex IV §7

7.1 Performance Metrics

  • Performance metrics selected and justified
  • Baseline benchmarks documented
  • Performance across relevant subgroups tested (demographic parity, etc.)
  • Acceptable performance thresholds defined

Performance Summary:

| Metric | Value | Benchmark | Subgroup Results | Pass/Fail |
| --- | --- | --- | --- | --- |
| Accuracy | ____% | ____% | [attached] | ☐ Pass ☐ Fail |
| Precision | ____% | ____% | [attached] | ☐ Pass ☐ Fail |
| Recall | ____% | ____% | [attached] | ☐ Pass ☐ Fail |
| F1 Score | ____ | ____ | [attached] | ☐ Pass ☐ Fail |
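For reference, the metrics in the table above relate to each other as follows, computed from confusion-matrix counts (a sketch, not tied to any particular tooling):

```python
# Precision, recall, and F1 from confusion-matrix counts:
# tp = true positives, fp = false positives, fn = false negatives.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged, how many correct
    recall    = tp / (tp + fn) if tp + fn else 0.0  # of actual, how many found
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)           # harmonic mean of the two
    return precision, recall, f1
```

Reporting these per subgroup, not only in aggregate, is what the subgroup-testing checkbox above requires.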

7.2 Testing Procedures

  • Unit testing procedures documented
  • Integration testing procedures documented
  • Pre-deployment acceptance testing documented
  • Adversarial / robustness testing results documented

7.3 Continuous Improvement

  • Post-market monitoring plan in place (Art. 72)
  • Feedback mechanisms to capture real-world performance
  • Process for incorporating improvements / retraining defined
  • Version control and release management procedures documented

Section 8 — EU Declaration of Conformity

Art. 47 & Annex V — see separate template

  • EU Declaration of Conformity drafted (see declaration-of-conformity.md)
  • Declaration covers the specific AI system version documented here
  • Declaration signed by authorised representative
  • Declaration reference number: DOC-____-____

Section 9 — Post-Market Monitoring Plan

Art. 72

  • Post-market monitoring plan prepared
  • KPIs for ongoing monitoring defined
  • Incident reporting procedures established
  • Serious incident notification to authorities documented (Art. 73)
  • Review frequency: ☐ Monthly ☐ Quarterly ☐ Annually ☐ Event-triggered

Monitoring Plan Reference: [File: ___________]


Section 10 — Cybersecurity

  • Cybersecurity risks identified and documented
  • Security testing (penetration testing, adversarial input testing) performed
  • Access controls and authentication described
  • Data encryption at rest and in transit documented
  • Incident response plan in place

Security Assessment Date: ____-__-__
Security Assessment Provider: ___________________________


Section 11 — Accuracy, Robustness, and Resilience

Art. 15

  • Accuracy levels declared and substantiated
  • Robustness to errors and inconsistencies in inputs tested
  • Resilience against adversarial manipulation assessed
  • Fallback behaviour when system operates outside design envelope defined

Robustness Testing Summary:

[Summarise robustness tests performed and results]

Section 12 — Transparency and Explainability

  • Level of explainability appropriate for the use case documented
  • Explanation methods used (SHAP, LIME, attention maps, etc.) described
  • User-facing explanations designed and tested
  • Limitations of explanations disclosed

Section 13 — Third-Party Components and Supply Chain

  • All third-party AI components identified
  • Suppliers’ compliance documentation collected
  • Supply chain risk assessment completed
  • Licence compliance confirmed for all components

Third-Party Component Register:

| Component | Supplier | Version | EU AI Act Status | Licence Confirmed |
| --- | --- | --- | --- | --- |
| ________ | ________ | ________ | ________ | ☐ Yes ☐ No |

Section 14 — Instructions for Use

Art. 13 & Annex IV §14

  • Instructions for use prepared for operators/deployers
  • Instructions cover: intended purpose, known limitations, maintenance
  • Instructions cover: human oversight requirements
  • Instructions available in language(s) of Member States of deployment
  • Instructions version-controlled and dated

Languages Available: ___________________________
Document Reference: [File: ___________]


Section 15 — Samples and Specimens

  • Representative sample outputs included or referenced
  • Test cases with expected vs. actual outputs documented
  • Edge cases and failure mode examples documented

Documentation Sign-Off

| Role | Name | Signature | Date |
| --- | --- | --- | --- |
| Technical Lead | ____________ | ____________ | ____-__-__ |
| Data Protection Officer | ____________ | ____________ | ____-__-__ |
| Quality Manager | ____________ | ____________ | ____-__-__ |
| Legal / Compliance | ____________ | ____________ | ____-__-__ |
| Authorised Signatory | ____________ | ____________ | ____-__-__ |

Annex IV Completeness Tracker

| Section | Status | Owner | Last Updated |
| --- | --- | --- | --- |
| 1. General Description | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 2. System Elements | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 3. Monitoring & Control | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 4. Risk Management | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 5. Data Governance | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 6. Technical Specifications | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 7. Quality Management | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 8. Declaration of Conformity | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 9. Post-Market Monitoring | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 10. Cybersecurity | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 11. Accuracy & Robustness | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 12. Transparency | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 13. Supply Chain | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 14. Instructions for Use | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |
| 15. Samples & Specimens | ☐ Draft ☐ Review ☐ Approved | ____ | ____-__-__ |



Versión Española

Checklist de Documentación Técnica — Anexo IV del Reglamento (UE) 2024/1689 sobre Inteligencia Artificial

Aviso: Esto es solo orientación, no asesoramiento jurídico. Consulte a un asesor jurídico cualificado sobre sus obligaciones de cumplimiento específicas.

Plantilla proporcionada por VORLUX AI | vorluxai.com


Cómo Usar esta Plantilla

Esta lista de verificación se corresponde directamente con el Anexo IV del Reglamento (UE) 2024/1689 sobre Inteligencia Artificial. Cada sección debe completarse antes de introducir un sistema de IA de alto riesgo en el mercado de la UE o ponerlo en servicio. La documentación debe mantenerse actualizada durante todo el ciclo de vida del sistema y estar a disposición de las autoridades competentes cuando la soliciten.

| Campo | Valor |
| --- | --- |
| Nombre del Sistema de IA | ___________________________ |
| Versión / Edición | ___________________________ |
| Organización Proveedora | ___________________________ |
| Propietario de la Documentación | ___________________________ |
| Versión de la Documentación | ___________________________ |
| Última Actualización | ___________________________ |
| Clasificación | Sistema de IA de Alto Riesgo (Anexo III, ítem ____) |

Sección 1 — Descripción General del Sistema de IA

Art. 11 & Anexo IV §1

1.1 Finalidad Prevista

  • Declaración clara de la finalidad prevista del sistema de IA
  • Identificación de las personas físicas o grupos de personas sobre los que está previsto utilizar el sistema
  • Descripción del contexto o contextos específicos de uso
  • Identificación de los usuarios (operadores/responsables del despliegue) y de las personas afectadas

Declaración de la Finalidad Prevista:

[Describa en lenguaje llano qué hace este sistema de IA y para quién]

Usuarios Previstos:

[Liste los tipos de operadores previstos y las categorías de usuarios finales]

Contexto de Uso:

[Describa el entorno operativo — sector, geografía, integración en flujo de trabajo]

1.2 Categoría del Sistema y Clasificación del Anexo III

  • Confirme qué categoría del Anexo III se aplica (marque todas las que correspondan):

| Ítem del Anexo III | Descripción | ¿Aplica? |
| --- | --- | --- |
| 1 | Identificación y categorización biométrica | ☐ Sí ☐ No |
| 2 | Gestión de infraestructuras críticas | ☐ Sí ☐ No |
| 3 | Educación y formación profesional | ☐ Sí ☐ No |
| 4 | Empleo y gestión de los trabajadores | ☐ Sí ☐ No |
| 5 | Acceso a servicios esenciales | ☐ Sí ☐ No |
| 6 | Aplicación de la ley | ☐ Sí ☐ No |
| 7 | Gestión de la migración y el asilo | ☐ Sí ☐ No |
| 8 | Administración de justicia | ☐ Sí ☐ No |

Clasificación Confirmada: Anexo III, ítem ____

1.3 Historial de Versiones del Sistema

| Versión | Fecha | Resumen de Cambios | Autor |
| --- | --- | --- | --- |
| v___ | ____-__-__ | Lanzamiento inicial | ___ |
| v___ | ____-__-__ | ___________________ | ___ |

Sección 2 — Descripción Detallada de los Elementos del Sistema

Anexo IV §2

2.1 Métodos y Lógica de Desarrollo

  • Descripción de los métodos utilizados para desarrollar el sistema de IA
  • Descripción de las especificaciones de diseño (objetivos, metas de clasificación o regresión)
  • Descripción de las principales decisiones de diseño y los compromisos (trade-offs) asumidos
  • Justificación de la elección del enfoque/arquitectura de ML utilizado

Enfoque de Desarrollo:

[Describa: aprendizaje supervisado/no supervisado/por refuerzo, tipo de modelo, framework]

Decisiones de Diseño Clave:

[Documente las principales decisiones arquitectónicas y la razón detrás de ellas]

2.2 Arquitectura del Sistema

  • Diagrama de arquitectura general adjunto
  • Descripción de cada componente y su papel
  • Diagrama de flujo de datos mostrando entradas, pasos de procesamiento y salidas
  • Descripción de puntos de integración con sistemas externos

Referencia del Diagrama de Arquitectura: [Adjunte o enlace diagrama — archivo: ___________]

Inventario de Componentes:

| Componente | Papel | Tecnología | Versión |
| --- | --- | --- | --- |
| _________ | _________ | _________ | _________ |
| _________ | _________ | _________ | _________ |
| _________ | _________ | _________ | _________ |

2.3 Recursos Computacionales

  • Descripción de la infraestructura de cómputo (nube/local/edge)
  • Especificaciones de hardware para inferencia y entrenamiento
  • Consumo estimado de recursos a escala

Infraestructura:

[Proveedor de nube, región, tipos de instancia; o especificaciones del hardware local]
