AI Incident Response Plan Template
A comprehensive AI incident response plan covering incident categories, severity levels, response procedures, notification obligations under Article 73 of the EU AI Act, and post-incident review processes.
This template includes both English and Spanish versions. Scroll down to find "Versión Española".
Disclaimer: This template is provided for guidance purposes only. It does not constitute legal advice. Organisations should consult qualified legal counsel to ensure compliance with applicable laws and regulations.
Template provided by VORLUX AI — vorluxai.com
AI Incident Response Plan
EU AI Act — Article 73 Compliance Template
Organisation: _______________ Document Reference: AIRP-[YYYY]-[NNN] Version: _______________ Prepared by: _______________ Approved by: _______________ Effective Date: _______________ Review Date: _______________
Section 1: Purpose and Scope
1.1 Purpose
This plan defines how [Organisation Name] identifies, classifies, responds to, reports, and learns from incidents involving AI systems. It ensures compliance with Article 73 of the EU AI Act (serious incident reporting) and supports the organisation’s broader risk management obligations.
1.2 Scope
This plan applies to:
- All AI systems operated by the organisation in a deployer capacity
- All AI systems developed or provided by the organisation
- All staff, contractors, and third parties involved in operating or monitoring AI systems
AI Systems Covered:
| System Name | Risk Classification | System Owner | Version |
|---|---|---|---|
| | ☐ High-risk ☐ Limited ☐ Minimal | | |
| | ☐ High-risk ☐ Limited ☐ Minimal | | |
| | ☐ High-risk ☐ Limited ☐ Minimal | | |
1.3 Regulatory Context
Under Article 73, providers of high-risk AI systems must report serious incidents to the relevant national market surveillance authority. A serious incident is defined as an incident or malfunctioning of an AI system that, directly or indirectly, leads to:
- The death of a person or serious damage to a person’s health
- A serious and irreversible disruption of critical infrastructure
- An infringement of fundamental rights
- Serious damage to property or the environment
Competent Authority for [Country]: _______________ Reporting Portal / Contact: _______________ Initial Report Deadline: no later than 15 days after becoming aware (death of a person: 10 days; widespread infringement or serious and irreversible disruption of critical infrastructure: 2 days)
Section 2: Incident Categories
2.1 Category Framework
| Category | Code | Description | Examples |
|---|---|---|---|
| Safety Incident | CAT-S | AI output causes or risks physical harm | Medical AI misdiagnosis; autonomous system collision |
| Rights Violation | CAT-R | AI output infringes fundamental rights | Discriminatory hiring decision; unlawful profiling |
| Data Incident | CAT-D | AI system involved in data breach, leak, or misuse | Training data exfiltrated; personal data exposed in output |
| System Failure | CAT-F | AI system unavailable, degraded, or behaving unexpectedly | Model crashes; severe accuracy degradation |
| Manipulation / Misuse | CAT-M | AI system used or manipulated for harmful purposes | Adversarial attacks; prompt injection leading to policy violation |
| Compliance Incident | CAT-C | AI system found to be non-compliant with regulation | Missing conformity assessment; prohibited use discovered |
| Transparency Breach | CAT-T | Failure to disclose AI involvement as required | Chatbot not labelled; automated decision not notified |
| Reputational Incident | CAT-P | AI system causes significant public trust harm | Media coverage of biased AI output; customer complaints |
Section 3: Severity Levels
3.1 Severity Classification
| Level | Name | Definition | Response Timeframe |
|---|---|---|---|
| SEV-1 | Critical | Active harm occurring or imminent; fundamental rights severely violated; life at risk | Immediate (< 1 hour) |
| SEV-2 | High | Serious harm occurred or likely; significant regulatory obligation triggered | < 4 hours |
| SEV-3 | Medium | Harmful output detected; no immediate harm but risk of escalation | < 24 hours |
| SEV-4 | Low | Near-miss; policy violation without harm; isolated anomaly | < 5 business days |
| SEV-5 | Informational | Performance degradation below alert threshold; anomaly logged for monitoring | Next review cycle |
3.2 Severity Decision Matrix
Use the following questions to classify severity:
Step 1 — Has harm already occurred?
- Death or serious physical injury → SEV-1
- Significant financial or psychological harm → SEV-2
- Minor harm or harm risk only → SEV-3 or lower
Step 2 — Is a fundamental right involved?
- Irreversible rights violation → SEV-1
- Reversible rights violation → SEV-2
- Possible / minor rights impact → SEV-3
Step 3 — Is regulatory reporting triggered?
- Article 73 reporting threshold met → escalate to minimum SEV-2
Step 4 — Is the system still running and causing harm?
- Yes → escalate by one level; consider immediate shutdown
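The four steps above can be sketched as a small classification function. This is an illustrative sketch only: the field names and types are assumptions, not part of the plan, while the escalation rules mirror the matrix (lower SEV number means more severe).

```python
from dataclasses import dataclass

# Illustrative sketch of the Section 3.2 decision matrix.
# Field names are assumptions; the escalation logic follows the four steps.

@dataclass
class IncidentFacts:
    death_or_serious_injury: bool
    significant_harm: bool            # significant financial or psychological harm
    irreversible_rights_violation: bool
    reversible_rights_violation: bool
    article_73_triggered: bool        # regulatory reporting threshold met
    still_causing_harm: bool          # system still running and causing harm

def classify_severity(f: IncidentFacts) -> int:
    """Return a SEV level: 1 (critical) … 5 (informational)."""
    # Step 1 — has harm already occurred?
    if f.death_or_serious_injury:
        sev = 1
    elif f.significant_harm:
        sev = 2
    else:
        sev = 3  # minor harm or harm risk only
    # Step 2 — is a fundamental right involved?
    if f.irreversible_rights_violation:
        sev = min(sev, 1)
    elif f.reversible_rights_violation:
        sev = min(sev, 2)
    # Step 3 — regulatory reporting escalates to at least SEV-2
    if f.article_73_triggered:
        sev = min(sev, 2)
    # Step 4 — ongoing harm escalates by one level
    if f.still_causing_harm:
        sev = max(1, sev - 1)
    return sev
```

The function intentionally never classifies below SEV-3; SEV-4/SEV-5 cases (near-misses, informational anomalies) would not normally pass through this matrix at all.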
Section 4: Incident Response Procedures
4.1 Response Team
| Role | Responsibilities | Primary Contact | Backup Contact |
|---|---|---|---|
| Incident Commander (IC) | Overall incident ownership; decisions on containment and reporting | ||
| AI System Owner | Technical knowledge; system access | ||
| Legal / Compliance | Regulatory reporting; legal exposure assessment | ||
| DPO | Data protection assessment; GDPR obligations | ||
| Communications Lead | Internal and external communications | ||
| IT / Security | System access; logging; technical containment | ||
| Executive Sponsor | Authorises major decisions (shutdown, public disclosure) | | |
Escalation Hotline (internal): _______________ 24/7 On-call rotation: _______________
4.2 Phase 1: Detection and Intake (All Severities)
Detection Sources:
- Internal monitoring / alerting system
- User complaint or report
- Employee observation
- Third-party notification (vendor, partner, regulator)
- Media / social media monitoring
- Post-incident audit discovery
Immediate Actions (within first 60 minutes for SEV-1/2):
- Log the incident in the Incident Register (see Section 7)
- Assign an Incident ID: INC-[YYYY]-[NNN]
- Assign an Incident Commander
- Classify severity using the matrix in Section 3.2
- Notify the response team using communication templates in Section 6
- Preserve evidence: logs, screenshots, model versions, input/output data
- Assess whether the AI system must be suspended (see Section 4.3)
4.3 Phase 2: Containment
Containment Decision Checklist:
| Question | Yes → Action |
|---|---|
| Is the AI system currently producing harmful outputs? | Suspend or throttle the system immediately |
| Can the root cause be isolated to a specific feature or input type? | Restrict that input type; keep other functions running |
| Is the incident limited to one user/case? | Quarantine that case; continue with enhanced monitoring |
| Is the issue in the underlying model (not just an edge case)? | Full system suspension; notify vendor |
| Has a data breach occurred? | Invoke GDPR breach procedures; notify DPO |
System Suspension Protocol:
| Action | Authorised by | Documentation Required |
|---|---|---|
| Suspend AI system for < 4 hours | AI System Owner | Incident log entry |
| Suspend AI system for 4–24 hours | AI System Owner + IC | Incident log + executive notification |
| Suspend AI system for > 24 hours | Executive Sponsor + Legal | Board notification; customer communication |
Temporary workaround: _______________
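The suspension protocol above can be encoded as a simple lookup. This is a sketch under assumptions: the role names follow Section 4.1, and the boundaries are taken exactly from the table (under 4 hours, 4–24 hours, over 24 hours).

```python
# Illustrative sketch of the Section 4.3 suspension authorisation thresholds.
# Role names are the Section 4.1 placeholders, not fixed job titles.

def suspension_approvers(hours: float) -> list[str]:
    """Roles that must authorise suspending the AI system for `hours`."""
    if hours < 4:
        return ["AI System Owner"]
    if hours <= 24:
        return ["AI System Owner", "Incident Commander"]
    return ["Executive Sponsor", "Legal"]
```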
4.4 Phase 3: Investigation
Investigation Checklist:
- Identify the precise AI system and version involved
- Retrieve and preserve input/output logs for the affected period
- Determine whether the incident is isolated or systemic
- Identify root cause (data quality, model error, configuration, misuse, infrastructure)
- Assess scope: how many individuals are affected?
- Assess duration: when did the incident begin?
- Review whether safeguards (monitoring, human oversight) failed and why
- Interview relevant staff
- Engage vendor if third-party model is involved (see communication template in Section 6)
- Document all investigation steps with timestamps
Root Cause Categories:
| Code | Root Cause | Investigation Focus |
|---|---|---|
| RC-01 | Training data issue | Data quality, bias, outdated data |
| RC-02 | Model architecture flaw | Model evaluation reports |
| RC-03 | Prompt / input manipulation | Input validation, security |
| RC-04 | Infrastructure / integration failure | System logs, API calls |
| RC-05 | Human oversight failure | Process review, training records |
| RC-06 | Scope creep / misuse | Usage logs, authorisation records |
| RC-07 | Vendor / third-party failure | SLA review, vendor communication |
| RC-08 | Regulatory gap | Policy and compliance review |
4.5 Phase 4: Remediation
Remediation Plan Template:
| Issue Identified | Remediation Action | Owner | Deadline | Status |
|---|---|---|---|---|
| | | | | ☐ Planned ☐ In progress ☐ Complete |
| | | | | ☐ Planned ☐ In progress ☐ Complete |
| | | | | ☐ Planned ☐ In progress ☐ Complete |
Before resuming system operation, confirm:
- Root cause identified and resolved
- Affected individuals notified (if required)
- Regulatory reports submitted (if required)
- Enhanced monitoring in place
- Sign-off from Incident Commander, Legal, and AI System Owner
4.6 Phase 5: Recovery and Monitoring
Recovery Checklist:
- System reinstated at [date/time]: _______________
- Enhanced monitoring period defined: _______________ days
- KPIs and anomaly thresholds reviewed and updated
- Staff briefed on post-incident monitoring responsibilities
- Post-incident review scheduled for: _______________
Section 5: Reporting Obligations
5.1 Internal Reporting
| Report Type | Audience | Timing | Template |
|---|---|---|---|
| Incident Alert | Response Team | Immediately on detection | Section 6.1 |
| Status Update | Executive Sponsor | Every 4 hours for SEV-1/2 | Section 6.2 |
| Board Notification | Board / Governance | For SEV-1 or regulatory report triggered | Section 6.3 |
| Incident Summary | All stakeholders | Within 5 days of closure | Section 6.4 |
5.2 External Reporting
Regulatory Reporting (Article 73 — Serious Incidents):
| Trigger | Timeline | Authority | Method |
|---|---|---|---|
| Death of a person | 10 days | National market surveillance authority | [Portal/contact] |
| Widespread infringement or serious and irreversible disruption of critical infrastructure | 2 days | National market surveillance authority | [Portal/contact] |
| Other serious incident | 15 days | National market surveillance authority | [Portal/contact] |
| Root cause update | As available | Same authority | Follow-up notification |
| Final report | Within 3 months | Same authority | Written report |
GDPR Reporting (if personal data involved):
| Trigger | Timeline | Authority | Method |
|---|---|---|---|
| Personal data breach | 72 hours | Supervisory Authority (e.g., AEPD for Spain) | [Portal/contact] |
| High risk to individuals | Without undue delay | Affected individuals | Direct notification |
Affected Individual Notification: Required when: ☐ High-risk AI system ☐ Rights violated ☐ Personal data breached ☐ Decision affected them Notification method: ☐ Email ☐ Letter ☐ In-app ☐ Phone
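A due date for the reporting deadlines above can be computed mechanically. The helper below is illustrative only and not legal advice: it assumes Monday–Friday working days, ignores public holidays, and leaves to counsel the question of whether a given deadline counts calendar or working days.

```python
from datetime import date, timedelta

# Illustrative deadline helper, not legal advice. Computes a reporting due
# date from the date the organisation became aware of the incident.

def report_due(awareness: date, days: int, working_days: bool = False) -> date:
    """Due date `days` after `awareness`; optionally count only Mon–Fri."""
    if not working_days:
        return awareness + timedelta(days=days)
    due, remaining = awareness, days
    while remaining > 0:
        due += timedelta(days=1)
        if due.weekday() < 5:  # Monday=0 … Friday=4
            remaining -= 1
    return due
```

For example, awareness on a Wednesday with a 2-working-day deadline gives the following Friday as the due date.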
Section 6: Communication Templates
6.1 Internal Incident Alert
Subject: [URGENT] AI Incident Alert — INC-[YYYY]-[NNN] — [System Name] — SEV-[X]
AI INCIDENT ALERT
Incident ID: INC-[YYYY]-[NNN] Date/Time: [DD/MM/YYYY HH:MM] AI System: [System Name and Version] Severity: SEV-[1/2/3/4] Category: [Category Code and Name] Incident Commander: [Name]
Summary: [2–3 sentence description of what has happened]
Current Status: [Ongoing / Contained / Resolved]
Immediate actions taken:
- [Action 1]
- [Action 2]
Next update by: [HH:MM]
Incident bridge/channel: [Link or number]
6.2 Executive Status Update
Subject: AI Incident Status Update — INC-[YYYY]-[NNN] — [Time]
STATUS UPDATE — [Time]
Incident: INC-[YYYY]-[NNN] | [System Name] | SEV-[X] Update #: [N]
Current situation: [Brief description] Individuals affected: [Number / Unknown] System status: ☐ Running normally ☐ Suspended ☐ Partially suspended Regulatory obligation triggered? ☐ Yes — report due by [date] ☐ No ☐ Under assessment
Actions completed since last update:
- [Action]
Actions in progress:
- [Action]
Decisions needed from executive sponsor:
- [Decision point, if any]
Next update by: [Time]
6.3 Regulatory Notification Letter (Article 73)
[Organisation letterhead]
Date: _______________ To: [National Market Surveillance Authority Name and Address]
Re: Serious Incident Notification — EU AI Act Article 73 Incident Reference: INC-[YYYY]-[NNN]
Dear [Authority Name / Sir or Madam],
In accordance with Article 73 of Regulation (EU) 2024/1689 (the EU AI Act), we are notifying you of a serious incident involving an AI system for which [Organisation Name] is the [provider / deployer].
1. AI System Details
- System name and version: _______________
- Risk classification: High-risk (Annex III, category: _______________)
- EU declaration of conformity reference: _______________
2. Incident Summary
- Date/time of occurrence: _______________
- Date/time we became aware: _______________
- Nature of incident: _______________
- Categories of persons affected: _______________
- Estimated number of affected individuals: _______________
- Geographical scope: _______________
3. Immediate Actions Taken [Describe containment and mitigation actions]
4. Ongoing Actions [Describe current and planned remediation]
5. Further Information A follow-up report with root cause analysis and final remediation measures will be provided by [date].
We are available for any further questions or information requests. Please contact [Name, Role] at [email / phone].
Yours sincerely,
[Name] [Title / AI Officer / DPO] [Organisation]
6.4 Affected Individual Notification
Subject: Important notice regarding an AI system that affected you
Dear [Name / Customer],
We are writing to let you know that an incident occurred involving an artificial intelligence system used by [Organisation Name] that may have affected [you / a decision made about you / your data].
What happened: [Plain language description of the incident]
How it affected you: [Explain the specific impact on this individual]
What we have done: [List actions taken to contain and remediate]
What you can do:
- If a decision was made about you using AI, you have the right to request a human review of that decision.
- You may request further information about how AI was used.
- You may contact our AI Officer at [contact details] with any questions.
We sincerely apologise for any inconvenience or harm caused.
[Name, Role] [Organisation] [Date]
6.5 Vendor Notification
Subject: AI Incident Notification — [System Name] — INC-[YYYY]-[NNN]
Dear [Vendor Contact],
We are writing to notify you of an incident involving [System Name / API / Model] provided by your organisation and deployed by us in [context].
Incident reference: INC-[YYYY]-[NNN] Date/time of incident: _______________ Nature of incident: [Brief description]
We require the following from you as a matter of urgency:
- Confirmation of whether this issue is known or has affected other customers
- Relevant system logs for the period [start time] to [end time]
- Preliminary root cause assessment by [deadline]
- Your incident response contact and escalation path
Please treat this as a priority. Respond to [name] at [email] within [timeframe].
[Name, Role] [Organisation]
Section 7: Incident Register
Use the table below (or integrate with your GRC/ITSM platform) to maintain a log of all AI incidents.
| Incident ID | Date | System | Category | Severity | Status | Regulatory Report? | IC | Date Closed |
|---|---|---|---|---|---|---|---|---|
| INC-[YYYY]-001 | | | | | ☐ Open ☐ Closed | ☐ Yes ☐ No | | |
| INC-[YYYY]-002 | | | | | ☐ Open ☐ Closed | ☐ Yes ☐ No | | |
Retention: Incident records must be retained for a minimum of 5 years.
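For organisations keeping the register in code or exporting it to a GRC/ITSM platform, one possible schema for a register row is sketched below. The field names follow the table columns and the INC-[YYYY]-[NNN] naming scheme from Section 4.2; everything else (types, defaults, validation) is an assumption, not a prescribed format.

```python
import re
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical schema for one Incident Register row (Section 7 columns).
ID_PATTERN = re.compile(r"^INC-\d{4}-\d{3}$")  # INC-[YYYY]-[NNN]

@dataclass
class RegisterEntry:
    incident_id: str              # e.g. INC-2026-001
    opened: date
    system: str
    category: str                 # CAT-S … CAT-P (Section 2)
    severity: int                 # 1 (critical) … 5 (informational)
    status: str = "Open"          # "Open" or "Closed"
    regulatory_report: bool = False
    incident_commander: str = ""
    closed: Optional[date] = None

    def __post_init__(self) -> None:
        # Reject IDs that do not follow the plan's naming scheme.
        if not ID_PATTERN.match(self.incident_id):
            raise ValueError(
                f"Incident ID {self.incident_id!r} does not match INC-[YYYY]-[NNN]"
            )
```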
Section 8: Post-Incident Review
8.1 Review Process
A post-incident review (PIR) must be conducted for all SEV-1 to SEV-3 incidents within 10 business days of incident closure.
PIR Meeting Agenda:
- Incident timeline (15 min)
- Root cause walkthrough (20 min)
- What went well (10 min)
- What could have been better (15 min)
- Action items and owners (15 min)
- Policy / process updates required (10 min)
- Training updates required (5 min)
8.2 Post-Incident Review Report
Incident ID: _______________ PIR Date: _______________ Facilitator: _______________ Attendees: _______________
| Section | Summary |
|---|---|
| Root Cause (confirmed) | |
| Contributing factors | |
| Detection time | |
| Response time | |
| Containment time | |
| Total impact (individuals, duration) | |
| Regulatory notifications made | |
| What worked well | |
| What needs improvement |
Action Items from PIR:
| Action | Owner | Deadline | Status |
|---|---|---|---|
Policy or process updates triggered: ☐ Yes (specify): _______________ ☐ No Training updates triggered: ☐ Yes (specify): _______________ ☐ No FRIA update triggered: ☐ Yes ☐ No
Section 9: Plan Review and Maintenance
| Review Trigger | Action |
|---|---|
| Annually | Full plan review |
| After any SEV-1 or SEV-2 incident | Review and update affected sections |
| New AI system deployed | Verify coverage; update system registry |
| Regulatory update | Review reporting obligations |
| Change in team structure | Update contact details |
Last reviewed: _______________ Approved by: _______________
Template provided by VORLUX AI | vorluxai.com Version 1.0 — April 2026 | EU AI Act Article 73 compliant template This is guidance only, not legal advice. Consult qualified legal counsel for your specific situation.
Versión Española
Aviso: Esta plantilla se proporciona únicamente con fines orientativos. No constituye asesoramiento legal. Las organizaciones deben consultar a asesores legales cualificados para garantizar el cumplimiento de las leyes y regulaciones aplicables.
Plan de respuesta a incidentes de IA
Ley de IA de la UE — Plantilla de cumplimiento del artículo 73
Organización: _______________ Referencia del documento: AIRP-[YYYY]-[NNN] Versión: _______________ Preparado por: _______________ Aprobado por: _______________ Fecha de entrada en vigor: _______________ Fecha de revisión: _______________
Sección 1: Propósito y alcance
1.1 Propósito
Este plan define cómo [Nombre de la organización] identifica, clasifica, responde, notifica y aprende de los incidentes que involucran sistemas de IA. Garantiza el cumplimiento del artículo 73 de la Ley de IA de la UE (notificación de incidentes graves) y apoya las obligaciones más amplias de gestión de riesgos de la organización.
1.2 Alcance
Este plan se aplica a:
- Todos los sistemas de IA operados por la organización en calidad de responsable del despliegue
- Todos los sistemas de IA desarrollados o proporcionados por la organización
- Todo el personal, contratistas y terceros involucrados en la operación o monitoreo de sistemas de IA
Sistemas de IA cubiertos:
| Nombre del sistema | Clasificación de riesgo | Propietario del sistema | Versión |
|---|---|---|---|
| | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | | |
| | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | | |
| | ☐ Alto riesgo ☐ Limitado ☐ Mínimo | | |
1.3 Contexto regulatorio
Según el artículo 73, los proveedores de sistemas de IA de alto riesgo deben notificar los incidentes graves a la autoridad nacional de vigilancia del mercado pertinente. Un incidente grave se define como un incidente o defecto de funcionamiento de un sistema de IA que, directa o indirectamente, provoca:
- La muerte de una persona o daño grave a la salud de una persona
- Una interrupción grave e irreversible de infraestructura crítica
- Infracción de derechos fundamentales
- Daños graves a la propiedad o al medio ambiente
Autoridad competente para [País]: _______________ Portal de notificación / Contacto: _______________ Plazo del informe inicial: a más tardar 15 días después de tener conocimiento (fallecimiento de una persona: 10 días; infracción generalizada o interrupción grave e irreversible de infraestructuras críticas: 2 días)
Sección 2: Categorías de incidentes
2.1 Marco de categorías
| Categoría | Código | Descripción | Ejemplos |
|---|---|---|---|
| Incidente de seguridad | CAT-S | Salida de IA que causa o pone en riesgo daño físico | Diagnóstico médico incorrecto; colisión del sistema autónomo |
| Violación de derechos | CAT-R | Salida de IA que infringe derechos fundamentales | Decisión de contratación discriminatoria; perfilaje ilegal |
| Incidente de datos | CAT-D | Sistema de IA involucrado en robo, filtración o uso inadecuado de datos | Exfiltración de datos de entrenamiento; exposición de datos personales en la salida |
| Falla del sistema | CAT-F | Sistema de IA no disponible, degradado o comportándose de manera inesperada | Colapso del modelo; degradación severa de precisión |
| Manipulación / Uso indebido | CAT-M | Sistema de IA utilizado o manipulado con fines perjudiciales | Ataques adversarios; inyección de prompt que conduce a una violación de políticas |
| Incidente de cumplimiento | CAT-C | Sistema de IA no cumple con la regulación | Faltan evaluaciones de conformidad; uso prohibido descubierto |
| Incumplimiento de transparencia | CAT-T | Falta de divulgación de la participación de IA como se requiere | Chatbot sin etiquetado; decisión automática no notificada |
| Incidente reputacional | CAT-P | Sistema de IA causa daño significativo a la confianza pública | Cobertura mediática de salida sesgada de IA; quejas del cliente |
Sección 3: Niveles de gravedad
3.1 Clasificación de severidad
| Nivel | Nombre | Definición | Plazo de respuesta |
|---|---|---|---|
| SEV-1 | Crítico | Daño activo o inminente; derechos fundamentales gravemente violados; vida en peligro | Inmediato (< 1 hora) |
| SEV-2 | Alto | Daño grave ocurrido o probable; obligación regulatoria significativa desencadenada | < 4 horas |
| SEV-3 | Medio | Salida dañina detectada; no hay daño inmediato pero existe riesgo de escalada | < 24 horas |
| SEV-4 | Bajo | Cuasi incidente; violación de política sin daño; anomalía aislada | < 5 días hábiles |