
AI Policy Template

16-page governance framework covering organizational AI policy, roles, procedures, and compliance requirements under the EU AI Act.


AI Policy Template — Corporate Governance Framework

EU AI Act Compliance | VORLUX AI | Based on Regulation 2024/1689

1. Purpose and Scope

1.1 Purpose

This policy establishes the governance framework for the development, deployment, and use of Artificial Intelligence (AI) systems within [COMPANY NAME]. It ensures compliance with the EU AI Act (Regulation 2024/1689) and alignment with organizational values.

1.2 Scope

This policy applies to:

  • All AI systems developed, deployed, or used by [COMPANY NAME]
  • All employees, contractors, and third parties interacting with AI systems
  • All departments and business units
  • Both internal and client-facing AI applications

1.3 Definitions

| Term | Definition |
| --- | --- |
| AI System | A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness and that infers, from input it receives, how to generate outputs such as predictions, content, recommendations, or decisions (Art. 3(1)) |
| Provider | Natural or legal person that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark (Art. 3(3)) |
| Deployer | Natural or legal person using an AI system under its authority (Art. 3(4)) |
| High-Risk AI | AI systems listed in Annex I or Annex III of the EU AI Act |
| GPAI | General-purpose AI model trained with large amounts of data using self-supervision at scale |

2. Governance Structure

2.1 AI Governance Board

| Role | Responsibility | Person |
| --- | --- | --- |
| AI Compliance Officer | Overall compliance with EU AI Act | _____________ |
| Data Protection Officer | GDPR alignment for AI systems | _____________ |
| Technical Lead | AI system architecture and security | _____________ |
| Legal Counsel | Regulatory interpretation and contracts | _____________ |
| Department Heads | Use case identification and risk assessment | _____________ |

2.2 Decision-Making Authority

| Decision Type | Authority Level | Approval Required |
| --- | --- | --- |
| New AI system procurement | Department Head + AI Compliance Officer | Yes |
| High-risk AI deployment | AI Governance Board | Unanimous |
| AI system modification | Technical Lead + AI Compliance Officer | Yes |
| Incident escalation | AI Compliance Officer | Immediate |
| Policy exceptions | AI Governance Board | Majority |

3. AI System Classification

3.1 Risk Assessment Process

For every AI system, complete the following:

  1. Identify: Document the AI system in the inventory (Art. 49)
  2. Classify: Determine risk level (prohibited/high/limited/minimal)
  3. Assess: Complete risk assessment and FRIA if high-risk (Art. 27)
  4. Document: Create technical documentation (Annex IV) if high-risk
  5. Monitor: Establish ongoing monitoring and review schedule
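The five steps above can be tracked per system in the AI inventory. A minimal sketch follows, assuming hypothetical field and method names (`AISystemRecord`, `open_actions`) not mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One inventory entry covering the five-step process above (illustrative)."""
    name: str                           # Step 1: Identify (Art. 49 inventory)
    risk_level: str                     # Step 2: Classify — "prohibited"/"high"/"limited"/"minimal"
    fria_completed: bool = False        # Step 3: Assess (Art. 27, high-risk only)
    annex_iv_doc: Optional[str] = None  # Step 4: Document (high-risk only)
    next_review: Optional[date] = None  # Step 5: Monitor

    def open_actions(self) -> List[str]:
        """Return outstanding compliance steps for this system."""
        actions = []
        if self.risk_level == "high":
            if not self.fria_completed:
                actions.append("Complete FRIA (Art. 27)")
            if self.annex_iv_doc is None:
                actions.append("Create Annex IV technical documentation")
        if self.next_review is None:
            actions.append("Schedule monitoring review")
        return actions
```

A record with no open actions is not proof of compliance; it only signals that the documented steps have been logged.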

3.2 Classification Decision Tree

Is the AI system performing a prohibited practice (Art. 5)?
├── YES → PROHIBITED — Discontinue immediately
└── NO → Is it listed in Annex I or III?
    ├── YES → Does Art. 6(3) exception apply?
    │   ├── YES → LIMITED RISK
    │   └── NO → HIGH RISK — Full compliance required
    └── NO → Does it interact with humans or generate content?
        ├── YES → LIMITED RISK — Transparency obligations
        └── NO → MINIMAL RISK — No specific obligations
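The decision tree above is mechanical enough to encode directly, which helps keep classifications consistent across departments. A minimal sketch, assuming the four yes/no answers have already been established by the assessor:

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def classify(prohibited_practice: bool,
             in_annex_i_or_iii: bool,
             art_6_3_exception: bool,
             interacts_or_generates: bool) -> RiskLevel:
    """Mirror the classification decision tree above, top to bottom."""
    if prohibited_practice:                  # Art. 5 practice?
        return RiskLevel.PROHIBITED
    if in_annex_i_or_iii:                    # Annex I or III listing?
        if art_6_3_exception:                # Art. 6(3) carve-out?
            return RiskLevel.LIMITED
        return RiskLevel.HIGH
    if interacts_or_generates:               # human interaction / content generation?
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```

The boolean inputs still require human judgment; the function only guarantees that the same answers always yield the same classification.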

4. Acceptable Use

4.1 Permitted Uses

  • Document processing and classification
  • Customer service chatbots (with transparency disclosure)
  • Internal knowledge search and retrieval
  • Content generation (with human review)
  • Data analysis and reporting
  • Process automation

4.2 Prohibited Uses

  • Social scoring of employees or customers
  • Subliminal manipulation techniques
  • Real-time biometric identification without authorization
  • Emotion recognition in the workplace (except safety-critical roles with consent)
  • Automated decision-making without human oversight for high-impact decisions

4.3 Employee Responsibilities

All employees using AI systems must:

  • Complete AI literacy training (Art. 4)
  • Use only approved AI systems
  • Report any AI-related incidents
  • Not input sensitive/personal data into unapproved AI tools
  • Review AI outputs before sharing externally
  • Follow department-specific AI usage guidelines

5. Transparency Requirements (Art. 50)

5.1 User Notification

When an AI system interacts with a person, that person must be informed that they are interacting with AI. This includes:

  • Chatbots and virtual assistants
  • AI-generated email responses
  • Automated customer service
  • AI-powered recommendation systems

5.2 Content Marking

AI-generated content must be:

  • Identifiable as AI-generated when published externally
  • Reviewed by a human before external distribution
  • Stored with metadata indicating AI involvement
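The third requirement above can be met by wrapping stored content in a provenance record. A minimal sketch, with illustrative field names (`ai_generated`, `human_reviewed_by`) that are not prescribed by the Act:

```python
from datetime import datetime, timezone

def mark_ai_content(text: str, generator: str, reviewer: str) -> dict:
    """Attach provenance metadata to AI-generated content before storage."""
    return {
        "content": text,
        "ai_generated": True,              # machine-readable marking
        "generator": generator,            # which system produced it
        "human_reviewed_by": reviewer,     # reviewer required before external distribution
        "marked_at_utc": datetime.now(timezone.utc).isoformat(),
    }
```

Storing the reviewer's identity alongside the marking also gives the audit trail that section 7 (human oversight) expects.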

5.3 Template Notice

[TRANSPARENCY NOTICE]
This [content/interaction/response] was [generated/assisted] by an
artificial intelligence system. A human has reviewed this output.
For questions, contact: [contact details]
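The notice template above can be filled programmatically so the wording stays uniform across channels. A small sketch with a hypothetical helper:

```python
NOTICE_TEMPLATE = (
    "[TRANSPARENCY NOTICE]\n"
    "This {item} was {mode} by an artificial intelligence system. "
    "A human has reviewed this output. "
    "For questions, contact: {contact}"
)

def render_notice(item: str, mode: str, contact: str) -> str:
    """Fill the bracketed placeholders of the template notice above."""
    return NOTICE_TEMPLATE.format(item=item, mode=mode, contact=contact)
```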

6. Data Governance

6.1 Data Quality Requirements (Art. 10)

AI training and operational data must:

  • Be relevant and representative
  • Be free from errors to the extent possible
  • Consider potential biases
  • Be appropriate for the intended purpose
  • Comply with GDPR data minimization principles

6.2 Data Processing Records

Maintain records of:

  • Data sources used for AI training
  • Data preprocessing and cleaning procedures
  • Bias testing and mitigation results
  • Data retention and deletion schedules

7. Human Oversight (Art. 14)

7.1 Oversight Mechanisms

For high-risk AI systems:

  • Designated human overseer for each system
  • Clear escalation procedures
  • Ability to override or stop AI decisions
  • Regular review of AI outputs and decisions
  • Documentation of override decisions

7.2 Override Authority

| AI System | Overseer | Override Threshold |
| --- | --- | --- |
| [System 1] | [Name] | [When to override] |
| [System 2] | [Name] | [When to override] |

8. Incident Management

8.1 Incident Categories

| Category | Description | Response Time | Reporting |
| --- | --- | --- | --- |
| Critical | Safety/rights impact, prohibited practice detected | Immediate | Board + authority |
| High | Significant malfunction, bias detected | 4 hours | AI Compliance Officer |
| Medium | Performance degradation, minor errors | 24 hours | Technical Lead |
| Low | User complaints, feature requests | 72 hours | Department Head |
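The response-time targets and reporting lines above can be encoded once and reused by monitoring tooling, so every incident gets a computed deadline and recipient. A minimal sketch (the dictionary encoding is an assumption, not part of the policy):

```python
from datetime import datetime, timedelta

# Targets transcribed from the incident category table above
RESPONSE_TARGETS = {
    "critical": timedelta(0),        # immediate
    "high": timedelta(hours=4),
    "medium": timedelta(hours=24),
    "low": timedelta(hours=72),
}

REPORT_TO = {
    "critical": "AI Governance Board + authority",
    "high": "AI Compliance Officer",
    "medium": "Technical Lead",
    "low": "Department Head",
}

def triage(category: str, detected_at: datetime):
    """Return (response deadline, reporting recipient) for an incident."""
    return detected_at + RESPONSE_TARGETS[category], REPORT_TO[category]
```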

8.2 Incident Response Steps

  1. Detect: Identify the incident through monitoring or reports
  2. Contain: Isolate affected systems if necessary
  3. Assess: Determine severity and impact
  4. Report: Notify appropriate stakeholders per table above
  5. Remediate: Fix the issue and verify resolution
  6. Document: Record all actions taken
  7. Review: Post-incident analysis and policy updates

9. Vendor Management

9.1 AI Vendor Assessment Checklist

Before procuring any AI system, verify:

  • Vendor’s EU AI Act compliance status
  • Data processing location (EU vs non-EU)
  • Technical documentation availability
  • Transparency measures implemented
  • Incident response capabilities
  • Training and support offered
  • Contract terms include compliance obligations
  • Exit strategy and data portability

10. Training and Awareness (Art. 4)

10.1 Training Requirements

| Audience | Content | Frequency |
| --- | --- | --- |
| All employees | AI basics, acceptable use, risks | Annual |
| AI users | System-specific training, oversight | Before use + annual refresh |
| Technical staff | Technical documentation, monitoring | Quarterly |
| Management | Governance, risk assessment, compliance | Semi-annual |
| AI Compliance Officer | Full EU AI Act, updates, case law | Continuous |

11. Review and Updates

11.1 Review Schedule

| Review Type | Frequency | Owner |
| --- | --- | --- |
| Policy review | Annual | AI Governance Board |
| System inventory update | Quarterly | AI Compliance Officer |
| Risk assessment refresh | Semi-annual | Technical Lead |
| Training effectiveness | Annual | HR + AI Compliance Officer |
| Vendor compliance check | Annual | Procurement + Legal |

11.2 Change Management

All policy changes must be:

  • Approved by the AI Governance Board
  • Communicated to all affected employees
  • Reflected in training materials
  • Documented with version history

Document Control

| Field | Value |
| --- | --- |
| Version | 1.0 |
| Effective Date | _____________ |
| Next Review | _____________ |
| Approved By | _____________ |
| Classification | Internal |

Template provided by VORLUX AI | vorluxai.com. Based on the EU AI Act (Regulation 2024/1689). This template is for guidance only and does not constitute legal advice. Consult qualified legal counsel for your specific situation.


136 pages of free resources · 26 compliance templates · 22 certified devices