AI Policy Template
16-page governance framework covering organizational AI policy, roles, procedures, and compliance requirements under the EU AI Act.
This template includes both English and Spanish versions; the Spanish version ("Versión Española") follows the English text.
AI Policy Template — Corporate Governance Framework
EU AI Act Compliance | VORLUX AI | Based on Regulation 2024/1689
1. Purpose and Scope
1.1 Purpose
This policy establishes the governance framework for the development, deployment, and use of Artificial Intelligence (AI) systems within [COMPANY NAME]. It ensures compliance with the EU AI Act (Regulation 2024/1689) and alignment with organizational values.
1.2 Scope
This policy applies to:
- All AI systems developed, deployed, or used by [COMPANY NAME]
- All employees, contractors, and third parties interacting with AI systems
- All departments and business units
- Both internal and client-facing AI applications
1.3 Definitions
| Term | Definition |
|---|---|
| AI System | A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness and that infers, from input it receives, how to generate outputs such as predictions, content, recommendations, or decisions (Art. 3(1)) |
| Provider | Natural or legal person that develops or has an AI system developed and places it on the market or puts it into service under its own name or trademark (Art. 3(3)) |
| Deployer | Natural or legal person using an AI system under its authority (Art. 3(4)) |
| High-Risk AI | AI systems classified as high-risk under Art. 6: safety components of products covered by Annex I legislation, or systems falling under a use case listed in Annex III |
| GPAI | General-Purpose AI model trained with large amounts of data using self-supervision at scale |
2. Governance Structure
2.1 AI Governance Board
| Role | Responsibility | Person |
|---|---|---|
| AI Compliance Officer | Overall compliance with EU AI Act | _____________ |
| Data Protection Officer | GDPR alignment for AI systems | _____________ |
| Technical Lead | AI system architecture and security | _____________ |
| Legal Counsel | Regulatory interpretation and contracts | _____________ |
| Department Heads | Use case identification and risk assessment | _____________ |
2.2 Decision-Making Authority
| Decision Type | Authority Level | Approval Required |
|---|---|---|
| New AI system procurement | Department Head + AI Compliance Officer | Yes |
| High-risk AI deployment | AI Governance Board | Unanimous |
| AI system modification | Technical Lead + AI Compliance Officer | Yes |
| Incident escalation | AI Compliance Officer | Immediate |
| Policy exceptions | AI Governance Board | Majority |
3. AI System Classification
3.1 Risk Assessment Process
For every AI system, complete the following:
- Identify: Document the AI system in the inventory (Art. 49)
- Classify: Determine risk level (prohibited/high/limited/minimal)
- Assess: Complete a risk assessment and, for high-risk systems, a Fundamental Rights Impact Assessment (FRIA) (Art. 27)
- Document: Create technical documentation (Annex IV) if high-risk
- Monitor: Establish ongoing monitoring and review schedule
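As an illustration, the inventory entry produced by the steps above could be modeled as a minimal record. Field names here are this template's suggestion, not terms prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative inventory record for tracking Art. 49 registration duties.
# Field names are suggestions from this template, not mandated by the Act.
@dataclass
class AISystemRecord:
    name: str
    owner_department: str
    risk_level: str                 # "prohibited" | "high" | "limited" | "minimal"
    fria_completed: bool = False    # Art. 27 fundamental rights impact assessment
    annex_iv_docs: bool = False     # technical documentation for high-risk systems
    next_review: Optional[date] = None

    def compliant(self) -> bool:
        """High-risk systems need both a FRIA and Annex IV documentation;
        prohibited systems can never be compliant."""
        if self.risk_level == "high":
            return self.fria_completed and self.annex_iv_docs
        return self.risk_level != "prohibited"
```

A governance tool could iterate such records quarterly to flag systems whose `next_review` has lapsed or whose high-risk documentation is incomplete.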
3.2 Classification Decision Tree
Is the AI system performing a prohibited practice (Art. 5)?
├── YES → PROHIBITED — Discontinue immediately
└── NO → Is it listed in Annex I or III?
    ├── YES → Does the Art. 6(3) exception apply?
    │   ├── YES → LIMITED RISK
    │   └── NO → HIGH RISK — Full compliance required
    └── NO → Does it interact with humans or generate content?
        ├── YES → LIMITED RISK — Transparency obligations
        └── NO → MINIMAL RISK — No specific obligations
4. Acceptable Use
4.1 Permitted Uses
- Document processing and classification
- Customer service chatbots (with transparency disclosure)
- Internal knowledge search and retrieval
- Content generation (with human review)
- Data analysis and reporting
- Process automation
4.2 Prohibited Uses
- Social scoring of employees or customers
- Subliminal manipulation techniques
- Real-time biometric identification without authorization
- Emotion recognition in the workplace (except safety-critical roles, with consent)
- Automated decision-making without human oversight for high-impact decisions
4.3 Employee Responsibilities
All employees using AI systems must:
- Complete AI literacy training (Art. 4)
- Use only approved AI systems
- Report any AI-related incidents
- Not input sensitive/personal data into unapproved AI tools
- Review AI outputs before sharing externally
- Follow department-specific AI usage guidelines
5. Transparency Requirements (Art. 50)
5.1 User Notification
When an AI system interacts with a person, they must be informed that they are interacting with AI. This includes:
- Chatbots and virtual assistants
- AI-generated email responses
- Automated customer service
- AI-powered recommendation systems
5.2 Content Marking
AI-generated content must be:
- Identifiable as AI-generated when published externally
- Reviewed by a human before external distribution
- Stored with metadata indicating AI involvement
5.3 Template Notice
[TRANSPARENCY NOTICE]
This [content/interaction/response] was [generated/assisted] by an
artificial intelligence system. A human has reviewed this output.
For questions, contact: [contact details]
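For pipelines that emit content programmatically, the section 5.2 marking requirements and the section 5.3 notice can be combined in one small helper. The function name, fields, and contact address below are illustrative, not prescribed by the Act:

```python
# Illustrative helper combining the 5.3 transparency notice with the
# 5.2 metadata requirements (AI involvement + human review on record).
NOTICE = ("This {kind} was {mode} by an artificial intelligence system. "
          "A human has reviewed this output. For questions, contact: {contact}")

def mark_ai_content(text: str, kind: str, mode: str,
                    contact: str, reviewed_by: str) -> dict:
    """Wrap outgoing AI-assisted content with its notice and metadata."""
    return {
        "body": text,
        "notice": NOTICE.format(kind=kind, mode=mode, contact=contact),
        "metadata": {"ai_generated": True, "human_reviewed_by": reviewed_by},
    }

msg = mark_ai_content("Thank you for your enquiry...", kind="response",
                      mode="generated", contact="compliance@example.com",
                      reviewed_by="J. Doe")
```

Storing the returned dictionary (rather than the bare text) satisfies the "stored with metadata" requirement in 5.2.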
6. Data Governance
6.1 Data Quality Requirements (Art. 10)
AI training and operational data must:
- Be relevant and representative
- Be free from errors to the extent possible
- Consider potential biases
- Be appropriate for the intended purpose
- Comply with GDPR data minimization principles
6.2 Data Processing Records
Maintain records of:
- Data sources used for AI training
- Data preprocessing and cleaning procedures
- Bias testing and mitigation results
- Data retention and deletion schedules
7. Human Oversight (Art. 14)
7.1 Oversight Mechanisms
For high-risk AI systems:
- Designated human overseer for each system
- Clear escalation procedures
- Ability to override or stop AI decisions
- Regular review of AI outputs and decisions
- Documentation of override decisions
7.2 Override Authority
| AI System | Overseer | Override Threshold |
|---|---|---|
| [System 1] | [Name] | [When to override] |
| [System 2] | [Name] | [When to override] |
8. Incident Management
8.1 Incident Categories
| Category | Description | Response Time | Reporting |
|---|---|---|---|
| Critical | Safety/rights impact, prohibited practice detected | Immediate | Board + authority |
| High | Significant malfunction, bias detected | 4 hours | AI Compliance Officer |
| Medium | Performance degradation, minor errors | 24 hours | Technical Lead |
| Low | User complaints, feature requests | 72 hours | Department Head |
8.2 Incident Response Steps
- Detect: Identify the incident through monitoring or reports
- Contain: Isolate affected systems if necessary
- Assess: Determine severity and impact
- Report: Notify appropriate stakeholders per table above
- Remediate: Fix the issue and verify resolution
- Document: Record all actions taken
- Review: Post-incident analysis and policy updates
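As a sketch, the response targets from the section 8.1 table can be encoded so that tooling computes deadlines automatically at the Detect step. The matrix mirrors the table; names and the reporting strings are illustrative:

```python
from datetime import datetime, timedelta

# Response targets from the section 8.1 table (hours; 0 = immediate).
INCIDENT_MATRIX = {
    "critical": (0,  "AI Governance Board + market surveillance authority"),
    "high":     (4,  "AI Compliance Officer"),
    "medium":   (24, "Technical Lead"),
    "low":      (72, "Department Head"),
}

def triage(category: str, detected_at: datetime) -> tuple:
    """Return (response deadline, reporting line) for an incident."""
    hours, report_to = INCIDENT_MATRIX[category.lower()]
    return detected_at + timedelta(hours=hours), report_to
```

A ticketing integration could call `triage` on intake and escalate automatically when the returned deadline passes without resolution.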
9. Vendor Management
9.1 AI Vendor Assessment Checklist
Before procuring any AI system, verify:
- Vendor’s EU AI Act compliance status
- Data processing location (EU vs non-EU)
- Technical documentation availability
- Transparency measures implemented
- Incident response capabilities
- Training and support offered
- Contract terms include compliance obligations
- Exit strategy and data portability
10. Training and Awareness (Art. 4)
10.1 Training Requirements
| Audience | Content | Frequency |
|---|---|---|
| All employees | AI basics, acceptable use, risks | Annual |
| AI users | System-specific training, oversight | Before use + annual refresh |
| Technical staff | Technical documentation, monitoring | Quarterly |
| Management | Governance, risk assessment, compliance | Semi-annual |
| AI Compliance Officer | Full EU AI Act, updates, case law | Continuous |
11. Review and Updates
11.1 Review Schedule
| Review Type | Frequency | Owner |
|---|---|---|
| Policy review | Annual | AI Governance Board |
| System inventory update | Quarterly | AI Compliance Officer |
| Risk assessment refresh | Semi-annual | Technical Lead |
| Training effectiveness | Annual | HR + AI Compliance Officer |
| Vendor compliance check | Annual | Procurement + Legal |
11.2 Change Management
All policy changes must be:
- Approved by the AI Governance Board
- Communicated to all affected employees
- Reflected in training materials
- Documented with version history
Document Control
| Field | Value |
|---|---|
| Version | 1.0 |
| Effective Date | _____________ |
| Next Review | _____________ |
| Approved By | _____________ |
| Classification | Internal |
Template provided by VORLUX AI | vorluxai.com | Based on the EU AI Act (Regulation 2024/1689). This template is for guidance only and does not constitute legal advice. Consult qualified legal counsel for your specific situation.
Versión Española
Plantilla de Política de Inteligencia Artificial — Marco de Gobernanza Corporativa
Cumplimiento del Reglamento de IA de la UE | VORLUX AI | Basado en el Reglamento 2024/1689
1. Propósito y Ámbito
1.1 Propósito
Esta política establece el marco de gobernanza para el desarrollo, la implementación y el uso de sistemas de Inteligencia Artificial (IA) dentro de [NOMBRE DE LA COMPAÑÍA]. Garantiza el cumplimiento del Reglamento de IA de la UE (Reglamento 2024/1689) y la alineación con los valores organizacionales.
1.2 Ámbito
Esta política se aplica a:
- Todos los sistemas de IA desarrollados, implementados o utilizados por [NOMBRE DE LA COMPAÑÍA]
- Todos los empleados, contratistas y terceros que interactúan con sistemas de IA
- Todos los departamentos y unidades de negocio
- Tanto las aplicaciones de IA internas como las orientadas al cliente
1.3 Definiciones
| Término | Definición |
|---|---|
| Sistema de IA | Un sistema basado en máquinas diseñado para operar con niveles variables de autonomía, que puede exhibir adaptabilidad y que infiere, a partir de la entrada que recibe, cómo generar salidas como predicciones, contenido, recomendaciones o decisiones (Art. 3(1)) |
| Proveedor | Persona física o jurídica que desarrolla un sistema de IA, o que lo manda desarrollar, y lo introduce en el mercado o lo pone en servicio con su propio nombre o marca (Art. 3(3)) |
| Desplegador | Persona física o jurídica que utiliza un sistema de IA bajo su propia autoridad (Art. 3(4)) |
| IA de alto riesgo | Sistemas de IA clasificados como de alto riesgo según el Art. 6: componentes de seguridad de productos cubiertos por la legislación del Anexo I, o sistemas comprendidos en un caso de uso del Anexo III |
| GPAI | Modelo de IA de propósito general entrenado con grandes cantidades de datos mediante autosupervisión a gran escala |
2. Estructura de Gobernanza
2.1 Junta de Gobernanza de IA
| Rol | Responsabilidad | Persona |
|---|---|---|
| Oficial de Cumplimiento de IA | Cumplimiento general del Reglamento de IA de la UE | _____________ |
| Delegado de Protección de Datos | Alineación con el RGPD para sistemas de IA | _____________ |
| Líder Técnico | Arquitectura y seguridad del sistema de IA | _____________ |
| Asesor Jurídico | Interpretación regulatoria y contratos | _____________ |
| Jefes de Departamento | Identificación de casos de uso y evaluación de riesgos | _____________ |
2.2 Autoridad para la Toma de Decisiones
| Tipo de decisión | Nivel de autoridad | Aprobación requerida |
|---|---|---|
| Adquisición de nuevo sistema de IA | Jefe de Departamento + Oficial de Cumplimiento de IA | Sí |
| Implementación de IA con alto riesgo | Junta de Gobernanza de IA | Unánime |
| Modificación del sistema de IA | Líder Técnico + Oficial de Cumplimiento de IA | Sí |
| Escalada de incidentes | Oficial de Cumplimiento de IA | Inmediato |
| Excepciones a la política | Junta de Gobernanza de IA | Mayoritaria |
3. Clasificación del Sistema de IA
3.1 Proceso de Evaluación de Riesgos
Para cada sistema de IA, complete lo siguiente:
- Identificar: Documentar el sistema de IA en el inventario (Art. 49)
- Clasificar: Determinar el nivel de riesgo (prohibido/alto/limitado/mínimo)
- Evaluar: Realizar la evaluación de riesgos y, si es de alto riesgo, la FRIA (Art. 27)
- Documentar: Crear la documentación técnica (Anexo IV) si es de alto riesgo
- Monitorear: Establecer un calendario de monitoreo y revisión continuos
3.2 Árbol de Decisiones de Clasificación
¿El sistema de IA realiza una práctica prohibida (Art. 5)?
├── SÍ → PROHIBIDO — Interrumpir inmediatamente
└── NO → ¿Está listado en el Anexo I o III?
    ├── SÍ → ¿Aplica la excepción del Art. 6(3)?
    │   ├── SÍ → RIESGO LIMITADO
    │   └── NO → RIESGO ALTO — Cumplimiento completo requerido
    └── NO → ¿Interactúa con humanos o genera contenido?
        ├── SÍ → RIESGO LIMITADO — Obligaciones de transparencia
        └── NO → RIESGO MÍNIMO — Ninguna obligación específica
4. Uso Aceptable
4.1 Usos Permitidos
- Procesamiento y clasificación de documentos
- Chatbots de servicio al cliente (con declaración de transparencia)
- Búsqueda y recuperación de conocimiento interno
- Generación de contenido (con revisión humana)
- Análisis de datos e informes
- Automatización de procesos
4.2 Usos Prohibidos
- Puntuación social de empleados o clientes
- Técnicas de manipulación subliminal
- Identificación biométrica en tiempo real sin autorización
- Reconocimiento de emociones en el lugar de trabajo (excepto en funciones críticas para la seguridad, con consentimiento)
- Toma de decisiones automatizada sin supervisión humana en decisiones de alto impacto