Acceptable AI Use Policy for Employees
A comprehensive policy governing employee use of AI tools, covering approved applications, prohibited uses, data handling, content review requirements, and reporting obligations — aligned with EU AI Act requirements.
This template includes both English and Spanish versions. Scroll down to find "Versión Española".
Disclaimer: This is guidance only, not legal advice. Consult qualified legal counsel before adopting this policy. | Este documento es solo orientativo, no constituye asesoramiento jurídico.
Acceptable AI Use Policy for Employees
Document Reference: [ORG-POL-AI-001] | Version: 1.0 | Effective Date: [DATE] | Review Date: [DATE + 12 months] | Owner: [AI Officer / CISO / Legal — delete as appropriate] | Approved by: [Name, Title] | Classification: Internal
1. Purpose and Scope
1.1 Purpose
This Acceptable AI Use Policy (“Policy”) establishes the rules and standards governing employee use of artificial intelligence (AI) tools within [ORGANISATION NAME] (“the Organisation”). It is designed to:
- Ensure AI tools are used responsibly, ethically, and in compliance with applicable law
- Protect the Organisation’s confidential information and data
- Satisfy obligations under the EU AI Act (Regulation (EU) 2024/1689), GDPR, and applicable Spanish/EU employment law
- Prevent reputational, legal, and operational harm
- Foster a culture of responsible AI use
1.2 Scope
This Policy applies to:
| Category | Covered? |
|---|---|
| All permanent employees | Yes |
| Contractors and freelancers with system access | Yes |
| Temporary staff | Yes |
| Board members and executives | Yes |
| Third-party service providers using Org systems | Yes |
| Personal devices used for work (BYOD) | Yes — when accessing work systems or handling work data |
This Policy covers all AI tools, whether accessed via the Organisation’s systems, personal devices, or third-party platforms, when used in connection with work activities.
2. Definitions
| Term | Definition |
|---|---|
| AI Tool | Any software application using machine learning, large language models, generative AI, or similar technologies, including chatbots, code assistants, image generators, translation tools, and analytics platforms |
| Approved AI Tool | An AI tool that has been formally assessed, approved, and listed in Annex A of this Policy |
| Shadow AI | Any AI tool used for work purposes without organisational approval |
| Confidential Information | Trade secrets, client data, employee data, financial data, strategic plans, and any data classified as Internal, Confidential, or Restricted |
| Personal Data | Any information relating to an identified or identifiable natural person, as defined under GDPR Art. 4(1) |
| AI-Generated Content | Any text, image, audio, video, code, or other output produced by an AI tool |
| Human Review | Review and verification of AI-generated content by a qualified employee before use |
3. Approved AI Tools
3.1 Current Approved Tool Register
Only tools listed in the Approved AI Tool Register (Annex A) may be used for work purposes. The register is maintained by [AI Officer/IT Department] and reviewed quarterly.
Current approved tools (as of effective date):
| Tool | Use Cases | Data Classification Permitted | Conditions |
|---|---|---|---|
| [Tool 1 — e.g., Microsoft Copilot (M365)] | Drafting, summarisation, analysis | Internal, Confidential | Must use enterprise tenant; no client PII |
| [Tool 2 — e.g., GitHub Copilot] | Code assistance, code review | Internal | No proprietary algorithms; no credentials in prompts |
| [Tool 3 — e.g., approved image generator] | Marketing visuals, presentations | Public, Internal | Human review required before publication |
| [Tool 4 — e.g., approved translation tool] | Document translation | Internal | Not for legally binding documents without legal review |
Specific rules for individual tools are set out in Section 4 of this Policy.
3.2 Requesting Approval for New Tools
Employees wishing to use an AI tool not listed in Annex A must submit an AI Tool Request Form (Annex B) to [IT/AI Officer] before use. The approval process takes a maximum of [15] business days. Using a tool prior to approval is a breach of this Policy.
3.3 Approval Criteria
Tools are assessed against:
- Data processing location and GDPR compliance
- Security certifications (ISO 27001, SOC 2 Type II, etc.)
- Terms of service and data retention/training practices
- Risk classification under the EU AI Act
- Business justification and available alternatives
4. Tool-Specific Rules
4.1 ChatGPT and OpenAI Products
Approved version(s): [e.g., ChatGPT Enterprise only / ChatGPT Team / Not approved — delete as applicable]
Permitted uses:
- Drafting initial text for internal documents (must be reviewed and edited)
- Brainstorming and idea generation
- Summarising public or internal non-confidential information
- Learning and skills development
Prohibited uses:
- Entering client names, contact details, or any client-specific information
- Uploading or pasting contracts, financial statements, or legal documents
- Entering employee personal data (names, salaries, performance data, health information)
- Entering proprietary source code, trade secrets, or competitive intelligence
- Using the free consumer version (ChatGPT.com without an enterprise agreement) for any work purpose
Data handling rules:
- If using an enterprise/API version: confirm “Do not train on my data” is enabled in account settings before use
- Never use the consumer (free) version for work-related tasks
- Treat all ChatGPT outputs as drafts requiring verification — do not quote statistics, legal citations, or facts without independent verification
Content review requirement: All ChatGPT-generated content used externally (client-facing, published, or submitted to regulators) must be reviewed and approved by [Department Head / designated reviewer] before use.
4.2 GitHub Copilot and AI Code Assistants
Approved version(s): [e.g., GitHub Copilot Business (enterprise-configured)]
Permitted uses:
- Generating code suggestions and autocomplete
- Code explanation and documentation
- Identifying potential bugs in code under review
- Writing unit tests
Prohibited uses:
- Entering authentication credentials, API keys, passwords, or secrets in any prompt or file that will be shared with the AI
- Using Copilot with repositories containing client data or personal data unless the repository is configured to exclude AI training
- Accepting code suggestions without review for security-critical functions
- Using Copilot to generate code that circumvents security controls
Security requirements:
- Add all files containing secrets to .gitignore before using Copilot
- Review all AI-generated code for security vulnerabilities before committing
- Do not accept Copilot suggestions in files containing environment variables or configuration secrets
- Run security scanning (e.g., [tool name]) on all code containing significant AI-generated portions
Content review requirement: All AI-generated code included in production systems must be reviewed by a qualified developer through the standard code review process, with the AI assistance noted in the pull request description.
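The secret-hygiene rules above can be reinforced with a lightweight automated check run before commits. The sketch below is purely illustrative: the regex patterns and the find_secrets helper are assumptions of this example, not an approved organisational tool, and real deployments should prefer a maintained secret scanner.

```python
import re

# Illustrative patterns only (an assumption of this sketch, not Policy text).
# Real deployments should use a maintained scanner rather than ad-hoc regexes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    re.compile(r"(?i)(?:api[_-]?key|password|secret)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def find_secrets(text: str) -> list[str]:
    """Return secret-like substrings found in text (empty list if none)."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits
```

A pre-commit hook could call find_secrets on each staged file and block the commit when the returned list is non-empty.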
4.3 AI Image Generators
Approved tool(s): [e.g., Adobe Firefly (enterprise), Canva AI (enterprise)]
Permitted uses:
- Creating marketing and presentation visuals
- Generating placeholder content for internal use
- Exploring design concepts
Prohibited uses:
- Generating images of real, identifiable individuals without their written consent
- Generating images that could be defamatory, misleading, or in breach of third-party intellectual property rights
- Creating “deepfake” or synthetic media that misrepresents real events or people
- Generating images for any official regulatory submission, legal document, or financial report
- Creating images depicting violence, nudity, or other harmful content
Intellectual property rules:
- Images generated using the Organisation’s approved enterprise tools are generally permitted for commercial use — check the specific tool’s commercial use terms before publication
- Do not use images that closely replicate a specific artist’s style if the artist has objected to their work being used for AI training
- Retain records of prompts used for images in official publications
Content review requirement: All AI-generated images used in external communications, marketing materials, or official documents must be reviewed by [Marketing Manager / Brand Owner] and confirmed to meet brand standards and legal requirements.
5. Prohibited Uses — All AI Tools
The following uses are prohibited regardless of which AI tool is used:
5.1 Absolute Prohibitions
| # | Prohibited Activity | Reason |
|---|---|---|
| P1 | Processing special category personal data (health, religion, political views, biometrics, sexual orientation) through any AI tool | GDPR Art. 9; high risk of harm |
| P2 | Using AI to make or substantially influence decisions with significant legal or personal effects on individuals (hiring, dismissal, credit, benefits) without human oversight | EU AI Act Art. 14; GDPR Art. 22 |
| P3 | Using AI to generate content that is intentionally deceptive or designed to manipulate individuals against their interests | EU AI Act Art. 5 (prohibited practices) |
| P4 | Using AI for any form of subliminal manipulation, social scoring, or real-time biometric surveillance | EU AI Act Art. 5 (prohibited practices) |
| P5 | Using AI to produce content that infringes copyright, trade marks, or other intellectual property rights | IP law |
| P6 | Using AI to produce content that is defamatory, discriminatory, or harassing | Employment law; civil liability |
| P7 | Misrepresenting AI-generated content as human-created in contexts where this distinction is material | EU AI Act Art. 50; professional standards |
| P8 | Bypassing or attempting to bypass the organisation’s AI approval and oversight processes | Policy breach |
| P9 | Using personal AI tool accounts (subscriptions paid by the employee) to process organisational data | Data security |
| P10 | Using AI tools to exfiltrate, leak, or improperly transfer organisational data | Data protection; potential criminal liability |
5.2 Restricted Uses (Require Prior Approval)
| # | Activity | Approval Required From |
|---|---|---|
| R1 | Using AI in any system that interfaces directly with customers or the public | AI Officer + Legal |
| R2 | Using AI to analyse employee behaviour, performance, or communications | HR Director + DPO + Legal |
| R3 | Using AI for any medical, legal, or financial advice provided to third parties | Relevant professional + Legal |
| R4 | Using AI to process client personal data | DPO + AI Officer |
| R5 | Integrating AI into automated decision-making workflows | AI Officer + Legal + DPO |
6. Data Handling Requirements
6.1 Data Classification Before Use
Before using any AI tool, employees must classify the data they intend to input:
| Classification | Definition | Examples | Permitted in AI? |
|---|---|---|---|
| Public | Information already in the public domain | Published reports, press releases | Yes — any approved tool |
| Internal | Not public but not confidential | Internal memos, meeting notes | Yes — approved tools only |
| Confidential | Business-sensitive; restricted circulation | Client contracts, financial forecasts, HR data | Only if tool is specifically approved for this level |
| Restricted | Highest sensitivity; strict need-to-know | Personal data, legal privilege, M&A information | Do not enter into any AI tool without explicit approval from DPO/Legal |
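As a worked illustration of the table above, a simple lookup can gate tool use by classification before any data leaves the Organisation. The tool identifiers and permission map below are hypothetical placeholders; Annex A remains the authoritative register.

```python
# Hypothetical permission map (placeholder names, not the real Annex A register).
PERMITTED: dict[str, set[str]] = {
    "public":       {"copilot-m365", "image-gen", "translation"},
    "internal":     {"copilot-m365", "github-copilot", "image-gen", "translation"},
    "confidential": {"copilot-m365"},  # only tools approved for this level
    "restricted":   set(),             # never without explicit DPO/Legal approval
}

def may_use(tool: str, classification: str) -> bool:
    """Return True only if the tool is approved for this data classification."""
    return tool in PERMITTED.get(classification.lower(), set())
```

With this map, may_use("github-copilot", "Confidential") returns False, matching the table: Confidential data may only go to tools explicitly approved for that level.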
6.2 Personal Data Rules
- Do not enter personal data into any AI tool unless the tool has been approved for personal data processing and a Data Processing Agreement (DPA) is in place with the tool provider
- This includes: names, email addresses, phone numbers, IP addresses, location data, job titles combined with names, and any information that could identify a specific individual
- Anonymise or pseudonymise data before AI processing wherever possible
- If you are unsure whether data is personal data, treat it as personal data and seek guidance from the DPO
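Where approved processing still involves identifiers, the pseudonymisation step above can be sketched with a keyed hash, so that the same person always maps to the same token but the token cannot be reversed without the key. The key name and token format are assumptions of this example; key management, and whether keyed pseudonymisation is adequate for a given dataset, are questions for the DPO.

```python
import hashlib
import hmac

# Placeholder key (an assumption of this sketch). In practice the key lives in
# a secrets manager, never alongside the data and never inside an AI prompt.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Map a direct identifier to a stable, non-reversible token."""
    digest = hmac.new(PSEUDONYM_KEY, identifier.strip().lower().encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return f"person-{digest[:12]}"
```

Note that under GDPR, pseudonymised data remains personal data for as long as the key exists, so outputs built from these tokens must still be handled per Section 6.4.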
6.3 Minimisation Principle
Only provide AI tools with the minimum information necessary to complete the task. Do not paste entire documents when a summary or excerpt would suffice.
6.4 Output Data
AI-generated outputs that contain or could be derived from personal data must be handled with the same care as the input data. Do not share AI outputs more widely than the input data would have been shared.
7. Content Review Requirements
7.1 Review Tiers
All AI-generated content must be reviewed before use according to the following tiers:
| Tier | Content Type | Minimum Review Standard |
|---|---|---|
| Tier 1 — Internal Low Risk | Internal emails, notes, non-binding communications | Employee self-review for accuracy, tone, and appropriateness |
| Tier 2 — Internal High Risk | Board reports, HR communications, financial analysis | Line manager or department head review |
| Tier 3 — External Standard | Client-facing documents, website content, marketing materials | Department head + Comms/Marketing sign-off |
| Tier 4 — External High Stakes | Legal documents, regulatory submissions, press releases, financial reports | Legal / Compliance + relevant C-suite sign-off |
7.2 Factual Verification Obligation
Employees are responsible for verifying the factual accuracy of all AI-generated content before use. AI tools frequently produce:
- Incorrect statistics and figures
- Fabricated citations and references (“hallucinations”)
- Outdated information
- Plausible-sounding but incorrect legal or technical assertions
Verification checklist for AI-generated factual claims:
- All statistics traced to an original, verifiable source
- All legal citations checked against current legislation
- All named individuals or organisations confirmed to exist and details verified
- All dates and timelines confirmed as accurate
- Content checked for internal consistency
7.3 Transparency Obligations
| Situation | Required Disclosure |
|---|---|
| Submitting AI-generated content to a regulator or court | Disclose AI assistance to legal counsel for advice on disclosure requirements |
| Publishing AI-generated content on the website | Include AI-assisted disclaimer if the content is substantially AI-generated |
| Providing AI-generated advice to a client | Disclose AI assistance and confirm human professional review |
| Responding to a procurement/tender | Check tender instructions — many public bodies require declaration of AI use |
8. Reporting Obligations
8.1 What Must Be Reported
Employees must report the following to [AI Officer / IT Security / DPO — as applicable]:
| Event | Report To | Timeframe |
|---|---|---|
| Discovery of a colleague using an unapproved AI tool for work | Line Manager or AI Officer | Within 5 business days |
| Accidental input of personal data or confidential information into an AI tool | DPO + Line Manager | Within 24 hours |
| Suspected data breach via an AI tool | DPO + IT Security | Immediately (within 1 hour) |
| AI tool producing discriminatory, harmful, or illegal content | AI Officer + Legal | Within 24 hours |
| Discovering an AI tool has been used to make a significant decision affecting an individual without human review | HR Director + Legal + DPO | Within 24 hours |
| Any incident where AI use has caused or may cause harm to a third party | Legal + AI Officer | Immediately |
8.2 How to Report
Reports should be made via:
- Primary: [email address — e.g., ai-incidents@organisation.com]
- Secondary: Direct communication to [AI Officer name/role]
- Urgent incidents: [Phone number / IT Security hotline]
8.3 Whistleblower Protection
Employees who report AI-related concerns in good faith are protected under the Organisation’s whistleblower policy and applicable EU law (Directive (EU) 2019/1937). Retaliation against reporters is a disciplinary offence.
9. Responsibilities
| Role | Responsibilities |
|---|---|
| All Employees | Comply with this Policy; complete mandatory AI literacy training; report incidents |
| Line Managers | Ensure team compliance; review Tier 2 content; support reporting culture |
| AI Officer | Maintain approved tool register; manage approval requests; oversee incident response; report to board |
| DPO | Advise on personal data processing; review DPIA for AI tools; handle data protection incidents |
| IT/Security | Implement technical controls; manage tool access; monitor for shadow AI; conduct security assessments |
| Legal | Review high-stakes content; advise on compliance; manage external disclosures |
| HR | Manage policy communications; handle employee relations aspects; oversee training |
10. Training Requirements
| Role | Training Required | Frequency |
|---|---|---|
| All employees | AI Literacy Awareness (minimum 1 hour) | At onboarding + annual refresh |
| Employees regularly using approved AI tools | Role-specific AI tool training | At tool adoption + when tool updates materially |
| Managers | AI governance and oversight responsibilities | Annual |
| AI Officer, DPO, Legal, CISO | Advanced EU AI Act compliance training | Annual + when regulation updates |
Training completion is recorded in [HR System name]. Failure to complete mandatory training within [30 days] of due date is a disciplinary matter.
11. Consequences of Breach
Breaches of this Policy will be investigated and may result in disciplinary action up to and including dismissal, in accordance with the Organisation’s disciplinary procedure. Serious breaches (e.g., data breaches involving personal data, use of prohibited AI practices) may also result in:
- Notification to supervisory authorities (e.g., the AEPD)
- Civil or criminal liability for the individual
- Regulatory fines for the Organisation
12. Policy Review
This Policy will be reviewed:
- At least annually
- Following any significant change in applicable law or regulation
- Following a material AI-related incident
- When new AI tools are approved that require material policy changes
Annex A — Approved AI Tool Register
To be maintained by [AI Officer/IT Department] — updated quarterly
| Tool Name | Provider | Version/Tier | Approved Uses | Data Classes Permitted | Date Approved | Next Review |
|---|---|---|---|---|---|---|
Annex B — AI Tool Request Form
Requestor Name:
Department:
Date:
Tool Name:
Provider:
URL/Access Method:
Proposed Use Case:
Data Classification of Data to be Processed:
Business Justification:
Have you reviewed the tool’s privacy policy and terms of service? Yes / No
Does the provider offer a Data Processing Agreement? Yes / No / Unknown
Urgency:
Submit to: [email] | Response within: [15] business days
Annex C — Politica de Uso Aceptable de IA (Resumen en Español)
Versión resumida en español para empleados
Herramientas aprobadas: Solo se pueden utilizar las herramientas de IA que figuran en el Registro de Herramientas Aprobadas (Anexo A). Para solicitar la aprobación de una nueva herramienta, completa el Formulario de Solicitud (Anexo B).
Reglas clave para ChatGPT:
- No introduzcas datos de clientes, datos personales ni información confidencial
- Usa únicamente la versión empresarial (no la versión gratuita en ChatGPT.com) para trabajo
- Verifica siempre la exactitud del contenido generado antes de usarlo
Reglas clave para GitHub Copilot:
- No incluyas credenciales, claves API ni contraseñas en los archivos que usa Copilot
- Revisa todo el código generado por IA antes de incluirlo en producción
Reglas clave para generadores de imágenes:
- No generes imágenes de personas reales identificables sin su consentimiento
- No generes contenido engañoso, difamatorio ni que infrinja derechos de propiedad intelectual
Usos absolutamente prohibidos:
- Procesar datos de salud, religiosos, políticos, biométricos o de orientación sexual mediante IA
- Usar IA para tomar decisiones que afecten significativamente a personas sin supervisión humana
- Usar IA para manipular o engañar a individuos
- Utilizar herramientas de IA no aprobadas para trabajo
Cómo notificar un incidente: Envía un correo a [ai-incidents@organisation.com] o contacta directamente al Responsable de IA.
Los empleados que incumplan esta política pueden estar sujetos a medidas disciplinarias.
Document Footer: Template provided by VORLUX AI | vorluxai.com | This is guidance only, not legal advice. | Version 1.0 | For EU AI Act compliance use | Last updated: 2026-04-05
Versión Española
Política de Uso Aceptable de IA para Empleados
Referencia: [ORG-POL-AI-001] | Versión: 1.0 | Fecha de vigencia: [FECHA] | Fecha de revisión: [FECHA + 12 meses] | Responsable: [Responsable de IA / CISO / Jurídico] | Aprobado por: [Nombre, Cargo] | Clasificación: Interno
1. Objeto y Ámbito
1.1 Objeto
Esta Política establece las normas que regulan el uso de herramientas de inteligencia artificial (IA) por parte de los empleados de [NOMBRE DE LA ORGANIZACIÓN]. Su finalidad es:
- Garantizar un uso responsable, ético y conforme a la legislación vigente
- Proteger la información confidencial y los datos de la organización
- Cumplir con el EU AI Act (Reglamento (UE) 2024/1689), el GDPR y la legislación laboral española/europea aplicable
- Prevenir daños reputacionales, legales y operativos
- Fomentar una cultura de uso responsable de la IA
1.2 Ámbito de aplicación
| Categoría | ¿Incluido? |
|---|---|
| Empleados permanentes | Sí |
| Contratistas y freelancers con acceso a sistemas | Sí |
| Personal temporal | Sí |
| Miembros del consejo y directivos | Sí |
| Proveedores externos que usen sistemas de la organización | Sí |
| Dispositivos personales utilizados para el trabajo (BYOD) | Sí, al acceder a sistemas o datos corporativos |
Cubre todas las herramientas de IA utilizadas en conexión con actividades laborales, independientemente del dispositivo o plataforma.
2. Definiciones
| Término | Definición |
|---|---|
| Herramienta de IA | Cualquier software que utilice machine learning, LLMs, IA generativa o tecnologías similares (chatbots, asistentes de código, generadores de imágenes, traductores, plataformas de análisis) |
| Herramienta aprobada | Herramienta evaluada, aprobada y registrada en el Anexo A |
| Shadow AI | Cualquier herramienta de IA utilizada sin aprobación organizativa |
| Información confidencial | Secretos comerciales, datos de clientes, datos de empleados, datos financieros, planes estratégicos y cualquier dato clasificado como Interno, Confidencial o Restringido |
| Datos personales | Toda información relativa a una persona física identificada o identificable, según GDPR Art. 4(1) |
| Contenido generado por IA | Todo texto, imagen, audio, video, código u otro resultado producido por una herramienta de IA |
| Revisión humana | Verificación del contenido generado por IA por un empleado cualificado antes de su uso |
3. Herramientas de IA Aprobadas
3.1 Registro actual
Solo pueden utilizarse las herramientas del Registro de Herramientas Aprobadas (Anexo A), mantenido por [Responsable de IA/Departamento de TI] y revisado trimestralmente.
| Herramienta | Casos de uso | Clasificación de datos permitida | Condiciones |
|---|---|---|---|
| [Herramienta 1 — ej. Microsoft Copilot (M365)] | Redacción, resumen, análisis | Interno, Confidencial | Usar tenant empresarial; sin PII de clientes |
| [Herramienta 2 — ej. GitHub Copilot] | Asistencia y revisión de código | Interno | Sin algoritmos propietarios; sin credenciales en prompts |
| [Herramienta 3 — ej. generador de imágenes aprobado] | Visuales de marketing, presentaciones | Público, Interno | Revisión humana antes de publicar |
| [Herramienta 4 — ej. herramienta de traducción aprobada] | Traducción de documentos | Interno | No para documentos legalmente vinculantes sin revisión jurídica |
3.2 Solicitud de nuevas herramientas
Los empleados deben enviar el Formulario de Solicitud de Herramienta de IA (Anexo B) a [TI/Responsable de IA] antes de su uso. El proceso de aprobación tarda un máximo de [15] días hábiles. Utilizar una herramienta sin aprobación previa constituye un incumplimiento.
3.3 Criterios de aprobación
- Ubicación del tratamiento de datos y conformidad con GDPR
- Certificaciones de seguridad (ISO 27001, SOC 2 Type II, etc.)
- Condiciones de servicio y prácticas de retención/entrenamiento de datos
- Clasificación de riesgo según el EU AI Act
- Justificación de negocio y alternativas disponibles
4. Reglas por Herramienta
4.1 ChatGPT y productos OpenAI
Usos permitidos: redacción de borradores internos (con revisión), brainstorming, resumen de información pública o interna no confidencial, aprendizaje.
Usos prohibidos: introducir datos de clientes, datos personales de empleados, código fuente propietario, contratos o documentos financieros. Prohibido usar la versión gratuita para el trabajo.
Manejo de datos: confirmar “No entrenar con mis datos” activo; verificar siempre la exactitud del contenido generado.
Revisión: todo contenido externo debe ser aprobado por [Jefe de Departamento / revisor designado].
4.2 GitHub Copilot y asistentes de código
Usos permitidos: sugerencias de código, documentación, detección de bugs, tests unitarios.
Usos prohibidos: credenciales, claves API o secretos en prompts; aceptar sugerencias sin revisar en funciones críticas de seguridad.
Seguridad: añadir los archivos que contengan secretos a .gitignore; revisar todo código generado por IA antes de hacer commit; ejecutar análisis de seguridad.
Revisión: todo código generado por IA para producción debe pasar por el proceso estándar de code review, indicando la asistencia de IA en el pull request.
4.3 Generadores de imágenes
Usos permitidos: visuales de marketing, contenido provisional interno, exploración de conceptos de diseño.
Usos prohibidos: imágenes de personas reales sin consentimiento, contenido engañoso o difamatorio, deepfakes, violencia o desnudos.
Propiedad intelectual: verificar términos comerciales de la herramienta; no replicar estilos de artistas que se hayan opuesto; conservar registro de prompts para publicaciones oficiales.
Revisión: aprobación de [Responsable de Marketing / Marca] antes de publicar externamente.
5. Usos Prohibidos — Todas las Herramientas
5.1 Prohibiciones absolutas
| N.º | Actividad prohibida | Motivo |
|---|---|---|
| P1 | Tratar datos de categorías especiales (salud, religión, opiniones políticas, biometría, orientación sexual) | GDPR Art. 9 |
| P2 | Usar IA para tomar o influir sustancialmente en decisiones con efectos legales o personales significativos sin supervisión humana | EU AI Act Art. 14; GDPR Art. 22 |
| P3 | Generar contenido intencionalmente engañoso o manipulador | EU AI Act Art. 5 |
| P4 | Manipulación subliminal, social scoring o vigilancia biométrica en tiempo real | EU AI Act Art. 5 |
| P5 | Producir contenido que infrinja derechos de propiedad intelectual | Legislación de PI |
| P6 | Producir contenido difamatorio, discriminatorio o acosador | Legislación laboral; responsabilidad civil |
| P7 | Presentar contenido generado por IA como creado por humanos cuando la distinción sea relevante | EU AI Act Art. 50 |
| P8 | Eludir los procesos de aprobación y supervisión de IA de la organización | Incumplimiento de política |
| P9 | Usar cuentas personales de IA para tratar datos de la organización | Seguridad de datos |
| P10 | Usar IA para exfiltrar o transferir indebidamente datos organizativos | Protección de datos; posible responsabilidad penal |
5.2 Usos restringidos (requieren aprobación previa)
| N.º | Actividad | Aprobación requerida de |
|---|---|---|
| R1 | Usar IA en sistemas que interactúen directamente con clientes o el público | Responsable de IA + Jurídico |
| R2 | Analizar comportamiento, rendimiento o comunicaciones de empleados | Director de RRHH + DPO + Jurídico |
| R3 | Usar IA para asesoramiento médico, legal o financiero a terceros | Profesional correspondiente + Jurídico |
| R4 | Tratar datos personales de clientes con IA | DPO + Responsable de IA |
| R5 | Integrar IA en flujos de toma de decisiones automatizada | Responsable de IA + Jurídico + DPO |
6. Requisitos de Manejo de Datos
6.1 Clasificación antes de su uso
| Clasificación | Definición | Ejemplos | ¿Permitido en IA? |
|---|---|---|---|
| Público | Información ya pública | Informes publicados, notas de prensa | Sí, cualquier herramienta aprobada |
| Interno | No público pero no confidencial | Memorandos internos, actas de reuniones | Sí, solo herramientas aprobadas |
| Confidencial | Sensible para el negocio | Contratos de clientes, previsiones financieras, datos de RRHH | Solo si la herramienta está aprobada para este nivel |
| Restringido | Máxima sensibilidad | Datos personales, privilegio legal, información de M&A | No introducir sin aprobación explícita de DPO/Jurídico |
6.2 Reglas sobre datos personales
- No introducir datos personales salvo que la herramienta esté aprobada para ello y exista un DPA con el proveedor
- Incluye: nombres, emails, teléfonos, IPs, ubicaciones, títulos de puesto combinados con nombres
- Anonimizar o pseudonimizar siempre que sea posible
- En caso de duda, tratar como datos personales y consultar al DPO
6.3 Principio de minimización
Proporcionar solo la información mínima necesaria. No pegar documentos completos cuando baste un resumen o extracto.
6.4 Datos de salida
Los resultados generados por IA que contengan o deriven de datos personales deben manejarse con el mismo cuidado que los datos de entrada.
7. Requisitos de Revisión de Contenido
7.1 Niveles de revisión
| Nivel | Tipo de contenido | Estándar mínimo de revisión |
|---|---|---|
| Nivel 1 — Interno bajo riesgo | Emails internos, notas, comunicaciones no vinculantes | Auto-revisión del empleado |
| Nivel 2 — Interno alto riesgo | Informes al consejo, comunicaciones de RRHH, análisis financiero | Revisión del responsable directo o jefe de departamento |
| Nivel 3 — Externo estándar | Documentos para clientes, contenido web, materiales de marketing | Jefe de departamento + aprobación de Comunicación/Marketing |
| Nivel 4 — Externo alto impacto | Documentos legales, presentaciones regulatorias, informes financieros | Jurídico/Compliance + aprobación de la dirección |
7.2 Obligación de verificación factual
El empleado es responsable de verificar la exactitud de todo contenido generado por IA. Lista de comprobación:
- Estadísticas trazadas a una fuente original verificable
- Citas legales verificadas contra legislación vigente
- Personas u organizaciones mencionadas confirmadas
- Fechas y plazos confirmados
- Contenido verificado por coherencia interna
7.3 Obligaciones de transparencia
| Situación | Divulgación requerida |
|---|---|
| Contenido generado por IA enviado a un regulador o tribunal | Informar al asesor jurídico para valorar la divulgación |
| Contenido publicado en la web sustancialmente generado por IA | Incluir aviso de asistencia de IA |
| Asesoramiento generado por IA proporcionado a un cliente | Divulgar asistencia de IA y confirmar revisión profesional humana |
| Respuesta a licitación/concurso público | Verificar si se exige declaración de uso de IA |
8. Obligaciones de Notificación
8.1 Qué debe notificarse
| Evento | Notificar a | Plazo |
|---|---|---|
| Descubrimiento de uso de herramienta no aprobada | Responsable directo o Responsable de IA | 5 días hábiles |
| Introducción accidental de datos personales/confidenciales en una herramienta de IA | DPO + Responsable directo | 24 horas |
| Sospecha de violación de datos vía herramienta de IA | DPO + Seguridad TI | Inmediatamente (1 hora) |
| Contenido discriminatorio, dañino o ilegal generado por IA | Responsable de IA + Jurídico | 24 horas |
| Uso de IA para tomar decisiones significativas sin revisión humana | Director de RRHH + Jurídico + DPO | 24 horas |
| Cualquier incidente donde el uso de IA haya causado o pueda causar daño a terceros | Jurídico + Responsable de IA | Inmediatamente |
8.2 Cómo notificar
- Principal: [email — ej. ai-incidents@organisation.com]
- Secundario: Comunicación directa al [Responsable de IA]
- Urgente: [Teléfono / línea de Seguridad TI]
8.3 Protección del denunciante
Los empleados que notifiquen de buena fe están protegidos conforme a la política de denuncias interna y la Directiva (UE) 2019/1937. Las represalias constituyen infracción disciplinaria.
9. Responsabilidades
| Rol | Responsabilidades |
|---|---|
| Todos los empleados | Cumplir esta Política; completar la formación obligatoria en alfabetización de IA; notificar incidentes |
| Responsables directos | Asegurar el cumplimiento del equipo; revisar contenido de Nivel 2; fomentar la cultura de notificación |
| Responsable de IA | Mantener el registro de herramientas; gestionar solicitudes; supervisar la respuesta a incidentes; informar al consejo |
| DPO | Asesorar sobre tratamiento de datos personales; revisar EIPD para herramientas de IA; gestionar incidentes de protección de datos |
| TI/Seguridad | Implementar controles técnicos; gestionar accesos; monitorizar shadow AI; realizar evaluaciones de seguridad |
| Jurídico | Revisar contenido de alto impacto; asesorar sobre cumplimiento; gestionar divulgaciones externas |
| RRHH | Gestionar comunicaciones de la política; manejar aspectos de relaciones laborales; supervisar la formación |
10. Requisitos de Formación
| Rol | Formación requerida | Frecuencia |
|---|---|---|
| Todos los empleados | Concienciación sobre alfabetización en IA (mínimo 1 hora) | Al incorporarse + actualización anual |
| Empleados que usen herramientas aprobadas regularmente | Formación específica de la herramienta por rol | Al adoptarla + cuando se actualice sustancialmente |
| Directivos | Gobernanza de IA y responsabilidades de supervisión | Anual |
| Responsable de IA, DPO, Jurídico, CISO | Formación avanzada de conformidad con el EU AI Act | Anual + cuando se actualice la regulación |
El cumplimiento se registra en [Sistema de RRHH]. No completar la formación obligatoria en [30 días] constituye infracción disciplinaria.
11. Consecuencias del Incumplimiento
Los incumplimientos serán investigados y podrán resultar en medidas disciplinarias, incluido el despido, conforme al procedimiento disciplinario. Los incumplimientos graves pueden además conllevar:
- Notificación a las autoridades de supervisión (p. ej., la AEPD)
- Responsabilidad civil o penal para el individuo
- Sanciones regulatorias para la organización
12. Revisión de la Política
Esta Política se revisará:
- Al menos anualmente
- Tras cualquier cambio significativo en la legislación aplicable
- Tras un incidente material relacionado con la IA
- Cuando se aprueben nuevas herramientas que requieran cambios sustanciales en la política
Anexo A — Registro de Herramientas de IA Aprobadas
Mantenido por [Responsable de IA/Departamento de TI] — actualizado trimestralmente
| Herramienta | Proveedor | Versión/Nivel | Usos aprobados | Clases de datos permitidas | Fecha de aprobación | Próxima revisión |
|---|---|---|---|---|---|---|
Anexo B — Formulario de Solicitud de Herramienta de IA
Nombre del solicitante:
Departamento:
Fecha:
Nombre de la herramienta:
Proveedor:
URL / Método de acceso:
Caso de uso propuesto:
Clasificación de los datos a tratar:
Justificación de negocio:
¿Ha revisado la política de privacidad y condiciones de uso? Sí / No
¿El proveedor ofrece un DPA? Sí / No / Desconocido
Urgencia:
Enviar a: [email] | Respuesta en: [15] días hábiles
Plantilla proporcionada por VORLUX AI | vorluxai.com | Este documento es solo orientativo, no constituye asesoramiento jurídico. | Versión 1.0 | Para uso de conformidad con el EU AI Act | Última actualización: 2026-04-05