
Acceptable AI Use Policy for Employees

A comprehensive policy governing employee use of AI tools, covering approved applications, prohibited uses, data handling, content review requirements, and reporting obligations — aligned with EU AI Act requirements.

Bilingual / Bilingüe

This template includes both English and Spanish versions. Scroll down to find "Versión Española".

Disclaimer: This is guidance only, not legal advice. Consult qualified legal counsel before adopting this policy. | Este documento es solo orientativo, no constituye asesoramiento jurídico.


Acceptable AI Use Policy for Employees

Document Reference: [ORG-POL-AI-001]
Version: 1.0
Effective Date: [DATE]
Review Date: [DATE + 12 months]
Owner: [AI Officer / CISO / Legal — delete as appropriate]
Approved by: [Name, Title]
Classification: Internal


1. Purpose and Scope

1.1 Purpose

This Acceptable AI Use Policy (“Policy”) establishes the rules and standards governing employee use of artificial intelligence (AI) tools within [ORGANISATION NAME] (“the Organisation”). It is designed to:

  • Ensure AI tools are used responsibly, ethically, and in compliance with applicable law
  • Protect the Organisation’s confidential information and data
  • Satisfy obligations under the EU AI Act (Regulation (EU) 2024/1689), GDPR, and applicable Spanish/EU employment law
  • Prevent reputational, legal, and operational harm
  • Foster a culture of responsible AI use

1.2 Scope

This Policy applies to:

| Category | Covered? |
| --- | --- |
| All permanent employees | Yes |
| Contractors and freelancers with system access | Yes |
| Temporary staff | Yes |
| Board members and executives | Yes |
| Third-party service providers using Org systems | Yes |
| Personal devices used for work (BYOD) | Yes — when accessing work systems or handling work data |

This Policy covers all AI tools, whether accessed via the Organisation’s systems, personal devices, or third-party platforms, when used in connection with work activities.


2. Definitions

| Term | Definition |
| --- | --- |
| AI Tool | Any software application using machine learning, large language models, generative AI, or similar technologies, including chatbots, code assistants, image generators, translation tools, and analytics platforms |
| Approved AI Tool | An AI tool that has been formally assessed, approved, and listed in Annex A of this Policy |
| Shadow AI | Any AI tool used for work purposes without organisational approval |
| Confidential Information | Trade secrets, client data, employee data, financial data, strategic plans, and any data classified as Internal, Confidential, or Restricted |
| Personal Data | Any information relating to an identified or identifiable natural person, as defined under GDPR Art. 4(1) |
| AI-Generated Content | Any text, image, audio, video, code, or other output produced by an AI tool |
| Human Review | Review and verification of AI-generated content by a qualified employee before use |

3. Approved AI Tools

3.1 Current Approved Tool Register

Only tools listed in the Approved AI Tool Register (Annex A) may be used for work purposes. The register is maintained by [AI Officer/IT Department] and reviewed quarterly.

Current approved tools (as of effective date):

| Tool | Use Cases | Data Classification Permitted | Conditions |
| --- | --- | --- | --- |
| [Tool 1 — e.g., Microsoft Copilot (M365)] | Drafting, summarisation, analysis | Internal, Confidential | Must use enterprise tenant; no client PII |
| [Tool 2 — e.g., GitHub Copilot] | Code assistance, code review | Internal | No proprietary algorithms; no credentials in prompts |
| [Tool 3 — e.g., approved image generator] | Marketing visuals, presentations | Public, Internal | Human review required before publication |
| [Tool 4 — e.g., approved translation tool] | Document translation | Internal | Not for legally binding documents without legal review |

Specific rules for individual tools are set out in Section 4 of this Policy.

3.2 Requesting Approval for New Tools

Employees wishing to use an AI tool not listed in Annex A must submit an AI Tool Request Form (Annex B) to [IT/AI Officer] before use. The approval process takes a maximum of [15] business days. Using a tool prior to approval is a breach of this Policy.

3.3 Approval Criteria

Tools are assessed against:

  • Data processing location and GDPR compliance
  • Security certifications (ISO 27001, SOC 2 Type II, etc.)
  • Terms of service and data retention/training practices
  • Risk classification under the EU AI Act
  • Business justification and available alternatives

4. Tool-Specific Rules

4.1 ChatGPT and OpenAI Products

Approved version(s): [e.g., ChatGPT Enterprise only / ChatGPT Team / Not approved — delete as applicable]

Permitted uses:

  • Drafting initial text for internal documents (must be reviewed and edited)
  • Brainstorming and idea generation
  • Summarising public or internal non-confidential information
  • Learning and skills development

Prohibited uses:

  • Entering client names, contact details, or any client-specific information
  • Uploading or pasting contracts, financial statements, or legal documents
  • Entering employee personal data (names, salaries, performance data, health information)
  • Entering proprietary source code, trade secrets, or competitive intelligence
  • Using the free consumer version (ChatGPT.com without an enterprise agreement) for any work purpose

Data handling rules:

  • If using an enterprise/API version: confirm “Do not train on my data” is enabled in account settings before use
  • Never use the consumer (free) version for work-related tasks
  • Treat all ChatGPT outputs as drafts requiring verification — do not quote statistics, legal citations, or facts without independent verification

Content review requirement: All ChatGPT-generated content used externally (client-facing, published, or submitted to regulators) must be reviewed and approved by [Department Head / designated reviewer] before use.


4.2 GitHub Copilot and AI Code Assistants

Approved version(s): [e.g., GitHub Copilot Business (enterprise-configured)]

Permitted uses:

  • Generating code suggestions and autocomplete
  • Code explanation and documentation
  • Identifying potential bugs in code under review
  • Writing unit tests

Prohibited uses:

  • Entering authentication credentials, API keys, passwords, or secrets in any prompt or file that will be shared with the AI
  • Using Copilot with repositories containing client data or personal data unless the repository is configured to exclude AI training
  • Accepting code suggestions without review for security-critical functions
  • Using Copilot to generate code that circumvents security controls

Security requirements:

  • Ensure all files containing secrets are excluded via .gitignore before using Copilot
  • Review all AI-generated code for security vulnerabilities before committing
  • Do not accept Copilot suggestions in files containing environment variables or configuration secrets
  • Run security scanning (e.g., [tool name]) on all code containing significant AI-generated portions
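The credential checks above can be automated before code is committed. The following is a minimal, illustrative sketch of a pre-commit secret check; the regex patterns and function names are assumptions for illustration only, and a real deployment should rely on a dedicated scanner with a far larger, maintained rule set.

```python
import re

# Illustrative patterns only — placeholders, not a complete or vetted rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def find_secret_hits(text: str) -> list[str]:
    """Return matched fragments so a reviewer can inspect them before commit."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

A check like this would typically run as a pre-commit hook on staged files; any non-empty result blocks the commit until the value is moved to a secrets manager.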

Content review requirement: All AI-generated code included in production systems must be reviewed by a qualified developer and included in the standard code review process. The AI assistance should be noted in the pull request description.


4.3 AI Image Generators

Approved tool(s): [e.g., Adobe Firefly (enterprise), Canva AI (enterprise)]

Permitted uses:

  • Creating marketing and presentation visuals
  • Generating placeholder content for internal use
  • Exploring design concepts

Prohibited uses:

  • Generating images of real, identifiable individuals without their written consent
  • Generating images that could be defamatory, misleading, or in breach of third-party intellectual property rights
  • Creating “deepfake” or synthetic media that misrepresents real events or people
  • Generating images for any official regulatory submission, legal document, or financial report
  • Creating images depicting violence, nudity, or other harmful content

Intellectual property rules:

  • Images generated using the Organisation’s approved enterprise tools are generally permitted for commercial use — check the specific tool’s commercial use terms before publication
  • Do not use images that closely replicate a specific artist’s style if the artist has objected to their work being used for AI training
  • Retain records of prompts used for images in official publications

Content review requirement: All AI-generated images used in external communications, marketing materials, or official documents must be reviewed by [Marketing Manager / Brand Owner] and confirmed to meet brand standards and legal requirements.


5. Prohibited Uses — All AI Tools

The following uses are prohibited regardless of which AI tool is used:

5.1 Absolute Prohibitions

| # | Prohibited Activity | Reason |
| --- | --- | --- |
| P1 | Processing special category personal data (health, religion, political views, biometrics, sexual orientation) through any AI tool | GDPR Art. 9; high risk of harm |
| P2 | Using AI to make or substantially influence decisions with significant legal or personal effects on individuals (hiring, dismissal, credit, benefits) without human oversight | EU AI Act Art. 14; GDPR Art. 22 |
| P3 | Using AI to generate content that is intentionally deceptive or designed to manipulate individuals against their interests | EU AI Act Art. 5 (prohibited practices) |
| P4 | Using AI for any form of subliminal manipulation, social scoring, or real-time biometric surveillance | EU AI Act Art. 5 (prohibited practices) |
| P5 | Using AI to produce content that infringes copyright, trade marks, or other intellectual property rights | IP law |
| P6 | Using AI to produce content that is defamatory, discriminatory, or harassing | Employment law; civil liability |
| P7 | Misrepresenting AI-generated content as human-created in contexts where this distinction is material | EU AI Act Art. 50; professional standards |
| P8 | Bypassing or attempting to bypass the organisation’s AI approval and oversight processes | Policy breach |
| P9 | Using personal AI tool accounts (subscriptions paid by the employee) to process organisational data | Data security |
| P10 | Using AI tools to exfiltrate, leak, or improperly transfer organisational data | Data protection; potential criminal liability |

5.2 Restricted Uses (Require Prior Approval)

| # | Activity | Approval Required From |
| --- | --- | --- |
| R1 | Using AI in any system that interfaces directly with customers or the public | AI Officer + Legal |
| R2 | Using AI to analyse employee behaviour, performance, or communications | HR Director + DPO + Legal |
| R3 | Using AI for any medical, legal, or financial advice provided to third parties | Relevant professional + Legal |
| R4 | Using AI to process client personal data | DPO + AI Officer |
| R5 | Integrating AI into automated decision-making workflows | AI Officer + Legal + DPO |

6. Data Handling Requirements

6.1 Data Classification Before Use

Before using any AI tool, employees must classify the data they intend to input:

| Classification | Definition | Examples | Permitted in AI? |
| --- | --- | --- | --- |
| Public | Information already in the public domain | Published reports, press releases | Yes — any approved tool |
| Internal | Not public but not confidential | Internal memos, meeting notes | Yes — approved tools only |
| Confidential | Business-sensitive; restricted circulation | Client contracts, financial forecasts, HR data | Only if tool is specifically approved for this level |
| Restricted | Highest sensitivity; strict need-to-know | Personal data, legal privilege, M&A information | Do not enter into any AI tool without explicit approval from DPO/Legal |
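The Section 6.1 classification ladder lends itself to a simple programmatic gate in internal tooling. The sketch below is hypothetical: the tool names, their permitted ceilings, and the function name are placeholders mirroring the rules above, not a statement about any vendor or the Organisation's actual register.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Ordered to match the classification ladder in Section 6.1."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3

# Highest classification each approved tool may receive (placeholder
# entries standing in for the Annex A register).
TOOL_CEILING = {
    "enterprise_copilot": DataClass.CONFIDENTIAL,
    "code_assistant": DataClass.INTERNAL,
    "image_generator": DataClass.INTERNAL,
}

def may_submit(tool: str, data_class: DataClass) -> bool:
    """Restricted data is never permitted without explicit DPO/Legal approval,
    and unknown tools are rejected (shadow AI)."""
    if data_class is DataClass.RESTRICTED:
        return False
    ceiling = TOOL_CEILING.get(tool)
    return ceiling is not None and data_class <= ceiling
```

Encoding the ladder as an `IntEnum` makes the "up to this level" comparison a single `<=`, and defaulting unknown tools to rejection mirrors the Policy's approved-tools-only rule.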

6.2 Personal Data Rules

  • Do not enter personal data into any AI tool unless the tool has been approved for personal data processing and a Data Processing Agreement (DPA) is in place with the tool provider
  • This includes: names, email addresses, phone numbers, IP addresses, location data, job titles combined with names, and any information that could identify a specific individual
  • Anonymise or pseudonymise data before AI processing wherever possible
  • If you are unsure whether data is personal data, treat it as personal data and seek guidance from the DPO
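One common way to pseudonymise identifiers before AI processing, as the rules above require, is keyed hashing. The sketch below is a minimal illustration under stated assumptions: the key value, field names, and token format are placeholders, and the result is pseudonymisation (re-identifiable by whoever holds the key, which must be stored separately), not anonymisation.

```python
import hmac
import hashlib

# Placeholder key for illustration — a real key belongs in a secrets
# manager, never in source code.
PSEUDONYM_KEY = b"store-me-in-a-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Deterministic token: the same person always maps to the same
    pseudonym, but the original value cannot be recovered without the key."""
    digest = hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return "subj_" + digest.hexdigest()[:16]

# Replace the direct identifier before the record goes anywhere near an AI tool.
record = {"name": "María López", "note": "requested schedule change"}
safe_record = {**record, "name": pseudonymise(record["name"])}
```

Determinism is useful when the same individual must be tracked across prompts; if linkage is not needed, a random per-use token leaks even less.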

6.3 Minimisation Principle

Only provide AI tools with the minimum information necessary to complete the task. Do not paste entire documents when a summary or excerpt would suffice.

6.4 Output Data

AI-generated outputs that contain or could be derived from personal data must be handled with the same care as the input data. Do not share AI outputs more widely than the input data would have been shared.


7. Content Review Requirements

7.1 Review Tiers

All AI-generated content must be reviewed before use according to the following tiers:

| Tier | Content Type | Minimum Review Standard |
| --- | --- | --- |
| Tier 1 — Internal Low Risk | Internal emails, notes, non-binding communications | Employee self-review for accuracy, tone, and appropriateness |
| Tier 2 — Internal High Risk | Board reports, HR communications, financial analysis | Line manager or department head review |
| Tier 3 — External Standard | Client-facing documents, website content, marketing materials | Department head + Comms/Marketing sign-off |
| Tier 4 — External High Stakes | Legal documents, regulatory submissions, press releases, financial reports | Legal / Compliance + relevant C-suite sign-off |

7.2 Factual Verification Obligation

Employees are responsible for verifying the factual accuracy of all AI-generated content before use. AI tools frequently produce:

  • Incorrect statistics and figures
  • Fabricated citations and references (“hallucinations”)
  • Outdated information
  • Plausible-sounding but incorrect legal or technical assertions

Verification checklist for AI-generated factual claims:

  • All statistics traced to an original, verifiable source
  • All legal citations checked against current legislation
  • All named individuals or organisations confirmed to exist and details verified
  • All dates and timelines confirmed as accurate
  • Content checked for internal consistency

7.3 Transparency Obligations

| Situation | Required Disclosure |
| --- | --- |
| Submitting AI-generated content to a regulator or court | Disclose AI assistance to legal counsel for advice on disclosure requirements |
| Publishing AI-generated content on the website | Include AI-assisted disclaimer if the content is substantially AI-generated |
| Providing AI-generated advice to a client | Disclose AI assistance and confirm human professional review |
| Responding to a procurement/tender | Check tender instructions — many public bodies require declaration of AI use |

8. Reporting Obligations

8.1 What Must Be Reported

Employees must report the following to [AI Officer / IT Security / DPO — as applicable]:

| Event | Report To | Timeframe |
| --- | --- | --- |
| Discovery of a colleague using an unapproved AI tool for work | Line Manager or AI Officer | Within 5 business days |
| Accidental input of personal data or confidential information into an AI tool | DPO + Line Manager | Within 24 hours |
| Suspected data breach via an AI tool | DPO + IT Security | Immediately (within 1 hour) |
| AI tool producing discriminatory, harmful, or illegal content | AI Officer + Legal | Within 24 hours |
| Discovering an AI tool has been used to make a significant decision affecting an individual without human review | HR Director + Legal + DPO | Within 24 hours |
| Any incident where AI use has caused or may cause harm to a third party | Legal + AI Officer | Immediately |

8.2 How to Report

Reports should be made via:

  • Primary: [email address — e.g., ai-incidents@organisation.com]
  • Secondary: Direct communication to [AI Officer name/role]
  • Urgent incidents: [Phone number / IT Security hotline]

8.3 Whistleblower Protection

Employees who report AI-related concerns in good faith are protected under the Organisation’s whistleblower policy and applicable EU law (Directive (EU) 2019/1937). Retaliation against reporters is a disciplinary offence.


9. Responsibilities

| Role | Responsibilities |
| --- | --- |
| All Employees | Comply with this Policy; complete mandatory AI literacy training; report incidents |
| Line Managers | Ensure team compliance; review Tier 2 content; support reporting culture |
| AI Officer | Maintain approved tool register; manage approval requests; oversee incident response; report to board |
| DPO | Advise on personal data processing; review DPIAs for AI tools; handle data protection incidents |
| IT/Security | Implement technical controls; manage tool access; monitor for shadow AI; conduct security assessments |
| Legal | Review high-stakes content; advise on compliance; manage external disclosures |
| HR | Manage policy communications; handle employee relations aspects; oversee training |

10. Training Requirements

| Role | Training Required | Frequency |
| --- | --- | --- |
| All employees | AI Literacy Awareness (minimum 1 hour) | At onboarding + annual refresh |
| Employees regularly using approved AI tools | Role-specific AI tool training | At tool adoption + when tool updates materially |
| Managers | AI governance and oversight responsibilities | Annual |
| AI Officer, DPO, Legal, CISO | Advanced EU AI Act compliance training | Annual + when regulation updates |

Training completion is recorded in [HR System name]. Failure to complete mandatory training within [30 days] of due date is a disciplinary matter.


11. Consequences of Breach

Breaches of this Policy will be investigated and may result in disciplinary action up to and including dismissal, in accordance with the Organisation’s disciplinary procedure. Serious breaches (e.g., data breaches involving personal data, use of prohibited AI practices) may also result in:

  • Notification to supervisory authorities (AEPD / EDPS)
  • Civil or criminal liability for the individual
  • Regulatory fines for the Organisation

12. Policy Review

This Policy will be reviewed:

  • At least annually
  • Following any significant change in applicable law or regulation
  • Following a material AI-related incident
  • When new AI tools are approved that require material policy changes

Annex A — Approved AI Tool Register

To be maintained by [AI Officer/IT Department] — updated quarterly

| Tool Name | Provider | Version/Tier | Approved Uses | Data Classes Permitted | Date Approved | Next Review |
| --- | --- | --- | --- | --- | --- | --- |

Annex B — AI Tool Request Form

Requestor Name:
Department:
Date:
Tool Name:
Provider:
URL/Access Method:
Proposed Use Case:
Data Classification of Data to be Processed:
Business Justification:
Have you reviewed the tool’s privacy policy and terms of service? Yes / No
Does the provider offer a Data Processing Agreement? Yes / No / Unknown
Urgency:

Submit to: [email] | Response within: [15] business days


Annex C — Política de Uso Aceptable de IA (Resumen en Español)

Versión resumida en español para empleados

Herramientas aprobadas: Solo se pueden utilizar las herramientas de IA que figuran en el Registro de Herramientas Aprobadas (Anexo A). Para solicitar la aprobación de una nueva herramienta, completa el Formulario de Solicitud (Anexo B).

Reglas clave para ChatGPT:

  • No introduzcas datos de clientes, datos personales ni información confidencial
  • Usa únicamente la versión empresarial (no la versión gratuita en ChatGPT.com) para trabajo
  • Verifica siempre la exactitud del contenido generado antes de usarlo

Reglas clave para GitHub Copilot:

  • No incluyas credenciales, claves API ni contraseñas en los archivos que usa Copilot
  • Revisa todo el código generado por IA antes de incluirlo en producción

Reglas clave para generadores de imágenes:

  • No generes imágenes de personas reales identificables sin su consentimiento
  • No generes contenido engañoso, difamatorio ni que infrinja derechos de propiedad intelectual

Usos absolutamente prohibidos:

  • Procesar datos de salud, religiosos, políticos, biométricos o de orientación sexual mediante IA
  • Usar IA para tomar decisiones que afecten significativamente a personas sin supervisión humana
  • Usar IA para manipular o engañar a individuos
  • Utilizar herramientas de IA no aprobadas para trabajo

Cómo notificar un incidente: Envía un correo a [ai-incidents@organisation.com] o contacta directamente al Responsable de IA.

Los empleados que incumplan esta política pueden estar sujetos a medidas disciplinarias.


Template provided by VORLUX AI | vorluxai.com
This is guidance only, not legal advice.
Version 1.0 | For EU AI Act compliance use | Last updated: 2026-04-05


Versión Española

Política de Uso Aceptable de IA para Empleados

Referencia: [ORG-POL-AI-001]
Versión: 1.0
Fecha de vigencia: [FECHA]
Fecha de revisión: [FECHA + 12 meses]
Responsable: [Responsable de IA / CISO / Jurídico]
Aprobado por: [Nombre, Cargo]
Clasificación: Interno


1. Objeto y Ámbito

1.1 Objeto

Esta Política establece las normas que regulan el uso de herramientas de inteligencia artificial (IA) por parte de los empleados de [NOMBRE DE LA ORGANIZACIÓN]. Su finalidad es:

  • Garantizar un uso responsable, ético y conforme a la legislación vigente
  • Proteger la información confidencial y los datos de la organización
  • Cumplir con el EU AI Act (Reglamento (UE) 2024/1689), el GDPR y la legislación laboral española/europea aplicable
  • Prevenir daños reputacionales, legales y operativos
  • Fomentar una cultura de uso responsable de la IA

1.2 Ámbito de aplicación

| Categoría | ¿Incluido? |
| --- | --- |
| Empleados permanentes | Sí |
| Contratistas y freelancers con acceso a sistemas | Sí |
| Personal temporal | Sí |
| Miembros del consejo y directivos | Sí |
| Proveedores externos que usen sistemas de la organización | Sí |
| Dispositivos personales utilizados para el trabajo (BYOD) | Sí, al acceder a sistemas o datos corporativos |

Cubre todas las herramientas de IA utilizadas en conexión con actividades laborales, independientemente del dispositivo o plataforma.


2. Definiciones

| Término | Definición |
| --- | --- |
| Herramienta de IA | Cualquier software que utilice machine learning, LLMs, IA generativa o tecnologías similares (chatbots, asistentes de código, generadores de imágenes, traductores, plataformas de análisis) |
| Herramienta aprobada | Herramienta evaluada, aprobada y registrada en el Anexo A |
| Shadow AI | Cualquier herramienta de IA utilizada sin aprobación organizativa |
| Información confidencial | Secretos comerciales, datos de clientes, datos de empleados, datos financieros, planes estratégicos y cualquier dato clasificado como Interno, Confidencial o Restringido |
| Datos personales | Toda información relativa a una persona física identificada o identificable, según GDPR Art. 4(1) |
| Contenido generado por IA | Todo texto, imagen, audio, vídeo, código u otro resultado producido por una herramienta de IA |
| Revisión humana | Verificación del contenido generado por IA por un empleado cualificado antes de su uso |

3. Herramientas de IA Aprobadas

3.1 Registro actual

Solo pueden utilizarse las herramientas del Registro de Herramientas Aprobadas (Anexo A), mantenido por [Responsable de IA/Departamento de TI] y revisado trimestralmente.

| Herramienta | Casos de uso | Clasificación de datos permitida | Condiciones |
| --- | --- | --- | --- |
| [Herramienta 1 — ej. Microsoft Copilot (M365)] | Redacción, resumen, análisis | Interno, Confidencial | Usar tenant empresarial; sin PII de clientes |
| [Herramienta 2 — ej. GitHub Copilot] | Asistencia y revisión de código | Interno | Sin algoritmos propietarios; sin credenciales en prompts |
| [Herramienta 3 — ej. generador de imágenes aprobado] | Visuales de marketing, presentaciones | Público, Interno | Revisión humana antes de publicar |
| [Herramienta 4 — ej. herramienta de traducción aprobada] | Traducción de documentos | Interno | No para documentos legalmente vinculantes sin revisión jurídica |

3.2 Solicitud de nuevas herramientas

Los empleados deben enviar el Formulario de Solicitud de Herramienta de IA (Anexo B) a [TI/Responsable de IA] antes de su uso. El proceso de aprobación tarda un máximo de [15] días hábiles. Utilizar una herramienta sin aprobación previa constituye un incumplimiento.

3.3 Criterios de aprobación

  • Ubicación del tratamiento de datos y conformidad con el GDPR
  • Certificaciones de seguridad (ISO 27001, SOC 2 Type II, etc.)
  • Condiciones de servicio y prácticas de retención/entrenamiento de datos
  • Clasificación de riesgo según el EU AI Act
  • Justificación de negocio y alternativas disponibles

4. Reglas por Herramienta

4.1 ChatGPT y productos OpenAI

Usos permitidos: redacción de borradores internos (con revisión), brainstorming, resumen de información pública o interna no confidencial, aprendizaje.

Usos prohibidos: introducir datos de clientes, datos personales de empleados, código fuente propietario, contratos o documentos financieros. Prohibido usar la versión gratuita para el trabajo.

Manejo de datos: confirmar que “No entrenar con mis datos” está activado; verificar siempre la exactitud del contenido generado.

Revisión: todo contenido externo debe ser aprobado por [Jefe de Departamento / revisor designado].

4.2 GitHub Copilot y asistentes de código

Usos permitidos: sugerencias de código, documentación, detección de bugs, tests unitarios.

Usos prohibidos: credenciales, claves API o secretos en prompts; aceptar sugerencias sin revisar en funciones críticas de seguridad.

Seguridad: activar .gitignore para secretos; revisar todo código generado por IA antes de hacer commit; ejecutar análisis de seguridad.

Revisión: todo código generado por IA para producción debe pasar por el proceso estándar de code review, indicando la asistencia de IA en el pull request.

4.3 Generadores de imágenes

Usos permitidos: visuales de marketing, contenido provisional interno, exploración de conceptos de diseño.

Usos prohibidos: imágenes de personas reales sin consentimiento, contenido engañoso o difamatorio, deepfakes, violencia o desnudos.

Propiedad intelectual: verificar los términos comerciales de la herramienta; no replicar estilos de artistas que se hayan opuesto; conservar registro de los prompts para publicaciones oficiales.

Revisión: aprobación de [Responsable de Marketing / Marca] antes de publicar externamente.


5. Usos Prohibidos — Todas las Herramientas

5.1 Prohibiciones absolutas

| N.º | Actividad prohibida | Motivo |
| --- | --- | --- |
| P1 | Tratar datos de categorías especiales (salud, religión, opiniones políticas, biometría, orientación sexual) | GDPR Art. 9 |
| P2 | Usar IA para tomar o influir sustancialmente en decisiones con efectos legales o personales significativos sin supervisión humana | EU AI Act Art. 14; GDPR Art. 22 |
| P3 | Generar contenido intencionalmente engañoso o manipulador | EU AI Act Art. 5 |
| P4 | Manipulación subliminal, social scoring o vigilancia biométrica en tiempo real | EU AI Act Art. 5 |
| P5 | Producir contenido que infrinja derechos de propiedad intelectual | Legislación de PI |
| P6 | Producir contenido difamatorio, discriminatorio o acosador | Legislación laboral; responsabilidad civil |
| P7 | Presentar contenido generado por IA como creado por humanos cuando la distinción sea relevante | EU AI Act Art. 50 |
| P8 | Eludir los procesos de aprobación y supervisión de IA de la organización | Incumplimiento de política |
| P9 | Usar cuentas personales de IA para tratar datos de la organización | Seguridad de datos |
| P10 | Usar IA para exfiltrar o transferir indebidamente datos organizativos | Protección de datos; posible responsabilidad penal |

5.2 Usos restringidos (requieren aprobación previa)

| N.º | Actividad | Aprobación requerida de |
| --- | --- | --- |
| R1 | Usar IA en sistemas que interactúen directamente con clientes o el público | Responsable de IA + Jurídico |
| R2 | Analizar comportamiento, rendimiento o comunicaciones de empleados | Director de RRHH + DPO + Jurídico |
| R3 | Usar IA para asesoramiento médico, legal o financiero a terceros | Profesional correspondiente + Jurídico |
| R4 | Tratar datos personales de clientes con IA | DPO + Responsable de IA |
| R5 | Integrar IA en flujos de toma de decisiones automatizada | Responsable de IA + Jurídico + DPO |

6. Requisitos de Manejo de Datos

6.1 Clasificación antes del uso

| Clasificación | Definición | Ejemplos | ¿Permitido en IA? |
| --- | --- | --- | --- |
| Público | Información ya pública | Informes publicados, notas de prensa | Sí, cualquier herramienta aprobada |
| Interno | No público pero no confidencial | Memorandos internos, actas de reuniones | Sí, solo herramientas aprobadas |
| Confidencial | Sensible para el negocio | Contratos de clientes, previsiones financieras, datos de RRHH | Solo si la herramienta está aprobada para este nivel |
| Restringido | Máxima sensibilidad | Datos personales, privilegio legal, información de M&A | No introducir sin aprobación explícita de DPO/Jurídico |

6.2 Reglas sobre datos personales

  • No introducir datos personales salvo que la herramienta esté aprobada para ello y exista un DPA con el proveedor
  • Incluye: nombres, emails, teléfonos, IPs, ubicaciones, títulos de puesto combinados con nombres
  • Anonimizar o pseudonimizar siempre que sea posible
  • En caso de duda, tratar como datos personales y consultar al DPO

6.3 Principio de minimización

Proporcionar solo la información mínima necesaria. No pegar documentos completos cuando baste un resumen o extracto.

6.4 Datos de salida

Los resultados generados por IA que contengan o deriven de datos personales deben manejarse con el mismo cuidado que los datos de entrada.


7. Requisitos de Revisión de Contenido

7.1 Niveles de revisión

| Nivel | Tipo de contenido | Estándar mínimo de revisión |
| --- | --- | --- |
| Nivel 1 — Interno bajo riesgo | Emails internos, notas, comunicaciones no vinculantes | Autorrevisión del empleado |
| Nivel 2 — Interno alto riesgo | Informes al consejo, comunicaciones de RRHH, análisis financiero | Revisión del responsable directo o jefe de departamento |
| Nivel 3 — Externo estándar | Documentos para clientes, contenido web, materiales de marketing | Jefe de departamento + aprobación de Comunicación/Marketing |
| Nivel 4 — Externo alto impacto | Documentos legales, presentaciones regulatorias, informes financieros | Jurídico/Compliance + aprobación de la dirección |

7.2 Obligación de verificación factual

El empleado es responsable de verificar la exactitud de todo contenido generado por IA. Lista de comprobación:

  • Estadísticas trazadas a una fuente original verificable
  • Citas legales verificadas contra la legislación vigente
  • Personas u organizaciones mencionadas confirmadas
  • Fechas y plazos confirmados
  • Contenido verificado por coherencia interna

7.3 Obligaciones de transparencia

| Situación | Divulgación requerida |
| --- | --- |
| Contenido generado por IA enviado a un regulador o tribunal | Informar al asesor jurídico para valorar la divulgación |
| Contenido publicado en la web sustancialmente generado por IA | Incluir aviso de asistencia de IA |
| Asesoramiento generado por IA proporcionado a un cliente | Divulgar la asistencia de IA y confirmar la revisión profesional humana |
| Respuesta a licitación/concurso público | Verificar si se exige declaración de uso de IA |

8. Obligaciones de Notificación

8.1 Qué debe notificarse

| Evento | Notificar a | Plazo |
| --- | --- | --- |
| Descubrimiento de uso de herramienta no aprobada | Responsable directo o Responsable de IA | 5 días hábiles |
| Introducción accidental de datos personales/confidenciales en una herramienta de IA | DPO + Responsable directo | 24 horas |
| Sospecha de violación de datos vía herramienta de IA | DPO + Seguridad TI | Inmediatamente (1 hora) |
| Contenido discriminatorio, dañino o ilegal generado por IA | Responsable de IA + Jurídico | 24 horas |
| Uso de IA para tomar decisiones significativas sin revisión humana | Director de RRHH + Jurídico + DPO | 24 horas |
| Cualquier incidente donde el uso de IA haya causado o pueda causar daño a terceros | Jurídico + Responsable de IA | Inmediatamente |

8.2 Cómo notificar

  • Principal: [email — ej. ai-incidents@organisation.com]
  • Secundario: comunicación directa al [Responsable de IA]
  • Urgente: [Teléfono / línea de Seguridad TI]

8.3 Protección del denunciante

Los empleados que notifiquen de buena fe están protegidos conforme a la política de denuncias interna y la Directiva (UE) 2019/1937. Las represalias constituyen infracción disciplinaria.


9. Responsabilidades

| Rol | Responsabilidades |
| --- | --- |
| Todos los empleados | Cumplir esta Política; completar la formación obligatoria en alfabetización de IA; notificar incidentes |
| Responsables directos | Asegurar el cumplimiento del equipo; revisar contenido de Nivel 2; fomentar la cultura de notificación |
| Responsable de IA | Mantener el registro de herramientas; gestionar solicitudes; supervisar la respuesta a incidentes; informar al consejo |
| DPO | Asesorar sobre el tratamiento de datos personales; revisar EIPD para herramientas de IA; gestionar incidentes de protección de datos |
| TI/Seguridad | Implementar controles técnicos; gestionar accesos; monitorizar shadow AI; realizar evaluaciones de seguridad |
| Jurídico | Revisar contenido de alto impacto; asesorar sobre cumplimiento; gestionar divulgaciones externas |
| RRHH | Gestionar las comunicaciones de la política; manejar aspectos de relaciones laborales; supervisar la formación |

10. Requisitos de Formación

| Rol | Formación requerida | Frecuencia |
| --- | --- | --- |
| Todos los empleados | Concienciación sobre alfabetización en IA (mínimo 1 hora) | Al incorporarse + actualización anual |
| Empleados que usen herramientas aprobadas regularmente | Formación específica de la herramienta por rol | Al adoptarla + cuando se actualice sustancialmente |
| Directivos | Gobernanza de IA y responsabilidades de supervisión | Anual |
| Responsable de IA, DPO, Jurídico, CISO | Formación avanzada de conformidad con el EU AI Act | Anual + cuando se actualice la regulación |

El cumplimiento se registra en [Sistema de RRHH]. No completar la formación obligatoria en [30 días] constituye infracción disciplinaria.


11. Consecuencias del Incumplimiento

Los incumplimientos serán investigados y podrán resultar en medidas disciplinarias, incluido el despido, conforme al procedimiento disciplinario. Los incumplimientos graves pueden además conllevar:

  • Notificación a las autoridades de supervisión (AEPD / EDPS)
  • Responsabilidad civil o penal para el individuo
  • Sanciones regulatorias para la organización

12. Revisión de la Política

Esta Política se revisará:

  • Al menos anualmente
  • Tras cualquier cambio significativo en la legislación aplicable
  • Tras un incidente material relacionado con la IA
  • Cuando se aprueben nuevas herramientas que requieran cambios sustanciales en la política

Anexo A — Registro de Herramientas de IA Aprobadas

Mantenido por [Responsable de IA/Departamento de TI] — actualizado trimestralmente

| Herramienta | Proveedor | Versión/Nivel | Usos aprobados | Clases de datos permitidas | Fecha de aprobación | Próxima revisión |
| --- | --- | --- | --- | --- | --- | --- |

Anexo B — Formulario de Solicitud de Herramienta de IA

Nombre del solicitante:
Departamento:
Fecha:
Nombre de la herramienta:
Proveedor:
URL / Método de acceso:
Caso de uso propuesto:
Clasificación de los datos a tratar:
Justificación de negocio:
¿Ha revisado la política de privacidad y las condiciones de uso? Sí / No
¿El proveedor ofrece un DPA? Sí / No / Desconocido
Urgencia:

Enviar a: [email] | Respuesta en: [15] días hábiles


Plantilla proporcionada por VORLUX AI | vorluxai.com
Este documento es solo orientativo, no constituye asesoramiento jurídico.
Versión 1.0 | Para uso de conformidad con el EU AI Act | Última actualización: 2026-04-05

136 pages of free resources · 26 compliance templates · 22 certified devices