Last Updated on May 13, 2026 by Narendra Sahoo
1️⃣ What Is an EU AI Act Compliance Checklist?
An EU AI Act compliance checklist is a structured framework that helps organisations systematically identify, classify, and govern all AI systems within scope of Regulation (EU) 2024/1689. It covers AI system inventory, risk classification (unacceptable, high-risk, limited, and minimal), conformity assessment requirements, technical documentation (Annex IV), human oversight obligations, GPAI model obligations, and post-market monitoring. Any organisation deploying or providing AI systems affecting EU persons requires a documented EU AI Act compliance checklist to evidence readiness before enforcement.
This EU AI Act compliance checklist covers every obligation your organisation must evidence before the August 2026 enforcement deadline — from AI system inventory and risk classification to Annex IV documentation, human oversight verification, and post-market monitoring.
Lukas’s Frankfurt insurance group felt prepared with ethics boards and documented models until regulators demanded a full AI system inventory and classification evidence. The firm stalled. Undocumented HR tools, embedded SaaS assistants, and pilot engines remained unassessed. Most programs fail here: not in policy writing, but in the granular evidence required for every system affecting EU persons.
Violations of prohibited AI practices carry fines up to €35 million or 7% of global annual turnover — whichever is higher — and market surveillance authorities can order non-compliant systems withdrawn from the EU market entirely.
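The higher-of penalty rule can be sketched in a few lines of Python — an illustration of the arithmetic only, not legal advice; the thresholds are those stated above:

```python
def prohibited_practice_fine_cap(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a EUR 1bn-turnover group, 7% (EUR 70M) exceeds the EUR 35M floor.
print(prohibited_practice_fine_cap(1_000_000_000))  # 70000000.0
```

Note that for any turnover above €500 million, the percentage cap is the binding one — which is why large groups cannot treat the €35 million figure as the ceiling.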
A Deloitte survey of 500 managers found that only 35.7% feel adequately prepared for EU AI Act compliance, while just 26.2% have started concrete compliance activities. More than half of organisations have not established systematic inventories of the AI systems they operate — the minimum prerequisite for any compliance programme. You cannot classify what you cannot see.
2️⃣ EU AI Act Enforcement Timeline: Key Dates You Must Know
Understanding when each obligation becomes enforceable is the first step in any EU AI Act compliance checklist. Enforcement is phased — missing a date does not mean missing everything, but it does mean accumulating risk.
| Compliance Date | What Becomes Enforceable | Who Is Affected |
| --- | --- | --- |
| February 2025 (LIVE) | Prohibited AI practices banned (social scoring, subliminal manipulation, real-time biometric surveillance in public spaces) | All providers and deployers |
| August 2025 (LIVE) | GPAI model obligations: transparency, technical documentation, copyright compliance for systemic-risk models | GPAI / foundation model providers |
| August 2026 | High-risk AI system obligations: conformity assessment, Annex IV documentation, human oversight, CE marking, EU database registration | High-risk AI providers and deployers |
| August 2027 | High-risk AI in Annex I (regulated products) must comply | Medical device AI, safety-critical AI in machinery, etc. |
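The phased timeline above can be encoded as a simple lookup. A sketch only — the day-of-month values (the 2nd) follow the Regulation's application dates, but verify them against the text of Regulation (EU) 2024/1689 before relying on them:

```python
from datetime import date

# Application dates per the phased timeline above
# (verify against Regulation (EU) 2024/1689 before relying on them).
ENFORCEMENT_DATES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations":     date(2025, 8, 2),
    "high_risk_annex_iii":  date(2026, 8, 2),
    "high_risk_annex_i":    date(2027, 8, 2),
}

def obligations_live_on(today: date) -> list[str]:
    """Return the obligation tiers already enforceable on a given date."""
    return [tier for tier, d in ENFORCEMENT_DATES.items() if today >= d]

print(obligations_live_on(date(2026, 1, 1)))
# ['prohibited_practices', 'gpai_obligations']
```

Running such a check against each system's go-live date makes "which obligations apply today" an answerable question rather than a tribal-knowledge one.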
3️⃣ EU AI Act Compliance Checklist: Risk Classification Comes First
The first step in your EU AI Act compliance programme is correctly classifying each AI system. Classification determines market access, governance obligations, and potential penalties — and misclassifying a system as lower-risk than it is leaves mandatory obligations unmet and exposes the organisation to the higher penalty tier. For example, an HR screening tool may look ‘limited risk’ but qualifies as Annex III high-risk because it affects employment decisions.
| Risk Category | Example Systems | Key Obligations | Maximum Penalties | Applies From |
| --- | --- | --- | --- | --- |
| Unacceptable Risk | Social scoring, manipulative AI | Prohibited outright | €35M or 7% of global turnover | Feb 2025 |
| High-Risk AI | Recruitment, credit scoring | Conformity assessment, human oversight | €15M or 3% | Aug 2026 |
| GPAI / LLMs | Foundation models | Technical documentation, transparency | €15M or 3% | Aug 2025 |
| Limited Risk | Chatbots, deepfakes | User disclosure | €7.5M or 1% | Aug 2026 |
| Minimal Risk | Spam filters | Voluntary controls | Low | Ongoing |
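As a first-pass triage — emphatically not a legal determination; Annex III classification needs written legal rationale — the decision order in the table above can be sketched as follows. The domain names are our own shorthand, not statutory terms:

```python
# Illustrative triage sketch only. Real Annex III classification requires
# legal analysis; these domain labels are shorthand, not statutory terms.
ANNEX_III_DOMAINS = {
    "employment", "credit_scoring", "education", "essential_services",
    "law_enforcement", "biometrics", "critical_infrastructure",
}

def triage_risk(domain: str, is_prohibited_practice: bool = False,
                interacts_with_humans: bool = False) -> str:
    """Apply the tiers in order of severity: prohibition first, then Annex III."""
    if is_prohibited_practice:
        return "unacceptable"
    if domain in ANNEX_III_DOMAINS:
        return "high-risk"
    if interacts_with_humans:
        return "limited"   # transparency / disclosure duties
    return "minimal"

print(triage_risk("employment"))  # high-risk
```

The ordering matters: a system is checked against prohibitions before Annex III, mirroring how an HR screening tool lands in high-risk even if it superficially resembles a limited-risk chatbot.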
Which Sectors Are Most Exposed to High-Risk AI Classification?
The following sectors contain AI systems most commonly misclassified as lower risk than they are under Annex III:
- Healthcare: Clinical decision support, diagnostic AI, patient triage and prioritisation tools — all potentially Annex III high-risk
- HR and recruitment technology: CV screening, candidate scoring, employee performance monitoring, work allocation tools
- Financial services: Credit scoring, insurance risk assessment, fraud detection affecting individual EU persons
- Education technology: Student assessment tools, adaptive learning systems, examination proctoring AI
- Public sector: Any AI affecting access to benefits, services, or critical infrastructure decisions
4️⃣ Build Your AI System Inventory — You Cannot Classify What You Cannot See
More than half of organisations have not established systematic AI inventories — without one, risk classification and conformity assessment cannot even be scoped. Most organisations attempting an EU AI Act compliance programme start in the wrong place: policy drafting. The correct starting point is visibility.
HR departments procure AI recruitment tools independently. Legal teams adopt contract analysis platforms without security review. SaaS vendors silently introduce generative AI features that materially affect decision workflows — and nobody updates the inventory. An applied AI study of 106 enterprise AI systems found 18% were high-risk and 40% had unclear risk classification — primarily in employment, critical infrastructure, and law enforcement.
AI System Inventory Checklist
- ☐ Map all AI systems in use across every business unit — not just IT-owned systems
- ☐ Include third-party and vendor-embedded AI (SaaS co-pilots, analytics engines, scoring tools)
- ☐ Document each system: name, vendor, version, purpose, data inputs, outputs, and affected persons
- ☐ Identify whether the system affects EU persons regardless of where your organisation is based
- ☐ Record the AI system’s role in decision-making: advisory, augmented, or fully automated
- ☐ Assign an inventory owner responsible for maintaining and updating the record
- ☐ Schedule inventory review at least quarterly and whenever new AI tools are procured
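The checklist fields above map naturally onto a structured inventory record. A minimal sketch — the field names are illustrative, not mandated by the Act:

```python
from dataclasses import dataclass
from datetime import date

# Minimal inventory-record sketch mirroring the checklist above.
# Field names are illustrative, not mandated by the Act.
@dataclass
class AISystemRecord:
    name: str
    vendor: str
    version: str
    purpose: str
    data_inputs: list[str]
    outputs: list[str]
    affected_persons: str          # e.g. "EU job applicants"
    affects_eu_persons: bool
    decision_role: str             # "advisory" | "augmented" | "fully_automated"
    owner: str                     # the assigned inventory owner
    last_reviewed: date

    def review_overdue(self, today: date, max_days: int = 90) -> bool:
        """Quarterly review cadence, per the checklist above."""
        return (today - self.last_reviewed).days > max_days
```

Keeping the record machine-readable means the quarterly-review item becomes a query rather than a manual audit.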
5️⃣ What Are High-Risk AI Deployer Obligations Under the EU AI Act?
This is the misconception driving the largest compliance gap across enterprise AI governance today. Many organisations assume their vendor’s certification protects them. It does not.
Under Article 26, deployers hold independent obligations separate from providers. Regulators can demand evidence of oversight, logging, governance, and lawful operational use regardless of vendor certification status.
| Requirement | Provider | Deployer |
| --- | --- | --- |
| Technical conformity assessment | ✅ Required | ❌ Not required |
| Human oversight implementation | Partial — design only | ✅ Required operationally |
| Operational logging retention | Partial | ✅ Required |
| Fundamental Rights Impact Assessment | ❌ Not required | ✅ Required for certain deployers (Article 27) |
| Employee and operator training | ❌ Not required | ✅ Required |
A conformity assessment proves the provider built the system correctly. It does not prove your organisation operates lawfully.
Deployer Obligations Checklist (Article 26)
- ☐ Implement and document human oversight procedures for every high-risk AI system in operation
- ☐ Maintain operational logs for the entire period of AI system use
- ☐ Conduct a Fundamental Rights Impact Assessment before deploying high-risk AI affecting vulnerable groups or public services
- ☐ Ensure all staff operating or monitoring the AI system have received documented training
- ☐ Establish and test an incident escalation and reporting workflow
- ☐ Register the high-risk AI system in the EU AI Act public database (Article 71) where required
- ☐ Monitor the AI system’s performance in production and report serious incidents to the market surveillance authority
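An evidence-completeness check against the Article 26 checklist above can be sketched in a few lines. The keys are our shorthand for the checklist items, not statutory terms:

```python
# Sketch of an evidence-completeness check against the Article 26
# checklist above. Keys are our shorthand, not statutory terms.
DEPLOYER_EVIDENCE = [
    "human_oversight_procedures",
    "operational_logs",
    "fria",
    "operator_training_records",
    "incident_workflow",
    "eu_database_registration",
    "post_market_monitoring",
]

def missing_evidence(collected: set[str]) -> list[str]:
    """Return the checklist items for which no evidence has been gathered."""
    return [item for item in DEPLOYER_EVIDENCE if item not in collected]

print(missing_evidence({"operational_logs", "fria"}))
```

Running this per high-risk system surfaces exactly the gap a market surveillance authority would probe: vendor certification covers none of these items.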
6️⃣ EU AI Act Conformity Assessment Checklist: Documentation Is the Audit
Under the EU AI Act, documentation is the compliance programme. Annex IV requirements for high-risk systems are extensive, and regulators will expect consistency across all of them. Organisations starting from scratch need 3–6 months to complete this — any organisation not already in this process is on the critical path.
| Required Area | Evidence Expected | Key Article |
| --- | --- | --- |
| Risk management system | Article 9 governance documentation, risk log, mitigation records | Article 9 |
| Technical documentation | Full Annex IV documentation package, system description, architecture | Annex IV |
| Data governance controls | Training and validation data records, bias assessment, data provenance | Article 10 |
| Human oversight | Escalation and intervention procedures, override mechanisms | Article 14 |
| Logging and traceability | Operational logs, event records, retention policy | Article 12 |
| Post-market monitoring | Incident response plan, review processes, serious incident reports | Article 72 |
| CE marking readiness | Conformity assessment completion evidence, declaration of conformity | Article 48 |
Annex IV Technical Documentation Checklist
- ☐ General description of the AI system including its intended purpose and the version of the software
- ☐ Description of the elements of the AI system and the process for its development
- ☐ Detailed information on the monitoring, functioning, and control of the AI system
- ☐ Description of the risk management system (Article 9) and results of testing
- ☐ Training, validation, and testing data governance documentation
- ☐ Description of the human oversight measures and the technical means enabling human oversight
- ☐ Description of the logging capabilities including what is captured and the retention period
- ☐ Instructions for use, including contact details of the provider
7️⃣ How Do GDPR and the EU AI Act Overlap?
The EU AI Act’s penalty framework exceeds even GDPR’s maximum of €20 million or 4% of global annual turnover — a failure triggering both regimes creates compounding exposure. McKinsey data shows 88% of organisations already use AI in at least one business function, the majority involving personal data.
Treating GDPR and AI governance as separate programmes creates audit risks. The overlap is operational. Article 10 (data governance) intersects with GDPR’s lawful basis, while Article 14 (human oversight) aligns with GDPR Article 22 on automated decision-making.
| GDPR Provision | EU AI Act Equivalent | Combined Obligation |
| --- | --- | --- |
| Article 22 — automated decisions | Article 14 — human oversight | Document both: the legal basis for automated decision and the override mechanism |
| Lawful basis documentation | Article 10 data governance | One unified data governance record covering both regimes |
| Data Protection Impact Assessment | Fundamental Rights Impact Assessment | Run jointly — one assessment satisfying both requirements |
| Data minimisation principle | Training data governance controls | Single data minimisation policy covering training and operational data |
| Accountability principle | Quality management systems | Unified governance accountability framework |
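The unified-record idea in the table above amounts to keeping one control register tagged with every regime a control satisfies. A sketch — control names and article shorthands are our own labels:

```python
# Sketch: one unified evidence register keyed by control, tagged with the
# regime(s) it satisfies, mirroring the overlap table above.
# Control names and article shorthands are our own labels.
CONTROL_MAP = {
    "human_override_mechanism": {"gdpr": "Art. 22", "ai_act": "Art. 14"},
    "data_governance_record":   {"gdpr": "lawful basis", "ai_act": "Art. 10"},
    "joint_dpia_fria":          {"gdpr": "DPIA", "ai_act": "FRIA"},
    "data_minimisation_policy": {"gdpr": "Art. 5(1)(c)", "ai_act": "training data governance"},
}

def regimes_covered(control: str) -> set[str]:
    """Which regimes a single documented control provides evidence for."""
    return set(CONTROL_MAP.get(control, {}))

print(regimes_covered("joint_dpia_fria"))
```

The payoff is audit efficiency: one documented control answers two regulators, instead of two teams producing divergent evidence for the same mechanism.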
8️⃣ Common EU AI Act Compliance Mistakes Organisations Are Making
| ❌ Common Mistake | ✅ What to Do Instead |
| --- | --- |
| Treating the Act as a documentation exercise | Build operational evidence alongside governance — documentation must reflect actual system behaviour |
| Assuming vendor certification removes liability | Document all deployer obligations independently from provider certification status |
| Running GDPR and AI governance separately | Create unified system-level governance records satisfying both regimes simultaneously |
| Ignoring embedded AI in SaaS platforms | Include all vendor AI in inventories — silent AI feature updates do not reduce liability |
| Classifying systems informally | Create written, legally defensible classification rationale for every system in scope |
9️⃣ EU AI Act Compliance for SMEs: What Proportionality Means in Practice
The EU AI Act includes specific proportionality provisions for small and medium-sized enterprises (SMEs) and startups. Articles 57–61 establish regulatory sandboxes, and Article 62 gives SMEs priority sandbox access and simplified documentation pathways. However, proportionality does not mean exemption — the core risk classification, inventory, and deployer obligation requirements apply regardless of company size.
- SMEs are not exempt from high-risk AI obligations — proportionality applies to administrative burden, not to substantive safety requirements
- National competent authorities must provide SMEs with dedicated guidance, support, and sandbox access
- Reduced fees apply for SMEs in certain conformity assessment procedures
- The financial cost estimates (€2–5M for mid-size organisations) assume full implementation from scratch — SMEs with existing ISO 27001 or SOC 2 controls can significantly reduce this through control mapping
🔟 How to Comply With the EU AI Act: 8-Step Implementation Roadmap
| Step | Action | Timeline | Output |
| --- | --- | --- | --- |
| 1 | Build complete AI system inventory across all business units and vendors | Weeks 1–2 | Documented inventory register with system owner assignments |
| 2 | Conduct formal risk classification for each system against Annex III | Weeks 2–4 | Written classification rationale for each system |
| 3 | Perform gap assessment against high-risk obligations for in-scope systems | Weeks 3–6 | Gap register with prioritised remediation plan |
| 4 | Draft Annex IV technical documentation for each high-risk system | Months 2–4 | Complete technical documentation package per system |
| 5 | Implement and document human oversight procedures and escalation workflows | Months 2–3 | Operational oversight procedures with training records |
| 6 | Conduct Fundamental Rights Impact Assessment for relevant deployers | Month 3 | Completed FRIA with mitigation actions documented |
| 7 | Align GDPR and EU AI Act controls into unified governance framework | Months 3–4 | Unified compliance evidence library |
| 8 | Complete conformity assessment and prepare for EU database registration | Months 4–6 | Conformity declaration and CE marking readiness |
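Working backwards from the deadline, the roadmap above implies a latest sensible start date. A back-planning sketch using the article's own 3–6 month estimate, with a month simplified to 30 days:

```python
from datetime import date, timedelta

# Back-planning sketch: given the August 2026 deadline and the 3-6 month
# estimate above. A month is simplified to 30 days; durations are the
# article's own estimates, not statutory periods.
DEADLINE = date(2026, 8, 2)  # Annex III high-risk obligations apply

def latest_start(months_needed: int = 6) -> date:
    """Latest date work can begin and still finish by the deadline."""
    return DEADLINE - timedelta(days=months_needed * 30)

print(latest_start(6))  # 2026-02-03 — consistent with "begin no later than Q1 2026"
```

Even the optimistic 3-month path leaves no slack for a second high-risk system discovered mid-programme, which is why the inventory step comes first.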
1️⃣1️⃣ Frequently Asked Questions: EU AI Act Compliance
Q: When does the EU AI Act become enforceable for high-risk AI systems?
EU AI Act enforcement for high-risk AI systems under Annex III begins in August 2026. Prohibited AI practices (unacceptable risk) have been enforceable since February 2025. GPAI and foundation model obligations became enforceable in August 2025. Organisations deploying high-risk AI must have completed inventory, classification, Annex IV documentation, human oversight implementation, and conformity assessment before August 2026.

Q: Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act applies based on the effect of the AI system on EU persons, not where the company is incorporated. A US insurer using AI to score EU applicants is in scope. A Singapore HR platform screening EU candidates is in scope. Any organisation providing or deploying AI systems whose output affects individuals located in the EU is subject to the regulation’s requirements.

Q: What is the difference between a provider and a deployer under the EU AI Act?
A provider is the organisation that develops or places an AI system on the market. A deployer is any organisation that uses an AI system under its own authority for professional purposes. Under Article 26, deployers have independent obligations including human oversight implementation, operational logging, Fundamental Rights Impact Assessments, and employee training — regardless of whether the provider has completed their own conformity assessment.

Q: How long does EU AI Act compliance take to complete for a high-risk AI system?
Organisations starting from scratch typically require 3 to 6 months to achieve compliance readiness for a single high-risk AI system — covering inventory, classification, gap assessment, Annex IV technical documentation, human oversight procedures, and conformity assessment preparation. Organisations with existing ISO 27001 or SOC 2 controls can reduce this timeline through systematic control mapping. VISTA InfoSec recommends beginning no later than Q1 2026 for August 2026 deadline readiness.

Q: What happens if our organisation is found non-compliant with the EU AI Act?
Non-compliance consequences depend on the violation tier. Prohibited AI practices carry fines up to €35 million or 7% of global annual turnover. Violations of high-risk AI obligations carry fines up to €15 million or 3% of global turnover. Providing incorrect information to authorities carries fines up to €7.5 million or 1%. Market surveillance authorities can also order non-compliant systems withdrawn from the EU market and require corrective action within specified timeframes.

Q: Is vendor AI covered under our organisation’s EU AI Act compliance obligations?
Yes. If your organisation deploys a vendor AI system for professional purposes and that system is in scope of the EU AI Act, your organisation holds independent deployer obligations under Article 26. Additionally, the AI inventory requirement means all vendor-embedded AI — including SaaS co-pilots, analytics engines, and scoring tools introduced through silent product updates — must be inventoried, classified, and governed. Vendor certification does not transfer deployer liability.
1️⃣2️⃣ How to Comply With the EU AI Act Before August 2026
The organisations most exposed under the EU AI Act are not the ones using the most advanced AI. They are the ones unable to prove operational control. An effective EU AI Act compliance checklist requires: complete AI system inventorying, written risk classification rationale, independent deployer governance evidence, Annex IV technical documentation readiness, GDPR-integrated controls, human oversight verification, and logging and traceability mechanisms.
Most importantly, organisations need evidence consistency across every governance layer. When enforcement begins, regulators will not assess intentions. They will assess proof.
Request an EU AI Act Readiness Assessment from VISTA InfoSec today.
VISTA InfoSec is a CREST-approved cybersecurity consultancy delivering practitioner-led AI governance assessments grounded in operational audit experience — not theoretical compliance templates. For organisations building an actionable EU AI Act checklist for businesses, the window for preparation is already narrowing.
Email: eusales[@]vistainfosec.com
Contact Us: https://vistainfosec.com/contact-us/
Narendra Sahoo (PCI QPA, PCI QSA, PCI SSF ASSESSOR, CISSP, CISA, CRISC, 27001 LA) is the Founder and Director of VISTA InfoSec, a global Information Security Consulting firm, based in the US, Singapore & India. Mr. Sahoo holds more than 25 years of experience in the IT Industry, with expertise in Information Risk Consulting, Assessment, & Compliance services. VISTA InfoSec specializes in Information Security audit, consulting and certification services which include GDPR, HIPAA, CCPA, NESA, MAS-TRM, PCI DSS Compliance & Audit, PCI PIN, SOC2 Compliance & Audit, PDPA, PDPB to name a few. The company has for years (since 2004) worked with organizations across the globe to address the Regulatory and Information Security challenges in their industry. VISTA InfoSec has been instrumental in helping top multinational companies achieve compliance and secure their IT infrastructure.