The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024, with obligations phased in through 2027. If your startup builds, deploys, or imports AI systems used in the EU, this regulation applies to you, regardless of where your company is based.
What Is the EU AI Act?
The AI Act takes a risk-based approach to AI regulation. It classifies AI systems into four risk tiers and assigns obligations proportional to the risk. The goal is to ensure AI is safe, transparent, and respects fundamental rights while still allowing innovation.
The Four Risk Categories
Unacceptable risk (banned)
These AI practices are prohibited entirely:
- Social scoring by public or private actors
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions)
- Manipulation techniques that exploit vulnerabilities (age, disability, or a specific social or economic situation)
- Emotion recognition in workplaces and schools (with limited exceptions)
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
High risk
AI systems in these domains face the most stringent requirements:
- Biometric identification and categorization
- Critical infrastructure — Energy, water, transport, digital infrastructure
- Education and training — Admissions, assessments, proctoring
- Employment — Recruitment, HR decisions, performance evaluation
- Essential services — Credit scoring, insurance, social benefits
- Law enforcement — Risk assessment, evidence evaluation
- Migration and border control
- Justice and democracy — Court decision assistance
Limited risk (transparency obligations)
Chatbots and other AI systems that interact with people must disclose that users are dealing with AI, people exposed to emotion recognition or biometric categorisation systems must be informed, and AI-generated or manipulated content (including deepfakes) must be labeled as such.
Minimal risk
Most AI systems — spam filters, AI-powered games, inventory management — fall here. No specific obligations, though voluntary codes of conduct are encouraged.
Compliance Checklist for High-Risk AI Systems
1. Classify your AI system
Determine which risk category your system falls into. If it’s used in any of the high-risk domains listed above, you’ll need to comply with the full set of requirements. When in doubt, consult the annexes of the regulation or seek legal advice.
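As a first pass, you can encode this triage in code. The sketch below is a hypothetical helper: the domain keywords are our own simplification, not the authoritative Annex III list, and the output is provisional, not legal advice.

```python
# Hypothetical first-pass risk triage. The domain keywords simplify
# Annex III and are NOT a substitute for legal review.
HIGH_RISK_DOMAINS = {
    "biometric_identification",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "justice_democracy",
}

def triage_risk_tier(domain: str, interacts_with_humans: bool) -> str:
    """Return a provisional risk tier for an AI system."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if interacts_with_humans:
        return "limited"  # transparency obligations apply
    return "minimal"

print(triage_risk_tier("employment", interacts_with_humans=True))  # -> "high"
```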
2. Implement a risk management system
Establish a continuous risk management process throughout the AI system’s lifecycle. This includes: identifying risks, estimating their likelihood and severity, implementing mitigation measures, and testing effectiveness.
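One lightweight way to make this auditable is a risk register kept in version control next to the code. A minimal sketch, with field names of our own choosing rather than any prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    # Illustrative risk-register entry; the fields are our own convention.
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    severity: int          # 1 (negligible) .. 5 (critical)
    mitigation: str
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

risks = [
    Risk("Biased ranking of candidates from underrepresented groups",
         likelihood=3, severity=5,
         mitigation="Quarterly bias audit; human review of all rejections"),
]
# Review the register highest-score first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{r.score:>2}] {r.description}")
```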
3. Ensure data quality
Training, validation, and testing datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Document your data governance practices, including collection sources, processing decisions, and bias assessments.
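These checks can be partly automated. A sketch of a report that could run in CI, assuming a pandas DataFrame with a protected-attribute column; the threshold is an arbitrary placeholder, and summary statistics alone cannot establish representativeness:

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, protected_col: str) -> dict:
    # Illustrative checks only; "representative" ultimately requires
    # domain judgment, not just summary statistics.
    return {
        "rows": len(df),
        "missing_ratio_per_column": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "group_shares": df[protected_col].value_counts(normalize=True).to_dict(),
    }

df = pd.DataFrame({"age_band": ["18-25", "26-40", "26-40", None],
                   "label": [1, 0, 1, 0]})
report = data_quality_report(df, protected_col="age_band")
# Placeholder threshold; set your own based on the risk analysis.
assert report["missing_ratio_per_column"]["age_band"] <= 0.3, "too many missing values"
```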
4. Create technical documentation
Before placing a high-risk AI system on the market, prepare detailed technical documentation covering: system description, design specifications, development process, risk management results, and testing outcomes.
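Documentation tends to drift unless its completeness is checked mechanically. A minimal sketch that fails a build when a section is missing; the file list is our own, loosely mirroring the items above, while Annex IV defines the authoritative contents:

```python
from pathlib import Path

# Our own section list, loosely mirroring the items above; Annex IV of
# the regulation defines the authoritative contents.
REQUIRED_DOCS = [
    "system_description.md",
    "design_specifications.md",
    "development_process.md",
    "risk_management_results.md",
    "testing_outcomes.md",
]

def check_documentation(doc_dir: str) -> list[str]:
    """Return the list of missing documentation files."""
    root = Path(doc_dir)
    return [name for name in REQUIRED_DOCS if not (root / name).exists()]

missing = check_documentation("docs/technical_file")
if missing:
    raise SystemExit(f"Documentation incomplete, missing: {missing}")
```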
5. Implement logging and monitoring
High-risk systems must automatically record events (logs) to enable post-market monitoring. Logs should capture: input data, system decisions, operator interactions, and any anomalies.
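A sketch of an append-only, structured (JSON Lines) event log that captures these fields; the schema is our own choice, and hashing inputs rather than storing them verbatim is one option among several (retention of raw inputs raises separate GDPR questions):

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(path: str, input_payload: str, decision: str,
                 operator: str, anomaly: str | None = None) -> None:
    # Hash the raw input so the log stays auditable without storing
    # personal data verbatim.
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(input_payload.encode()).hexdigest(),
        "decision": decision,
        "operator": operator,
        "anomaly": anomaly,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_decision("audit.log", '{"applicant_id": 42}', "reject", "operator-7",
             anomaly="confidence below threshold")
```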
6. Ensure transparency and provide instructions
Provide clear instructions for use to downstream deployers. Include: system capabilities and limitations, intended purpose, performance metrics, known risks, and human oversight requirements.
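Alongside the written document, the same information can ship as a machine-readable model card. The fields and values below are an illustrative, invented subset, not a mandated schema:

```python
# Illustrative machine-readable companion to the written instructions
# for use; field names and values are our own, not a mandated schema.
instructions_for_use = {
    "intended_purpose": "Rank job applications for human review",
    "capabilities": ["CV parsing", "relevance scoring"],
    "known_limitations": ["Trained on English-language CVs only"],
    "performance": {"accuracy": 0.91, "evaluated_on": "held-out 2024 dataset"},
    "known_risks": ["May underscore non-traditional career paths"],
    "human_oversight": "A recruiter must confirm every rejection",
}
```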
7. Enable human oversight
High-risk systems must be designed to allow effective human oversight. This means: humans must be able to understand system outputs, decide not to use the system, intervene in real time, or stop the system entirely.
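In code, oversight often reduces to routing plus a kill switch. A minimal sketch, assuming a model that emits a confidence score; the threshold and names are placeholders to be set by your own risk analysis:

```python
KILL_SWITCH_ON = False     # operators can flip this to halt the system
REVIEW_THRESHOLD = 0.85    # placeholder; tune per your risk analysis

def decide(score: float, explanation: str) -> str:
    """Route a model output: halt, human review, or auto-apply."""
    if KILL_SWITCH_ON:
        return "halted: system stopped by operator"
    if score < REVIEW_THRESHOLD:
        # Low confidence: a human sees the explanation and decides.
        return f"queued for human review ({explanation})"
    return "auto-applied (human may still override afterwards)"

print(decide(0.62, "sparse work history"))  # -> queued for human review
```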
8. Ensure accuracy, robustness, and cybersecurity
Systems must achieve appropriate levels of accuracy and be resilient to errors, faults, and adversarial attacks. Implement cybersecurity measures proportionate to the risks.
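Robustness claims should be backed by repeatable tests. A toy sketch that checks decision stability under small input perturbations, using a stub in place of a real model:

```python
import random

def model_score(features: list[float]) -> float:
    # Stub standing in for a real model.
    return sum(features) / len(features)

def is_robust(features: list[float], epsilon: float = 0.01,
              trials: int = 100, threshold: float = 0.5) -> bool:
    """Check that the accept/reject decision is stable under small noise."""
    baseline = model_score(features) >= threshold
    for _ in range(trials):
        noisy = [x + random.uniform(-epsilon, epsilon) for x in features]
        if (model_score(noisy) >= threshold) != baseline:
            return False
    return True

assert is_robust([0.7, 0.8, 0.6]), "decision flips under tiny perturbations"
```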
9. Register in the EU database
High-risk AI systems must be registered in a public EU database before being placed on the market. This includes system details, intended purpose, and conformity assessment results.
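Registration itself happens through the Commission’s portal rather than an API we can call from code, so the most a codebase can do is keep the submitted details as structured, versioned data. All fields below are hypothetical:

```python
# Hypothetical internal record of what was submitted to the EU database;
# the database is filled in via the Commission's portal, not an API.
registration_record = {
    "provider": "Example AI GmbH",
    "system_name": "HireRank",
    "version": "2.3.0",
    "intended_purpose": "Rank job applications for human review",
    "risk_tier": "high",
    "conformity_assessment": "internal control (Annex VI)",
    "registered_on": "2026-07-15",
}
```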
Timeline: When Do Obligations Apply?
- February 2025: Bans on prohibited AI practices take effect
- August 2025: Obligations for general-purpose AI models begin
- August 2026: Most AI Act obligations become enforceable, including high-risk system requirements
- August 2027: Remaining obligations for high-risk systems that are safety components of products covered by Annex I (EU harmonisation legislation)
Penalties
Non-compliance penalties are significant:
- Prohibited practices: Up to €35 million or 7% of global annual turnover, whichever is higher
- High-risk violations: Up to €15 million or 3% of global annual turnover, whichever is higher
- Supplying incorrect information: Up to €7.5 million or 1% of global annual turnover, whichever is higher
SMEs and startups get proportionate caps: for them, the lower of the two amounts applies (see the worked example below).
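To make the SME rule concrete, a quick calculation; the turnover figure is invented for illustration:

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float,
             is_sme: bool) -> float:
    """Maximum fine: higher of the two amounts, or the lower for SMEs."""
    pct_amount = turnover_eur * pct
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Invented example: a startup with EUR 10M turnover and a prohibited practice.
print(fine_cap(10_000_000, 35_000_000, 0.07, is_sme=True))   # 700000.0
print(fine_cap(10_000_000, 35_000_000, 0.07, is_sme=False))  # 35000000.0
```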
Frequently Asked Questions
Does the EU AI Act apply to U.S. startups?
Yes, if your AI system is placed on the EU market or its output is used in the EU. Like GDPR, the AI Act has extraterritorial reach.
Is my chatbot or AI assistant high-risk?
Probably not, unless it’s used in a high-risk domain (employment decisions, credit scoring, etc.). General-purpose chatbots are “limited risk” and must simply disclose they’re AI. However, general-purpose AI models (like foundation models) have separate transparency obligations.
How do I start preparing?
Classify your AI systems, identify which risk tier they fall into, and start building documentation. Use Complara’s EU AI Act checklist to track requirements and evidence as you go.