The EU AI Act, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence, is the world’s first comprehensive horizontal regulation of artificial intelligence. It classifies AI systems by risk and imposes obligations on providers and deployers proportional to that risk. For US companies that hire, evaluate, or pay people in the EU using AI tools, the act is a structural compliance event, because most HR AI use cases sit in the high-risk tier.
How the EU AI Act Works
The act uses a four-tier risk pyramid:
- Unacceptable risk (prohibited). Social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces and schools (except for medical or safety reasons), and manipulative or exploitative AI systems are banned outright. These prohibitions have applied since February 2, 2025.
- High risk. AI systems listed in Annex III (across eight domain areas including employment) or used as safety components of regulated products under Annex I. High-risk systems must meet detailed requirements for risk management, data quality, technical documentation, logging, transparency, human oversight, accuracy, and cybersecurity, and must be registered in an EU database before being placed on the market.
- Limited risk. Systems that interact with users (such as chatbots) or generate synthetic content (such as deepfakes) must meet transparency duties.
- Minimal risk. No mandatory obligations under the act, though voluntary codes of conduct are encouraged.
Per Annex III to the Regulation, point 4 covers employment and worker-management AI. This includes recruitment and selection systems (such as resume screening, interview scoring), systems used to make or materially influence decisions on terms of work, promotion, or termination, task allocation based on individual behavior, and performance monitoring and evaluation systems. Most modern people-operations AI tools land here.
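By way of illustration only, the Annex III point 4 scoping above can be sketched as a lookup table. The use-case labels and the mapping below are our own assumptions drawn from the summary above, not text from the Regulation; real classification always requires a case-by-case legal assessment.

```python
# Hypothetical mapping of common HR AI use cases to AI Act risk tiers,
# following the Annex III point 4 summary above. Labels are illustrative.
HR_USE_CASE_TIER = {
    "resume_screening": "high-risk",        # recruitment and selection
    "interview_scoring": "high-risk",       # recruitment and selection
    "promotion_decision": "high-risk",      # terms of work / promotion
    "termination_decision": "high-risk",    # termination decisions
    "task_allocation": "high-risk",         # behaviour-based task allocation
    "performance_monitoring": "high-risk",  # monitoring and evaluation
    "workplace_emotion_recognition": "prohibited",  # Article 5 ban
    "hr_chatbot_faq": "limited-risk",       # transparency duties only
}

def risk_tier(use_case: str) -> str:
    """Return the assumed AI Act tier for an HR use case.

    Anything not listed defaults to minimal risk in this sketch;
    in practice that default is exactly what needs legal review.
    """
    return HR_USE_CASE_TIER.get(use_case, "minimal-risk")
```

The point of the sketch is the skew: nearly every row in a typical HR stack maps to the high-risk tier.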
Who Must Comply
The act applies to a broad set of actors:
- Providers that place AI systems on the EU market or put them into service in the EU, wherever the provider is established
- Deployers of AI systems that are established in the EU
- Providers and deployers established outside the EU where the output of the AI system is used in the EU
- Importers and distributors of AI systems
- Product manufacturers placing AI systems on the EU market together with their product
A US HR-tech vendor selling a recruiting model in the EU is a provider. A US company using that model to screen EU applicants is a deployer. Both have obligations.
Phased Application
Per Article 113 of the regulation, the AI Act applies in phases:
- August 1, 2024: Entry into force
- February 2, 2025: Prohibitions on unacceptable-risk AI and AI literacy obligations
- August 2, 2025: Rules on general-purpose AI models, governance bodies, and confidentiality
- August 2, 2026: Main obligations for high-risk AI systems under Annex III, including HR and worker-management systems
- August 2, 2027: High-risk AI systems that are safety components of products under Annex I
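The phased timeline above lends itself to a simple date check. This is a minimal sketch using the Article 113 milestones as summarized above; the labels are our own shorthand.

```python
from datetime import date

# Milestone dates from the Article 113 phase-in summarized above.
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibitions and AI literacy"),
    (date(2025, 8, 2), "GPAI rules, governance, confidentiality"),
    (date(2026, 8, 2), "high-risk obligations (Annex III, incl. HR)"),
    (date(2027, 8, 2), "high-risk safety components (Annex I)"),
]

def applicable(on: date) -> list[str]:
    """Return the obligation sets already applicable on a given date."""
    return [label for d, label in MILESTONES if d <= on]
```

For example, a compliance check run in September 2026 would show the Annex III high-risk obligations already in force while the Annex I phase is still pending.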
Penalties
Article 99 of the regulation sets administrative fines:
- Up to 35 million euros or 7 percent of worldwide annual turnover for violation of prohibitions on unacceptable-risk AI
- Up to 15 million euros or 3 percent of turnover for violation of most other obligations on operators
- Up to 7.5 million euros or 1 percent of turnover for supplying incorrect, incomplete, or misleading information to authorities
Each cap is the higher of the fixed amount and the turnover percentage; for SMEs and startups, the lower of the two figures applies. National market surveillance authorities handle most enforcement, while the new European AI Office supervises general-purpose AI models.
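The fine arithmetic above can be sketched directly. This is a hedged reading of Article 99 as summarized here: for most companies the cap is the higher of the fixed amount and the turnover percentage, while for SMEs it is the lower; the category names are our own shorthand.

```python
def max_fine(category: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Compute the Article 99 fine ceiling, per the summary above.

    category: 'prohibition', 'operator', or 'misinformation' (our labels).
    Non-SMEs: higher of fixed cap and turnover-based cap.
    SMEs/startups: lower of the two.
    """
    caps = {
        "prohibition": (35_000_000, 0.07),      # banned practices
        "operator": (15_000_000, 0.03),         # most other obligations
        "misinformation": (7_500_000, 0.01),    # misleading authorities
    }
    fixed, pct = caps[category]
    turnover_cap = pct * turnover_eur
    return min(fixed, turnover_cap) if is_sme else max(fixed, turnover_cap)
```

For a company with 1 billion euros in turnover, a prohibition breach is capped at 70 million euros (7 percent beats the 35 million floor); the same breach by an SME with that turnover would be capped at 35 million.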
Common Pitfalls
- Assuming “we just use a vendor.” A deployer of a high-risk system has independent obligations, including human oversight, monitoring, logging, and informing affected workers.
- Treating the act as a future problem. Prohibitions have applied since February 2025. The main high-risk obligations apply from August 2, 2026, which means risk-management systems must be in place before that date, not after.
- Skipping the workplace-AI prohibition. Emotion recognition in workplaces (subject to narrow exceptions for medical or safety reasons) is prohibited. Inferring stress, attention, or mood from worker video or audio is a high-fine category.
- Forgetting fundamental-rights impact assessments. Public bodies and certain private deployers of high-risk AI must run a fundamental-rights impact assessment under Article 27 before deployment.
Adjacent Regimes
- EU Platform Work Directive: adjacent rule that layers worker-facing transparency and human-review rights on top of the AI Act’s risk-management duties.
- DAC7: not an AI rule but the data-collection regime most platforms must run alongside.
- KYC: identity verification flows that use biometric AI systems and intersect with AI Act biometric categorization rules.
Omnivoo Contract Management records the AI systems used in contractor selection, evaluation, and termination workflows, retains the logs and human-review evidence, and produces the deployer-side documentation EU regulators look for under the high-risk AI regime.