Compliance

EU AI Act

The EU AI Act, Regulation (EU) 2024/1689, is the first comprehensive horizontal regulation of artificial intelligence systems. It classifies systems by risk, banning an unacceptable-risk tier outright and imposing the heaviest remaining obligations on high-risk uses, which include most HR and worker-management AI.

The EU AI Act, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence, is the world’s first comprehensive horizontal regulation of artificial intelligence. It classifies AI systems by risk and imposes obligations on providers and deployers proportional to that risk. For US companies that hire, evaluate, or pay people in the EU using AI tools, the act is a structural compliance event because most HR AI uses sit in the high-risk tier.

How the EU AI Act Works

The act uses a four-tier risk pyramid:

  • Unacceptable risk (prohibited). Social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), emotion recognition in workplaces and education institutions (outside medical or safety uses), and manipulative or exploitative AI systems are banned outright. These prohibitions have applied since February 2, 2025.
  • High risk. AI systems listed in Annex III (across eight domain areas including employment) or used as safety components of regulated products under Annex I. High-risk systems must meet detailed requirements for risk management, data quality, technical documentation, logging, transparency, human oversight, accuracy, and cybersecurity, and must be registered in an EU database before being placed on the market.
  • Limited risk. Systems that interact with users (such as chatbots) or generate synthetic content (such as deepfakes) must meet transparency duties.
  • Minimal risk. No mandatory obligations under the act, though voluntary codes of conduct are encouraged.

Annex III, point 4 covers employment, worker management, and access to self-employment. This includes recruitment and selection systems (such as resume screening and interview scoring), systems that make or materially influence decisions on terms of work, promotion, or termination, task allocation based on individual behavior, and performance monitoring and evaluation systems. Most modern people-operations AI tools land here.
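As an illustration only, not legal advice, the classification above can be sketched as a triage table. The use-case keys and tier labels below are our own shorthand, not terms from the regulation:

```python
# Illustrative mapping of common people-ops AI uses to the risk tier the
# act most likely assigns them. Categories are this article's shorthand.
HR_USE_CASES = {
    "resume_screening": "high",              # Annex III, point 4: recruitment/selection
    "interview_scoring": "high",             # Annex III, point 4: recruitment/selection
    "promotion_or_termination": "high",      # Annex III, point 4: work-terms decisions
    "task_allocation_by_behavior": "high",   # Annex III, point 4: task allocation
    "performance_monitoring": "high",        # Annex III, point 4: monitoring/evaluation
    "workplace_emotion_recognition": "prohibited",  # Article 5, narrow exceptions
    "hr_helpdesk_chatbot": "limited",        # transparency duties only
}

def triage(use_case: str) -> str:
    # Unknown uses default to manual legal review, never to "minimal".
    return HR_USE_CASES.get(use_case, "needs legal review")
```

Anything not explicitly classified should fall through to legal review rather than to the minimal tier, since Annex III is a closed list that legal counsel must read against the actual deployment.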

Who Must Comply

The act applies to a broad set of actors:

  • Providers that place AI systems on the EU market or put them into service in the EU, wherever the provider is established
  • Deployers of AI systems that are established in the EU
  • Providers and deployers established outside the EU where the output of the AI system is used in the EU
  • Importers and distributors of AI systems
  • Product manufacturers placing AI systems on the EU market together with their product

A US HR-tech vendor selling a recruiting model in the EU is a provider. A US company using that model to screen EU applicants is a deployer. Both have obligations.

Phased Application

Per Article 113 of the regulation, the AI Act applies in phases:

  • August 1, 2024: Entry into force
  • February 2, 2025: Prohibitions on unacceptable-risk AI and AI literacy obligations
  • August 2, 2025: Rules on general-purpose AI models, governance bodies, and confidentiality
  • August 2, 2026: Main obligations for high-risk AI systems under Annex III, including HR and worker-management systems
  • August 2, 2027: High-risk AI systems that are safety components of products under Annex I

Penalties

Article 99 of the regulation sets administrative fines:

  • Up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher, for violating the prohibitions on unacceptable-risk AI
  • Up to 15 million euros or 3 percent of turnover, whichever is higher, for violating most other obligations on operators
  • Up to 7.5 million euros or 1 percent of turnover, whichever is higher, for supplying incorrect, incomplete, or misleading information to authorities
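The "fixed amount or percentage of turnover" ceilings above can be computed directly. A minimal sketch; the tier names are our own invention, and the numbers come from Article 99 as listed above:

```python
def fine_cap(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative Article 99 fine ceiling: the higher of a fixed amount and a
    percentage of worldwide annual turnover; for SMEs and startups, the lower."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }
    fixed, pct = tiers[tier]
    pick = min if is_sme else max
    return pick(fixed, pct * worldwide_turnover_eur)

# A group with 2 billion euros turnover violating a prohibition:
# 7% of 2,000,000,000 = 140,000,000, which exceeds the 35 million floor.
print(fine_cap("prohibited_practice", 2_000_000_000))  # 140000000.0
```

The actual fine within the ceiling depends on factors such as the nature, gravity, and duration of the infringement, which no formula captures.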

For SMEs and startups, the lower of the two figures applies. National market surveillance authorities enforce the act at member-state level, while the new European AI Office oversees general-purpose AI models.

Common Pitfalls

  • Assuming “we just use a vendor.” A deployer of a high-risk system has independent obligations, including human oversight, monitoring, logging, and informing affected workers.
  • Treating the act as a future problem. The prohibitions have applied since February 2, 2025, and the main high-risk obligations apply from August 2, 2026, which means risk-management systems must be in place before that date, not after.
  • Overlooking the workplace-AI prohibition. Emotion recognition in workplaces is prohibited, subject to narrow exceptions for medical or safety reasons. Inferring stress, attention, or mood from worker video or audio falls in the highest fine tier.
  • Forgetting fundamental-rights impact assessments. Public bodies and certain private deployers of high-risk AI must run a fundamental-rights impact assessment under Article 27 before deployment.
  • Ignoring adjacent regimes. The EU Platform Work Directive layers worker-facing transparency and human-review rights on top of the AI Act’s risk-management duties; DAC7, while not an AI rule, is the data-collection regime most platforms must run alongside; and KYC identity-verification flows that use biometric AI intersect with the act’s biometric categorization rules.
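For the deployer-side logging and human-oversight evidence the pitfalls above point to, a minimal record might look like the following. The field names and schema are our own assumption; the act mandates the substance of the evidence, not a format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DeployerLogEntry:
    """Hypothetical per-decision evidence record for a high-risk HR AI system."""
    system_name: str       # which AI system produced the output
    decision: str          # e.g. "shortlisted", "rejected"
    subject_id: str        # the worker or applicant affected
    human_reviewer: str    # the person exercising human oversight
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

entry = DeployerLogEntry("cv-screener-v2", "shortlisted", "applicant-451", "j.doe")
```

Whatever the schema, the point is that each automated output is traceable to a named system and a named human reviewer, with a timestamp a regulator can audit.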

Omnivoo Contract Management records the AI systems used in contractor selection, evaluation, and termination workflows, retains the logs and human-review evidence, and produces the deployer-side documentation EU regulators look for under the high-risk AI regime.

Frequently asked questions

When do the EU AI Act obligations apply?
The AI Act entered into force on August 1, 2024 and applies in phases. Prohibited AI practices and AI literacy duties applied from February 2, 2025. General-purpose AI model rules and governance applied from August 2, 2025. The main obligations for high-risk AI systems under Annex III, including most HR and worker-management systems, apply from August 2, 2026. High-risk AI systems embedded in regulated products under Annex I have an extended transition until August 2, 2027.
Are HR and worker-management AI systems high-risk?
Yes for most uses. Annex III, point 4 lists AI systems used in employment, workers management and access to self-employment, specifically systems for recruitment or selection, decisions affecting promotion or termination, allocating tasks based on individual behavior, and monitoring or evaluating performance. These systems carry the full set of high-risk obligations: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, and cybersecurity.
Does the EU AI Act apply to non-EU companies?
Yes. It applies to providers placing AI systems on the EU market and to providers or deployers established outside the EU where the output of the AI system is used in the EU. A US company that runs a CV-screening model on EU applicant data, or whose AI-generated decision is acted on inside the EU, falls in scope.
What are the penalties under the EU AI Act?
Penalties are tiered. Non-compliance with prohibited AI practices can be fined up to 35 million euros or 7 percent of worldwide annual turnover, whichever is higher. Non-compliance with other obligations (most high-risk system duties) can be fined up to 15 million euros or 3 percent of turnover. Supplying incorrect information to authorities is fined up to 7.5 million euros or 1 percent of turnover. Small and medium-sized businesses face the lower of the two figures.
How does the EU AI Act overlap with the GDPR?
They apply in parallel. The AI Act regulates the AI system. The GDPR regulates the personal data it processes. An HR AI tool can be both a high-risk AI system under the AI Act and a controller-processor activity under the GDPR. Both regimes require documentation, risk assessment, and human review of automated decisions, but with different scopes and remedies.

Omnivoo handles this for you

Stop worrying about deployer obligations. Omnivoo keeps the records, logs, and human-review evidence the EU AI Act’s high-risk regime requires, before the August 2026 deadline arrives.

Get started