Overview of the EU AI Act – Part 4: AI in the Workplace

Understanding the EU AI Act: AI in the Workplace
The European Union's AI Act marks a significant regulatory development aimed at addressing the challenges and risks posed by Artificial Intelligence (AI), including in the workplace. Algorithmic management is now commonplace: many employers use automated tools to oversee, assess, and direct employee performance, and a recent OECD survey found that over 70% of managers reported using at least one automated tool to manage employees. This surge in workplace AI has raised concerns among workers and fuelled demand for rules governing how AI is deployed at work.
The EU AI Act: A Regulatory Framework
The EU AI Act is the first comprehensive regulation of AI. It recognizes the inherent risks of AI systems used in the workplace and establishes rules to safeguard workers: it aims to ensure that AI is used ethically and prohibits certain unacceptable practices in the employment context.
Key Prohibitions on AI Systems
1. Biometric Categorization: A Complete Ban
The AI Act strictly prohibits AI systems that categorize individuals on the basis of biometric data in order to infer attributes such as race, political opinions, or trade union membership. Employers using such categorization, for example in recruitment, expose themselves to the Act's most severe penalties.
2. Emotion Recognition Systems: Limited Exceptions
Emotion recognition in the workplace is prohibited, largely because of doubts about the reliability of such systems. The Act permits them only for specific medical or safety purposes. Notably, the prohibition also covers recruitment, so job candidates are protected as well as employees.
3. Social Scoring: Conditional Bans
The Act forbids social scoring systems that evaluate people on the basis of their social behaviour or personal characteristics and lead to unjustified or disproportionate detrimental treatment. For instance, demoting a worker because of personality traits inferred from their behaviour outside work could fall under this prohibition.
High-Risk AI Systems in Employment
The AI Act classifies certain employment-related AI systems as "high-risk." These systems are not banned, but their deployment is subject to mandatory safeguards designed to protect employees. High-risk systems include those used for:
- Recruitment and selection
- Promotions and termination of employment
- Task allocation and monitoring
- Performance evaluations
Notably, a provider can avoid the high-risk designation by assessing that its system does not pose a significant risk. Such self-assessments must be documented, but they can create gaps in worker protections.
Employer Obligations to Workers
The EU AI Act outlines several responsibilities for employers deploying AI systems:
General Responsibilities
- Employers deploying high-risk AI systems must ensure human oversight and monitor the systems' operation on an ongoing basis.
- Employers also have a specific obligation to inform workers and their representatives prior to implementing a high-risk AI system.
Enhanced Obligations for Public Authorities
Public sector employers face additional obligations, including the requirement to conduct a Fundamental Rights Impact Assessment (FRIA) before deploying a high-risk system. The assessment evaluates how the system may affect employees' fundamental rights.
Worker Rights and Remedies
The AI Act provides limited remedies for individuals affected by high-risk AI systems:
- Right to Explanation: Individuals can seek a clear understanding of decisions made based on AI outputs, enhancing transparency.
- Right to Lodge Complaints: Workers can lodge complaints with the relevant national authority about suspected violations of the AI Act, even if they are not personally affected.
While these rights provide a baseline of protection, their effectiveness is limited because national authorities are not obliged to investigate complaints.
Monitoring and Engaging Stakeholders
As implementation of the AI Act progresses, ongoing oversight is essential. Key areas to watch include how worker consultation mechanisms are operationalized, the scope of employer obligations, and the rules on emotion recognition systems. Active engagement from civil society organizations and workers' rights groups will be crucial to ensuring that workers' rights are effectively defended and strengthened in the age of AI.