Regulation and compliance – AI and data privacy

Safeguarding employees’ and candidates’ rights has become a priority as use of AI raises concerns around bias, for example in hiring decisions. This is mostly regulated through general data privacy laws. But some countries are applying employment-specific regulation.

Key themes

  • There’s little legislation specifically regulating AI in employment.
  • This is changing in the United States, where New York City regulates AI as a recruitment tool.
  • Other countries typically use general data privacy laws to regulate AI. 

The use of AI in employment, whether for recruitment, performance management or monitoring purposes, remains a hot topic. It's a priority for governments and regulators. There's little employment-specific regulation, however; AI use at work is generally governed by general data privacy laws.

The United States is an exception. New York City has led the way in regulating employers’ use of AI for recruitment. The New York City AI law makes it unlawful to use automated employment decision tools to screen candidates and employees within New York City, unless certain bias audit and notice requirements are met. Several states are considering similar restrictions.

The New York City law came into force in July 2023. It applies to employers that want to use automated decision tools in employment processes, such as hiring or promotion decisions, and at all stages of the recruitment process. It doesn’t only apply to final decisions about which candidate to appoint. For example, the law applies if an automated tool is used to make shortlisting decisions.

Under the rules, employers must carry out independent annual bias audits before using automated tools. The aim is to ensure the tools don’t discriminate because of gender, ethnicity or race. Employers must publish the audit results, notify candidates that an automated tool will be used and allow them to ask for a different means of assessment. Employers risk fines of up to US$1,500 for breaking the law.

Although the U.S. law is employment-specific, it's similar to general data privacy protections in other countries. Under the EU GDPR, individuals are entitled not to be subject to a decision based solely on automated processing unless an exception applies. If automated processing is allowed, an employer must adopt measures to safeguard the employee's or candidate's rights and freedoms. These include a right to ask for human intervention and to contest an automated decision. Employers should carry out data protection impact assessments before using automated systems.

Similar rules apply under the UK GDPR, although the government's draft Data Protection and Digital Information Bill expands and clarifies the rules on automated decision-making. The Bill removes the general ban on automated decision-making and instead makes it subject to safeguards. It maintains the ban on processing special category data (such as information about someone's race, beliefs, health or sexual orientation) for automated decision-making, unless the processing falls under a pre-defined condition. The effect of these amendments is that automated decision-making remains specifically restricted for special categories of personal data, while for personal data in general a system of safeguards will be introduced.

Under both the EU and the UK GDPR, employers must give individuals meaningful information about the logic involved in automated decision-making and the consequences of the processing. This includes how the information they provide is relevant to the decision-making process, the criteria the system uses to make decisions and the risks involved and how they’re mitigated – bias testing to ensure that results are non-discriminatory, for example. Individuals must also be given information about how to challenge automated decisions and ask for human intervention.

The German government is considering an Employee Data Protection Act to sit alongside the existing Federal Data Protection Act. The Employee Data Protection Act would regulate data processing in employment more precisely. Proposals include transparency requirements if employers are using AI and enhancing works council co-determination rights in employee data.

Italy already has a Transparency Decree, as we reported last year. It relates to the use of automated systems by employers and supplements GDPR requirements.

Regulators remain active in the area. Following guidance on AI and disability in 2022, in 2023 the U.S. Equal Employment Opportunity Commission published guidance on the risks of other types of discrimination against job seekers if employers use AI or other automated systems. It focuses on assessing whether AI-influenced selection processes have a disparate impact on protected groups and how employers should respond.

The UK Information Commissioner’s Office released draft guidance at the end of 2023 on processing employee data during recruitment processes. It focuses on data protection, not the discrimination risks. Addressing the equality implications of AI is one of the Equality and Human Rights Commission’s strategic priorities, although it hasn’t produced technical guidance for employers so far.

In Asia-Pacific, the regulatory focus on AI extends beyond the employment context. Hong Kong's Office of the Government Chief Information Officer published a revised Ethical Artificial Intelligence Framework in August 2023. It includes principles of transparency, fairness and the need for human intervention in appropriate cases. The Indonesian government is proposing similar guidelines.

Looking forward, the EU’s AI Act was agreed in December 2023. It’s designed to pave the way for comprehensive regulation of AI in the EU and is likely to become a global benchmark. Employers should note that under the Act, AI systems designed to recognise emotions in the workplace are banned for posing an unacceptable level of risk.