Digitizing Employment

There are many aspects to digitization in employment, but one of the latest is the incorporation of AI into employment and human resources applications. Organizations are embracing AI to perform a range of employment-related functions. In this chapter, we consider how employers can build digital trust when using AI to make or influence employment decisions.

Introduction

According to a 2022 survey from the Society for Human Resource Management, at least 79% of employers use some form of automation or AI in their recruitment and hiring decisions. Using AI to perform a range of employment-related functions, including recruitment, has clear business benefits: the technology streamlines processes, making them more efficient and cost effective, and improves employee productivity by automating routine tasks. Recent rapid growth in employers' use of AI, not just in the U.S. but globally, shows that organizations are embracing those advantages.

However, employers need to balance the benefits of using AI against the potential legal and reputational risks. As Charlotte Burrows, chair of the U.S. Equal Employment Opportunity Commission (“EEOC”), the federal agency that enforces anti-discrimination laws in the U.S., commented recently, “AI and other algorithmic decision making tools offer potential for great advances, but they also may perpetuate or create discriminatory barriers, including in employment.” 

To be sure, AI, if used effectively, can improve decision making and help reduce the risk of bias in hiring and other decisions by eliminating subjective factors. But AI can also make ineffective or discriminatory decisions, such as by selecting poor-performing job candidates, or by favoring (perhaps inadvertently) candidates based on factors relating to race, gender, or other protected characteristics.

Legislators and regulators are starting to recognize the potential for AI to cause harm to employees. In some regions, governments are imposing restrictions on employers' use of AI, such as New York City's recent Automated Employment Decision Tools Law. More commonly, at least for now, regulators and individuals are relying on existing discrimination and data privacy laws to challenge AI-influenced decisions and to require greater transparency about their use.

How can employers build digital trust when using AI to make or influence employment decisions?

The most obvious way for employers to build digital trust is by ensuring that AI systems used in sensitive personal data scenarios like employment are highly secure, and that they assist in making decisions based on legitimate business criteria. Clearly, those systems need to avoid discriminating against job applicants or employees based on legally protected characteristics such as race, sex, age, or disability, and they also need to be effective in making or supporting successful choices. Both employers and applicants need to be able to trust the system being used.

Relying on output from AI systems to make employment decisions can lead to discrimination in several different ways. For example, using a recruitment tool that treats some candidates less favorably based on a protected characteristic, or that sets a quota ensuring that certain numbers of individuals with a protected characteristic will be selected by the tool, constitutes disparate treatment (under U.S. law) or direct discrimination (in the U.K. and Europe).

Risk of discrimination

A milestone settlement recently reached by the EEOC over AI discrimination in hiring highlights these risks. In that case, which settled for US$365,000, the EEOC alleged that a company programmed its AI-powered application software to automatically reject female applicants over the age of 55 and male applicants over the age of 60. However, even when systems are not designed to discriminate intentionally, bias can become embedded in AI systems in unintended ways.

For example, data used to train AI tools may not be statistically balanced, or may even reflect past discrimination, which can unintentionally lead an AI to favor or disfavor certain groups on the basis of a protected characteristic. Such imbalances can also lead to disparate impact (indirect discrimination in Europe) if outcomes put people who share a protected characteristic at a disadvantage, even though the AI tool does not specifically take that characteristic into account in its decision making.
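
To make the monitoring point concrete: under the EEOC's Uniform Guidelines on Employee Selection Procedures, a selection rate for one group that is less than four-fifths (80%) of the rate for the group with the highest rate is commonly treated as initial evidence of potential disparate impact. The following Python sketch shows that check on hypothetical logged outcomes; the group labels and figures are invented for illustration, and the four-fifths ratio is a screening heuristic, not a legal conclusion.

from collections import Counter

def selection_rates(outcomes):
    # Compute per-group selection rates from (group, selected) pairs,
    # e.g. decisions logged from an AI screening tool.
    applicants, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        applicants[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(rates):
    # Compare each group's rate to the most-selected group's rate.
    # A ratio below 0.8 fails the four-fifths screen.
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical outcomes: (group, was the candidate advanced?).
log = ([("group_a", True)] * 48 + [("group_a", False)] * 52
       + [("group_b", True)] * 30 + [("group_b", False)] * 70)

rates = selection_rates(log)
for group, ratio in impact_ratios(rates).items():
    flag = "review for disparate impact" if ratio < 0.8 else "within screen"
    print(f"{group}: rate {rates[group]:.0%}, ratio {ratio:.2f} ({flag})")

A failed screen does not itself establish discrimination; it signals that the tool's output, training data, and business justification warrant closer review.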

If an employer identifies unjustified disparate impact, it may be difficult, practically or legally, to adjust an AI system to remove that disadvantage. Where AI involves machine learning, it can be hard to identify why an algorithm or its training data is causing the relevant effect, which makes correcting it problematic. Additionally, an employer may face claims of discrimination if it tries to "fix" a potential disparate impact by reprogramming the tool to favor the disadvantaged group based on a protected characteristic, such as setting different "pass marks" to equalize successful male and female candidates, or establishing a quota based on protected class status.

Another potential risk arises if AI systems put employees with a disability at a disadvantage. For example, an AI-assisted interview that uses visual or verbal data to make hiring recommendations may disadvantage candidates with some types of disability. In that case, the employer may be under a duty to make a reasonable accommodation/reasonable adjustment to ensure that systems do not disadvantage candidates with disabilities in that way.

Speed of change

Above all, this is an area where governments and regulators are aware of the issues but struggling to keep up with advances in technology. Individuals are becoming increasingly alive to the potential risks of AI and the routes available to challenge decisions they disagree with. Over the next few years, the law will likely begin to catch up, so employers should monitor developments closely.

Ultimately, biased systems that are repeatedly and successfully challenged in court, or insecure systems that suffer data breaches and cyberattacks, will never be trusted. It is therefore vital to get all of these features right in any system deployed in the employment context.

Key recommendations

1. Algorithms

Ensure algorithms do not rely on discriminatory assumptions. This may require involving multiple individuals during the design phase to identify potential issues before they become baked in, and reviewing algorithmic training data to ensure it is representative (an illustrative representativeness check is sketched after these recommendations).

2. AI systems

Monitor the ongoing security of AI systems to avoid data breaches, potential manipulation, and other security issues. Also monitor the output of AI systems to check for potential disparate impact (the four-fifths screen sketched above is one illustrative check), and adjust systems as appropriate, taking care not to engage in disparate treatment by virtue of any such adjustments. Document the business reasons for using the relevant AI systems, and explain why alternatives would not achieve those objectives, to support a defense against any potential disparate impact claim.

3. Applicant and employee accommodations

Consider, possibly with input from accessibility experts, whether systems could put applicants or employees with disabilities at a disadvantage, and ensure that applicants and employees can request a reasonable accommodation in connection with employment or an application for employment.
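
As a companion to recommendation 1, the sketch below compares the composition of a training dataset against a reference population (for example, the relevant applicant pool) and flags groups that are substantially under-represented. The group labels, figures, and 20% tolerance are hypothetical illustrations; what counts as "representative" will depend on the tool, the role, and the applicable law.

from collections import Counter

def group_shares(labels):
    # Return each group's share of the given labels.
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def representation_gaps(training_labels, reference_labels, tolerance=0.2):
    # Flag groups whose share of the training data falls more than
    # `tolerance` (relative) below their share of a reference population.
    # The tolerance is an arbitrary illustration, not a legal standard.
    train = group_shares(training_labels)
    reference = group_shares(reference_labels)
    flags = {}
    for group, ref_share in reference.items():
        train_share = train.get(group, 0.0)
        if train_share < ref_share * (1 - tolerance):
            flags[group] = (train_share, ref_share)
    return flags

# Hypothetical labels: training data vs. the actual applicant pool.
training = ["group_a"] * 850 + ["group_b"] * 150
applicant_pool = ["group_a"] * 700 + ["group_b"] * 300

for group, (got, want) in representation_gaps(training, applicant_pool).items():
    print(f"{group}: {got:.0%} of training data vs {want:.0%} of applicant pool")

A flagged gap is a prompt for human review of how the data was collected, not an automatic instruction to rebalance it; as noted above, crude rebalancing by protected characteristic can itself create disparate treatment risk.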
