
AI and the Future of Hiring

Does artificial intelligence help or hurt bias when it comes to hiring?

By Madeline Parisi

It stands to reason that using artificial intelligence (AI) in the hiring process would eliminate biases. After all, a machine doesn’t have the preconceived notions that humans have baked into their subconscious.

Or can it?

In 2023, the U.S. Equal Employment Opportunity Commission (EEOC) settled its first-ever AI-based hiring discrimination case with iTutorGroup Inc., a Chinese online education platform that allegedly programmed its recruitment software to screen out older job candidates.

Following this EEOC decision, providers of AI hiring tools and the employers using them may be looking more closely at whether the safeguards applied to AI technology help, not hinder, their recruitment efforts.

“It is unrealistic to think there won’t be bias in the data set. You can’t be ‘free from bias,’” says John Boyce, Head of People Development at AMSOIL, Inc. “The best you can do is get very transparent about the biases that are in the data set and try to understand the methodology used in sampling and building the algorithm.”

AI tools are not yet thinking and creating independently. An AI tool must be trained: on what criteria to search, what information to collect and how to formulate replies. Currently, two methods are used to train AI: data scraping, which draws on publicly available external data, and training on internal, historical data only.

“Artificial intelligence itself does not contain bias; however, the data it is trained on absolutely can contain bias and could cause adverse selection,” says Matthew Spencer, Co-Founder and Chief Executive Officer of Suited, Inc., an AI-based assessment provider that supplies law firms with supplementary data on candidates’ behavioral traits and relevant cognitive skills. “A key focus of any hiring tools using AI needs to be centered around training the models on well-defined and fully understood data sets.”

For example, Spencer says solutions built on large language models or other types of generative AI (two newer technologies) are often trained on massive data sets, the content and relationships of which are not fully understood. “The result can be outputs that are inconsistent, unexpected and unexplainable. Alternatively, tools that leverage machine learning technology (a much better understood form of AI) and are trained on defined data sets result in consistent, expected and explainable outcomes.”

You have probably heard about ChatGPT, a large language model, hallucinating. Hallucinations occur when an artificial intelligence system generates outputs that are not grounded in reality, are unexpected or do not correspond to the input data. They can arise from limitations in the training data, the model architecture or the optimization process, causing the AI to produce illogical results.

Concerned about the use of AI algorithms, the EEOC launched its Artificial Intelligence and Algorithmic Fairness Initiative in 2021 to ensure that automated employment decision tools (AEDTs) comply with federal civil rights laws. New York City also enacted Local Law 144, which prohibits employers and employment agencies from using an AEDT in the city unless they ensure a bias audit has been conducted and provide the required notices. The law was enacted in 2021, but enforcement began on July 5, 2023.

AI AND THE HIRING PROCESS

Boyce has not moved AMSOIL into AI for hiring because of concerns about transparency, and because he does not expect AI systems to be more ethical than the people who develop them. “Ethical decisions cannot simply adhere to a predefined series of rules and procedures. Novel situations will come up and I don’t trust AI to make the right ethical decision, especially with its inability to take responsibility for decisions. Humans may be less accountable saying, ‘It’s the algorithm, not me.’ That is a problem for me.”

Given the unique requirements of many roles and responsibilities in the legal community, novel situations are often the norm. Yaima Valdivia, Principal Software Engineer at Vercara, believes that by incorporating a wide variety of data that captures the full spectrum of potential candidates, including those considered outliers, AI may be trained to handle unusual cases.

Valdivia notes that diversity in training helps the AI learn to evaluate candidates fairly and without undue bias toward the majority. By including provisions for exceptions, the AI system may accommodate a broad range of candidates, ensuring that the system is inclusive. “Failing to account for anomalies can lead to a system that unfairly discriminates against candidates who do not fit within a narrow set of parameters, undermining the fairness and integrity of the hiring process.”

As a firm’s hiring process is assessed and evaluated, so too should the tools designed to assist in that process. AEDTs should not be exempt from preacquisition assessment or ongoing audits. In Suited’s audit process, for example, Spencer says every machine-learning-driven candidate-scoring model the company develops is examined by an independent auditor, and the audits are conducted in compliance with New York City Local Law 144 at least annually.

“The audits for law firms are conducted by scoring a pool of more than 14,000 law candidates who have completed Suited’s assessment as part of our clients’ hiring processes,” says Spencer. “Each candidate is scored using the individual AI model, and the scores are compared across demographic groups to determine if the algorithms are scoring candidates differently by demographic group, in accordance with the formula set out by NYC LL 144. We also test every model against the EEOC-prescribed formula for testing for adverse impact.”

Suited provides the audit reports to its clients to demonstrate that no adverse impact is occurring. The reports also give clients documentation for jurisdictions where laws governing the use of AI tools apply.
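
To make the arithmetic behind these checks concrete, here is a minimal sketch in Python of the two tests Spencer references: the impact-ratio comparison described in NYC LL 144 and the EEOC’s four-fifths rule for adverse impact. The data, group names and function names are hypothetical and the calculations are simplified for illustration; this is not Suited’s actual methodology.

```python
# Simplified illustration (hypothetical data, not Suited's methodology) of:
#  - the NYC LL 144 impact ratio for a scoring tool: each group's average
#    score divided by the highest group's average score, and
#  - the EEOC "four-fifths rule": adverse impact is indicated when a group's
#    selection rate falls below 80% of the highest group's selection rate.

from statistics import mean

# Hypothetical candidate scores (0-100) by demographic group.
scores = {
    "group_a": [82, 75, 90, 68, 77],
    "group_b": [70, 85, 66, 74, 79],
}

# Hypothetical selection outcomes (True = advanced in the process).
selected = {
    "group_a": [True, True, False, True, True],
    "group_b": [True, False, True, True, False],
}

def ll144_impact_ratios(scores_by_group):
    """Each group's average score relative to the highest group average."""
    averages = {g: mean(s) for g, s in scores_by_group.items()}
    top = max(averages.values())
    return {g: round(avg / top, 3) for g, avg in averages.items()}

def four_fifths_flags(selected_by_group, threshold=0.8):
    """Selection-rate ratio per group, plus a flag when it falls below 0.8."""
    rates = {g: sum(s) / len(s) for g, s in selected_by_group.items()}
    top = max(rates.values())
    return {g: (round(r / top, 3), r / top < threshold) for g, r in rates.items()}

print(ll144_impact_ratios(scores))   # {'group_a': 1.0, 'group_b': 0.954}
print(four_fifths_flags(selected))   # {'group_a': (1.0, False), 'group_b': (0.75, True)}
```

In a real audit these rates would be computed over a much larger pool, such as the 14,000-plus candidates Spencer describes, and reported for each demographic category required by the law.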

AI tools make the hiring process more efficient, but is the time saved worth a real or perceived loss of the communication and interpersonal connection a law firm requires? Attorney Heather Parker of the Parker Law Office, LLC, has not considered implementing AI. Parker’s main reason is that her practice area is very interpersonal, and she is concerned “there would be a lack of genuineness felt by the candidate or that the communication would not ‘sound like me.’ We each have a style and uniqueness about us, and I would not want that to be lacking in communication with someone who might ultimately deal with my clients and the rest of my team,” she says.

AMSOIL and Boyce are not rushing to utilize an AI tool in hiring, believing that the ethical issues are significant and not enough attention is being paid to them yet.

Given Boyce’s concerns, can we expect AI systems to be more ethical than the people who develop the systems or than the people who input the data?

“AI systems inherently reflect the values, biases and ethical considerations of those who develop and train them. While we can design AI to operate within predefined ethical guidelines, its ability to be ‘more ethical’ is limited by the scope of its programming and the data it’s trained on,” says Valdivia.

The moral responsibility of the AI tool provider is significant, according to Valdivia. It includes ensuring that the AI is developed with an awareness of potential biases and that efforts are made to mitigate these biases through diverse data sets and continuous monitoring. “Providers must also adhere to ethical guidelines prioritizing fairness, transparency and accountability in AI systems,” she says.

As part of the ethical considerations, Valdivia believes transparency is crucial: candidates should be advised when AI is involved in, or is making, hiring decisions. This transparency respects their right to understand how decisions about their candidacy are made; withholding this information denies candidates the opportunity to give informed consent and to challenge decisions they believe are unfair.

GOING FORWARD

AI-assisted hiring is only one piece of the larger conversation around the use of AI in organizations. So where is AI-assisted hiring going, and how do we prepare? Valdivia believes that as AI advances, organizations, developers and legal professionals must work together to ensure that AI tools are used to enhance fairness and inclusivity in hiring processes. “Ethical AI use in hiring benefits candidates and enriches organizations by promoting a diverse and capable workforce.”

Spencer also sees benefits for candidates and organizations. “When properly built and deployed, AI tools can have tremendous positive impacts that create more equitable and effective hiring processes. They can help firms make more accurate and less biased hiring decisions, resulting in increased performance, reduced attrition, greater diversity and improved efficiency.”

In the implementation and use of an AI tool, it is important to recognize that AI is your assistant in the hiring process and does not replace human intervention and interaction. As author Adam Grant indicates in his latest book Hidden Potential: The Science of Attaining Greater Things: “An algorithm is an input to human judgment – not a substitute for it.”

When used transparently, and when its data sets and outputs are regularly tested, AI can be a useful additional resource in the hiring process. Are you ready for AI to be your assistant in the hiring process?

Want more AI conversation? Tune in to Legal Management Talk, where we recently spoke with Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters, about AI software, the regulatory landscape and how AI will change the future of talent recruitment. Check out Part 1 and Part 2 to hear the full conversation.