In the midst of the Great Resignation, companies are scrambling to find the right people to fill jobs. Some of these companies are turning to AI and machine-learning algorithms to sort through applicants faster. While this method may meet leadership’s hiring timeline, it can also amplify implicit bias in the hiring process if proper safeguards are not put into place.
The paths of human resources, AI and job applicants intersect when HR departments offload the initial candidate screening to an AI system. These systems are typically trained to identify the best person for a particular role based on the candidate’s resume and any additional metadata the candidate provides. However, Harvard Business School research shows automated screening can reject qualified workers who don’t meet the machine’s programmed-in criteria, which are based on past applicants, who were often white, American and male.
But there's a growing awareness among senior leaders that AI programs are often biased: according to RELX findings, 64% of senior executives identified existing bias in AI technologies used by their company. It’s well established that the output of the AI algorithms we build can mirror human biases and lead to unfair outcomes. To ensure these hiring systems remain impartial, we must start by understanding where bias can enter at each step of the process in which AI technology is used.
Developing Fair Job Descriptions
Hiring starts well before the selection process. Ensuring that job descriptions in hiring posts are appropriate and inclusive is key. For example, if in describing your perfect candidate you write, “He is a motivated self-starter with at least eight years of experience,” you're automatically denying that opportunity to female candidates. While few cases are this obviously biased, implicit bias can be pervasive and far more difficult to spot.
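To make this concrete, here is a minimal sketch, in Python, of the kind of automated check a team could run over job postings before they go live. The word lists are illustrative placeholders, not a vetted lexicon; real tools rely on much larger, research-backed term sets.

```python
import re

# Illustrative (not exhaustive) lists of terms that can signal gendered or
# exclusionary phrasing in a job description.
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}
CODED_TERMS = {"rockstar", "ninja", "aggressive", "dominant"}

def flag_biased_language(description: str) -> list[str]:
    """Return a list of potentially biased terms found in a job description."""
    words = re.findall(r"[a-z']+", description.lower())
    flags = []
    for word in words:
        if word in GENDERED_PRONOUNS:
            flags.append(f"gendered pronoun: '{word}'")
        elif word in CODED_TERMS:
            flags.append(f"coded term: '{word}'")
    return flags

if __name__ == "__main__":
    posting = "He is a motivated self-starter with at least eight years of experience."
    for issue in flag_biased_language(posting):
        print(issue)  # -> gendered pronoun: 'he'
```

A check like this does not replace human review; it simply surfaces candidate phrases for a trained editor to reconsider.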
Properly trained humans can help spot potential AI bias within systems on a case-by-case basis, but it’s harder to identify in an algorithm unless you apply safeguard methodologies to the results before moving to the next stage in the hiring process.
Trained employees should be able to review existing AI programming and ask: why are certain candidates being rejected by this algorithm? Are there formatting or styling aspects of the resumes that could have influenced the selection process? As a rule of thumb, minimizing bias begins and ends with proper training, both to program effective AI technologies and to recognize implicit bias in existing systems.
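One safeguard of this kind is a simple perturbation test: feed the same resume content to the screening system in several cosmetic stylings and see whether its score moves. The sketch below is a rough illustration and assumes a hypothetical `score_resume` callable standing in for whatever scoring interface a given screening system actually exposes.

```python
# Minimal sketch of a formatting "perturbation test". If purely cosmetic
# changes move the score materially, formatting rather than qualifications
# is influencing the decision and warrants investigation.
def perturbation_test(score_resume, resume_text: str, tolerance: float = 0.05) -> bool:
    variants = {
        "original": resume_text,
        "uppercase_headings": resume_text.replace("Experience", "EXPERIENCE"),
        "dash_bullets": resume_text.replace("• ", "- "),
        "single_spaced": resume_text.replace("\n\n", "\n"),
    }
    scores = {name: score_resume(text) for name, text in variants.items()}
    spread = max(scores.values()) - min(scores.values())
    return spread <= tolerance  # True means the model looks insensitive to styling
```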
Analyzing the Candidate Pool
Technology enters the picture once again to help narrow the full pool of applications down to the most qualified candidates. Algorithms at this stage can automatically reject a significant proportion of candidates, so leadership will need to rigorously examine the rejection patterns. To combat biased automatic rejections, trained employees can encourage the use of 'blind applications' that remove factors that can lead to biased decisions, including race, nationality, gender and age.
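As a rough illustration, blinding can be as simple as stripping identifying fields from the application record before it reaches the scoring algorithm. The field names below are hypothetical; the right set depends on what data a given system actually collects.

```python
# Minimal sketch of "blinding" an application before automated screening.
SENSITIVE_FIELDS = {"name", "gender", "date_of_birth", "nationality", "photo_url"}

def blind_application(application: dict) -> dict:
    """Return a copy of the application with potentially identifying fields removed."""
    return {k: v for k, v in application.items() if k not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "female",
    "nationality": "Spanish",
    "years_experience": 9,
    "skills": ["Python", "SQL"],
}
print(blind_application(candidate))
# {'years_experience': 9, 'skills': ['Python', 'SQL']}
```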
While every AI system is different, the training set may leverage data from people within the organization who already fill a particular role, and candidates are then filtered against that baseline. The danger is that applicants are benchmarked against existing employees, which perpetuates organizational biases. Instead of benchmarking against current staff, technologists can program the AI to pre-filter candidates against the documented criteria of the job role. Even those criteria deserve scrutiny: discarding all candidates without a university degree may seem like a sensible way to shrink the pool, but in reality it can exclude perfectly viable candidates.
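A sketch of what criteria-based pre-filtering might look like follows. The thresholds and skills are illustrative assumptions, and a university degree is deliberately not treated as a hard requirement for the reason given above.

```python
# Minimal sketch of filtering on explicit, job-related criteria rather than
# similarity to current employees. The criteria below are illustrative.
ROLE_CRITERIA = {
    "min_years_experience": 5,
    "required_skills": {"python", "sql"},
}

def meets_role_criteria(candidate: dict) -> bool:
    """Keep a candidate if they satisfy the documented requirements of the role."""
    enough_experience = (
        candidate.get("years_experience", 0) >= ROLE_CRITERIA["min_years_experience"]
    )
    has_skills = ROLE_CRITERIA["required_skills"].issubset(
        {s.lower() for s in candidate.get("skills", [])}
    )
    # Note: no degree requirement; hard credential filters can exclude viable candidates.
    return enough_experience and has_skills
```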
Train Colleagues to Identify Bias
Bias must be addressed at every step of a candidate’s journey, and that includes an ongoing training process for the programmers who work on hiring AI. At LexisNexis Risk Solutions, we train anyone designing or using algorithms to identify and resolve potential biases and related issues. With this approach, we are continuously monitoring for ethical issues and preventing bias.
Technologists can help spot potential bias intrinsically, by examining how the AI system reaches its decisions, and extrinsically, by reviewing the output and validating not just which candidates were found viable but, more importantly, which were rejected. For example, even if the model doesn’t evaluate race directly, it may infer it indirectly from a person’s name.
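A common extrinsic check is to compare selection rates across demographic groups after screening. The sketch below applies the widely used "four-fifths rule" from adverse impact analysis; it assumes demographic data is gathered separately for auditing purposes only and is never fed into the screening model itself.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_selected) pairs collected after screening."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_detected(decisions: list[tuple[str, bool]], threshold: float = 0.8) -> bool:
    """Flag if any group's selection rate falls below 80% of the highest group's rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return any(rate / highest < threshold for rate in rates.values())
```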
Ethical AI is the foundation of fair hiring practices and should be seen as an enabler of business growth. The continuous work of training teams on ethical AI, maintaining AI-supported hiring programs, supervising the development of fair job descriptions and analyzing the candidate pool is worth companies’ time and effort.