Let me preface this by saying it has been my experience that, barring the obvious bad apples, most people are basically good and want to do the right thing. So in 2018, here in our comfortable Western (and litigious) society, let me submit that a hiring manager is unlikely to look at a resume and say to himself, “I don’t want a woman in this role.” And let me finally submit that this otherwise decent hiring manager might look at the same resume and think, enthusiastically, that this female or African American candidate “would be a great fit for another position,” one that happens to be lower level or less technical.

It is called unconscious bias, and it is the subject of growing interest in both academia and human resources departments. Battling this tendency is likewise a growing trend in HR software: many vendors have come to market with AI-driven products that promise to weed out unconscious bias from the hiring process. The question is, do they work? And if they do, will they usher in a brave new world of sorts in which hiring decisions are finally made strictly on merit and ability? Neither question is easy to answer.

Related Article: Why the Benefits of Artificial Intelligence Outweigh the Risks

Do We Want To Be Tricked Into Behaving Better?

There are a lot of companies coming out with products to help in the hiring process, and that seems to make sense, said Rumman Chowdhury, Responsible AI Lead at Accenture. “But here’s where I think people need to evolve a bit—some of these solutions are designed around ‘tricking’ us into behaving and thinking better. But for AI to have the positive effect that people want, instead of trying to trick you into having a more diverse candidate pool, what it really should do is nudge you to achieve that on your own.”

In other words, she said, AI can help us identify that bias exists but it should be used to nudge us into being better people instead of doing the work for us. That makes eminent sense save for the uncomfortable fact that “tricking” does seem to work in some instances.

One innovation in this area has been the blinding, or masking, of names and other identifying features on a resume. Hiring managers, at least in the initial round of selection, have been found to show less bias when they don’t know the gender or ethnicity of the applicant, said Calvin Lai, Assistant Professor of Psychological and Brain Sciences at Washington University in St. Louis. “That is one example of how bias can be eliminated from a portion of the hiring process.”
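In software terms, that masking can be as simple as redacting identifying fields before the first screening pass. The sketch below is purely illustrative; the field names and the list of fields to redact are assumptions made for the example, not any vendor’s actual schema.

```python
# Minimal sketch of resume blinding for a first-round review.
# The field names below are hypothetical, chosen only for illustration.

IDENTIFYING_FIELDS = {"name", "email", "photo_url", "gender", "date_of_birth"}

def blind_resume(resume: dict) -> dict:
    """Return a copy of the resume with identifying fields masked,
    so initial screeners see only job-relevant information."""
    return {
        field: ("[REDACTED]" if field in IDENTIFYING_FIELDS else value)
        for field, value in resume.items()
    }

applicant = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "gender": "female",
    "skills": ["Python", "SQL", "project management"],
    "years_experience": 7,
}

print(blind_resume(applicant))
# {'name': '[REDACTED]', 'email': '[REDACTED]', 'gender': '[REDACTED]',
#  'skills': ['Python', 'SQL', 'project management'], 'years_experience': 7}
```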

How Much AI Is Enough?

Such measures, while welcome in most quarters of the corporate world, are widely acknowledged to be insufficient to remove bias from the system completely. AI won’t be able to end bias in hiring and promotions because this is not a problem that can be addressed at a single point in time, said Bobbie Carlton, founder of Innovation Women. “There are so many factors that impact a person’s hiring profile leading up to the moment in time when the individual is being considered, either for a promotion or a new job. In order to reach that moment in time, they need the right education—which is further influenced by accidents of birth, upbringing, income, etc.—combinations of promotions, new jobs, bosses, mentors, etc. Even so much as the wrong word on a resume—or lack of a word—can pull you into the wrong pile, not the right pile,” said Carlton.

Also, biases tend to be subtle and, as the hiring manager imagined at the start of this article illustrates, outside of conscious awareness or control, Lai said. “It is very hard to change these implicit biases through some intentional intervention. The types of experiences that do tend to work are enduring ones, like having a roommate of another race,” he said. “There doesn’t seem to be a tractable way of moving the needle on cases of bias and discrimination.”

Related Article: 8 Examples of Artificial Intelligence (AI) in the Workplace

Unconscious Bias Can Be Baked Into Data Unknowingly

Indeed, even the software applications designed to identify the right candidate in the hiring process can become suspect when one digs into the algorithms behind the application or the data used to train those algorithms. For example, Lai said, when thinking about what matters for a particular job, you may decide that one of the criteria should be an interest in the same hobbies that people in the company already enjoy. “If it is a tech company where the majority of employees are men, then the hobbies that people like would be disproportionately endorsed by men instead of women, whites instead of nonwhites, etc. You can have these kinds of criteria that seem to be race and gender neutral, that aren’t about race and gender, but that might disproportionately advantage one group over the other,” Lai said.


Another example might be giving weight to unpaid summer internships. “Who are the people who can take an unpaid summer internship? People who are already pretty well off. By giving that more weight than we might otherwise, we might be unwittingly incorporating or baking in bias in an algorithm or procedure,” said Lai.
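Lai’s two examples translate directly into code. The toy scoring rubric below is invented purely for illustration (the features, weights and “company hobbies” are assumptions, not taken from any real screening product): neither criterion mentions race, gender or income, yet both can quietly favor candidates who resemble the people already in the building.

```python
# Toy candidate-scoring rubric showing how facially neutral criteria
# (hobby overlap, unpaid internships) can act as proxies for demographics.
# All features and weights here are invented for illustration only.

COMPANY_HOBBIES = {"rock climbing", "craft beer", "esports"}

def score_candidate(candidate: dict) -> float:
    hobby_overlap = len(COMPANY_HOBBIES & set(candidate["hobbies"]))
    score = 0.0
    score += 2.0 * candidate["years_experience"]
    score += 1.5 * hobby_overlap                    # "culture fit" -- tracks who already works here
    score += 3.0 * candidate["unpaid_internships"]  # rewards those who could afford to work for free
    return score

a = {"years_experience": 5, "hobbies": ["rock climbing", "esports"], "unpaid_internships": 2}
b = {"years_experience": 6, "hobbies": ["gardening"], "unpaid_internships": 0}

print(score_candidate(a), score_candidate(b))
# 19.0 vs 12.0: the candidate with less experience wins on "neutral" criteria
```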

Algorithms can also become biased through the data used to train them, Lai continued. For example, say you are training an algorithm on bodies of text, meaning what people write. If you choose a text database based on, let’s say, the website Reddit, which is used mainly by men, you will end up with an algorithm trained on something closer to how men talk in an online social media channel than if you had trained it on data from Tumblr, which leans toward women. “So sometimes the data the algorithm is trained on might lead you down a path that isn’t necessarily generalized or applicable to the people you are trying to service,” Lai said.
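Here is a rough sketch of that effect, using two tiny made-up text samples standing in for corpora drawn from different communities (the sentences are fabricated placeholders, not real Reddit or Tumblr data): a vocabulary built from one sample simply misses most of what appears in the other, and anything trained on it inherits that blind spot.

```python
from collections import Counter

# Two tiny, fabricated text samples standing in for corpora drawn from
# different online communities. Real corpora would be far larger, but the
# effect is the same: a vocabulary built from one misses much of the other.
corpus_a = "patch notes frame rate build guide meta patch build"
corpus_b = "writing prompt fan art moodboard playlist writing art"

vocab_a = Counter(corpus_a.split())

unseen = [word for word in corpus_b.split() if word not in vocab_a]
coverage = 1 - len(unseen) / len(corpus_b.split())

print(f"Words from corpus B unknown to a model trained on corpus A: {sorted(set(unseen))}")
print(f"Coverage of corpus B by corpus A's vocabulary: {coverage:.0%}")
```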

This is not to say there is “right” data and “wrong” data, according to Chowdhury. “Data are darkly objective. It’s like turning on the fluorescent light in the bathroom and you can then see all your wrinkles. Data are like that — it is 100% correct and it can be ugly.” So the data will show that black people don’t get promoted and women are paid less, which is objectively true in many cases. What is important to remember, according to Chowdhury, is that the data is not some magical objective truth but rather a reflection of the status quo.

Where AI Is Working

None of this is to say that it is pointless to use AI as a tool against bias. AI performs well when given enough good data and the right goals, said Karrie Sullivan, partner in the Culminate Strategy Group. “AI could be applied to learn and identify profile(s) with innate and learned characteristics and behaviors that perform well in a particular role—regardless of race or gender,” she said. “AI learning can also be directed to assume that there will be multiple types of profiles that will do well and—with enough data—it can be applied to optimizing teams, leadership profiles that work well with them, and even job listings that attract diverse candidate pools.”
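One way to read Sullivan’s suggestion in code is to fit a model on role performance using only job-relevant features, leaving protected attributes out of the inputs entirely. The sketch below uses scikit-learn and invented data purely for illustration; and as Lai’s examples above make clear, dropping the protected columns does not by itself eliminate proxy bias.

```python
from sklearn.linear_model import LogisticRegression

# Invented example data: one row per past hire, with job-relevant features
# (skills test score, structured-interview score) and the outcome we care
# about (performed well in the role). Protected attributes such as race and
# gender are deliberately absent from the feature matrix.
X = [
    [82, 7.5],
    [60, 5.0],
    [91, 8.0],
    [55, 6.5],
    [73, 7.0],
    [48, 4.5],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = performed well in the role

model = LogisticRegression().fit(X, y)

# Score a new candidate on the same job-relevant features only.
new_candidate = [[78, 7.2]]
print(f"Predicted probability of strong performance: {model.predict_proba(new_candidate)[0][1]:.2f}")
```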

Human Bias Is Adaptive

This sounds close to Chowdhury’s ideal of having HR software nudge rather than trick people into behaving correctly, and little wonder: it is possible today for AI to nudge people in the right direction, said Todd Maddox, a contributing analyst at Amalgam Insights. There are a lot of “ifs,” of course, he said: if you have the right data, if you’ve trained the algorithm correctly, if the data are represented and utilized in the appropriate manner. “If all these are in place then absolutely I think it can nudge people in the right direction,” he said.

That is because human bias is adaptive, he explained. “It is important to acknowledge that our beliefs are driven by our experiences. If I am in a large group of people I might look at a middle-aged heterosexual white man and say to myself, ‘I’m more like him than anyone else in the group.’ But what if I take a deeper look? I might realize that ‘oh wow, that young transgender kid, he really thinks that machine learning and AI is cool—and so do I. We have a lot to talk about.’”

The point of the example, according to Maddox, is that once we take a little bit of a deeper look at people, going beyond surface features, we start to see similarities and these biases go away. It’s adaptive. This is how software can nudge us in the right direction, Maddox said. “If I were a hiring manager and I had information on a candidate’s people skills or his ability to code or information about her leadership abilities that could definitely push me to make a better choice.”