August 25, 2021
Topics: AI

Why Your AI Tech Is Biased — And What You Can Do About It

Bias in HR technology powered by artificial intelligence (AI) is an important conversation. Organizations that use AI recruiting solutions have a duty to guard against introducing bias that can negatively impact diversity in hiring.

And that starts with a firm grasp of what bias actually means in relation to AI tech, and how we can guide AI-driven technology toward optimal hiring outcomes.

On our August 19 episode of Talent Experience Live, Tan Chen, AI Product Manager at Phenom, dove into the key aspects of how organizations can harness the power of AI to work for — rather than against — diverse hiring. Catch the full episode with Chen below, or read on for highlights!


What exactly is bias in recruitment technology?


From a data scientist’s perspective, “bias” simply refers to patterns in data. “In data science, we say that all data is biased,” Chen said.


Some bias is desirable. For example, you might want to train your algorithmic models to serve up job recommendations that match candidates’ location preferences, Chen said. “You want to be biased toward giving people jobs that are in their preferred locations. That’s a bias; you want to bias toward that. You want to satisfy people’s preferences. Your machine learning algorithms should … nudge closer to that.”
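
To make this concrete, here is a minimal sketch in Python of a recommender that intentionally biases toward a candidate’s preferred locations. The scoring function and field names are invented for illustration; they are not Phenom’s actual implementation.

    # Hypothetical scoring: skill overlap plus a deliberate boost for
    # jobs in the candidate's preferred locations (a "desirable" bias).
    def score_job(job, candidate, location_boost=0.3):
        base = len(job["skills"] & candidate["skills"]) / max(len(job["skills"]), 1)
        if job["location"] in candidate["preferred_locations"]:
            base += location_boost
        return base

    candidate = {"skills": {"sql", "python"}, "preferred_locations": {"Philadelphia"}}
    jobs = [
        {"title": "Data Analyst", "skills": {"sql", "python"}, "location": "Philadelphia"},
        {"title": "Data Analyst", "skills": {"sql", "python"}, "location": "Austin"},
    ]
    ranked = sorted(jobs, key=lambda j: score_job(j, candidate), reverse=True)
    print([(j["title"], j["location"]) for j in ranked])  # Philadelphia role ranks first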


What to watch out for: social bias


On the other hand, social bias is what gives the word its bad reputation in HR circles. Social bias refers to eliminating job candidates based on gender, race, age, disability, or other demographic attributes. AI algorithms may also end up repeatedly selecting candidates from the same universities or previous employers.


Related: The Definitive Guide to Diversity & Inclusion for HR


Because algorithmic models inform future screening and hiring decisions by finding patterns in historical data, there’s a risk the solution will continuously recommend candidates with skills and backgrounds similar to past successful hires. While this might not seem harmful at first, it can gradually reduce the diversity of your candidate pipeline.

The Amazon example

To illustrate how AI tech can introduce bias, Chen shared the well-known example of Amazon’s experience several years ago using machine learning (ML) to screen resumes.

Amazon noticed that the tool was recommending far more men than women for technical job roles. This was because the model was trained on Amazon’s historical data, which included far more male applicants and hires than female ones.

In essence, the technology learned to penalize resumes that included terms associated with female applicants, such as all-women’s colleges and clubs with the word “women’s” in the title.


AI illuminates bias — humans course-correct


Key to understanding AI’s role in elevating diverse hiring is realizing that the solution alone won’t eliminate bias. Rather, AI helps illuminate where bias is occurring. Then it’s up to humans to intervene and course-correct.


“Artificial intelligence doesn’t actually help you reduce bias. It actually amplifies the bias … and it’s then easier to find,” Chen clarified.

To prevent social bias from entering AI algorithms, organizations need to make sure there’s plenty of ongoing human involvement and oversight throughout the talent acquisition (TA) process.

“Generally, the term we use is ‘human in the loop.’ We try to make sure that there is somebody who’s watching it and checking for things. It could be as simple as every week, somebody looks at a dashboard and takes a look at how many males versus females are being hired in X category,” Chen said.
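
Chen’s weekly dashboard check is simple enough to sketch in a few lines of Python. The data shape and the 80% flag threshold below are assumptions for illustration, not a specific product’s behavior.

    # Hypothetical weekly check: count hires by gender per job category and
    # flag any category where one group exceeds a chosen share of hires.
    from collections import Counter

    def weekly_hiring_check(hires, flag_share=0.8):
        by_category = {}
        for category, gender in hires:
            by_category.setdefault(category, Counter())[gender] += 1
        for category, counts in by_category.items():
            total = sum(counts.values())
            for gender, n in counts.items():
                if n / total > flag_share:
                    print(f"Review {category}: {gender} = {n}/{total} hires ({n / total:.0%})")

    weekly_hiring_check([("Engineering", "male")] * 5 + [("Engineering", "female")])
    # Review Engineering: male = 5/6 hires (83%)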

On the front end, HR managers and recruiters will play a crucial role in analyzing the data for any existing biases before the process even starts.

Analyze data at the outset and monitor ongoing activity

Referring again to Amazon’s experience, Chen said, “What the AI community learned from that is that when you have historically biased data – if Amazon’s historical hiring was already biased and you used that to train a new model – it’s only going to amplify that. So now we know that a human in the loop … can analyze the data before the model is even trained to see if there’s any gender bias or social bias in there.”
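
That pre-training audit can also be sketched simply: before any model is trained, compare selection rates across groups in the historical data. The four-fifths comparison below is a common fairness heuristic offered as one possible check, not something the episode prescribes, and the field names are invented.

    # Hypothetical pre-training audit of historical hiring records.
    def selection_rates(records):
        totals, hires = {}, {}
        for r in records:
            g = r["gender"]
            totals[g] = totals.get(g, 0) + 1
            hires[g] = hires.get(g, 0) + (1 if r["hired"] else 0)
        return {g: hires[g] / totals[g] for g in totals}

    def audit(records, threshold=0.8):
        rates = selection_rates(records)
        best = max(rates.values())
        for group, rate in rates.items():
            if rate < threshold * best:  # four-fifths-style disparity check
                print(f"Possible bias: {group} selected at {rate:.0%} vs. best group at {best:.0%}")

    audit([
        {"gender": "male", "hired": True}, {"gender": "male", "hired": True},
        {"gender": "male", "hired": False},
        {"gender": "female", "hired": True}, {"gender": "female", "hired": False},
        {"gender": "female", "hired": False},
    ])
    # Possible bias: female selected at 33% vs. best group at 67%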

Someone will also need to regularly monitor the solution dashboard for indicators of negative social bias (e.g., selecting significantly more male candidates).

This responsibility doesn’t require highly technical skills, Chen added. Anyone who’s passionate about advocating for diversity can check for indicators of bias and alert the team.


Related: The Definitive Guide to Artificial Intelligence


When end users can identify bias


No system is going to be perfect, Chen emphasized. In addition to the team member designated to monitor results, recruiters and other end users should know to raise the issue with the respective product manager if they detect indicators of bias.

“A data scientist can now go back and research and figure out 'What happened there? Why was that?' And they can avoid those pitfalls,” Chen said. (And if your organization doesn’t have data scientists on staff, make sure you go with a vendor that can offer this support.)


Set realistic goals and monitor outcomes


It's critical to continuously check for evidence that the AI model is supporting organizational hiring goals, Chen said. In other words, be aware that AI isn’t a plug-and-play technology.


“When organizations think about AI as ‘set it and forget it’, that’s a big red flag. Because it’s really everybody’s responsibility to understand what it’s doing, how it’s doing it, and understand the social impacts of it.”

Make sure diversity hiring goals are realistic and data-backed, Chen later added. This will require analyzing data on population distribution for specific job locations.

For example, expecting a 50-50 split of male and female job candidates won’t always be realistic. “Organizations need to be aware: if you’re going to set these goals – great. But have some data behind it and figure out what would be a realistic goal.”
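
A quick back-of-the-envelope calculation shows what a data-backed goal might look like. The labor-pool numbers here are invented purely for illustration.

    # Hypothetical baseline: scale hiring targets by the local labor pool
    # for the role, rather than assuming an even split.
    labor_pool = {"female": 0.30, "male": 0.70}  # assumed share of qualified local workers
    open_roles = 40

    expected = {group: round(share * open_roles) for group, share in labor_pool.items()}
    print(expected)  # {'female': 12, 'male': 28}, a realistic baseline rather than 20/20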


Watch your language: AI is always learning


Because the algorithmic models that power the system’s AI capabilities are always learning, users need to carefully consider the content that’s feeding the system (e.g., job descriptions) and ensure the algorithms are trained to prevent bias.

It may not even be your organization introducing the terms. Chen offered the example of a long-time flight attendant who may still have the gender-skewing term “stewardess” on her resume. The algorithm needs to be trained to treat these job titles equally.
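
One common way to handle this, sketched below with an invented mapping, is to normalize gendered or outdated job titles to a canonical form before they ever reach the model. This is an illustrative assumption, not a description of any vendor’s actual pipeline.

    # Hypothetical title normalization applied before feature extraction.
    CANONICAL_TITLES = {
        "stewardess": "flight attendant",
        "steward": "flight attendant",
        "salesman": "sales representative",
        "saleswoman": "sales representative",
    }

    def normalize_title(title):
        t = title.strip().lower()
        return CANONICAL_TITLES.get(t, t)

    assert normalize_title("Stewardess") == normalize_title("Flight Attendant")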


Questions to ask AI tech vendors


The market is full of vendors riding the AI wave. How do you know you’re selecting a tool that will help deliver the outcomes you want?


It’s all about the data

Because an AI solution is only as good as the data it’s trained on, questions should zero in on the vendor’s data practices.

Vendors often consider their data sets to be proprietary information, Chen said. But you don’t need to get overly technical or ask for actual data points. Instead, ask:

  • What initial data set did the vendor use?
  • How did the vendor collect the data?
  • How did they train the algorithmic models?
  • How does the tool accurately match a candidate to an open job?
  • Does the system use a single algorithm or multiple algorithms? (A layered solution is better.)


Leveraging AI’s full potential depends on human oversight


Chen is optimistic that AI capabilities will only get better, especially as awareness grows regarding the tech’s limitations and the importance of human involvement in the process.

“I think in the past, people got too optimistic about technology and they overused it, or they used it incorrectly,” Chen said. “And there wasn’t really any oversight or any guardrails. The fact that we’re even having this discussion is a sign that we’re optimistic – but cautious.”


Blog: Leveraging Analytics To Help HR Make Stronger Data-Driven Decisions


Sign up to get notified about future episodes of Talent Experience Live! Catch us on LinkedIn, YouTube, Twitter, and Facebook every Thursday at noon ET to get the latest in recruiting, talent acquisition, talent management, and HR tech.
