There’s been a lot of discussion around the ethical concerns raised by A.I. and machine learning technology. Although the EEOC has already issued correspondence on some of these concerns, we are a long way off from organizations relying on A.I. technology, even partially, for full decision-making.
Considering only 7 percent of organizations are doing anything with A.I. or machine learning right now, these concerns are really more of a conversation point as companies continue to research the technology solutions available.
After attending HR Tech World San Francisco in June, I came away with a lot of excellent takeaways. One that really stood out was a presentation on A.I. and machine learning from John Sumser, Founder of HRExaminer, and Stacey Harris, VP of Research and Analytics at Sierra-Cedar. Below are some of the great questions they raised for anyone researching potential A.I. and machine learning vendors, along with my thoughts on each.
Can you use data without a control group? What are the risks of too little data (i.e., traceable validations)? Everyone talks about Big Data, and there’s a lot of truth to what becomes possible as more data is available: the more data an organization has, the more accurate its information, results, and analytics will be. With too little data, accurate predictions are harder to make because there are fewer data patterns to analyze.
If you have insight into what changes human behavior, is it ethical to use it? What’s the line between motivation and manipulation? Marketers have used insights and data for years to adjust messaging and influence buyer behavior, and it will be no different in the HR and talent acquisition space. Using insight into what changes human behavior to fine-tune recruitment and HR messaging, both externally and internally, will help organizations align themselves with their employees.
Can statistics actually be more reliable than human prediction? What is the difference between HR and LinkedIn data? It’s not a question of statistics versus the reliability of human prediction. A.I. and machine learning solutions exist to provide more unbiased, pattern-based predictions drawn from historical data and actions, all in an effort to make human predictions and decision-making more accurate the first time around. As for the difference between HR and LinkedIn data: one is collected internally through dialogue, performance reviews, and interactions; the other is an external source of information on a candidate or employee.
How do you disagree with the machine’s recommendations? How do you see bias? What are proper expectations? Do we get stupid when the computer thinks for us? These are all very relevant questions, and our CEO, Mahe Bayireddi, has discussed this topic extensively. Before implementing an A.I. or machine learning solution, you have to remove any unwanted bias prior to launch. Then you have to periodically test for and remove unwanted biases throughout the time you use the technology. There may come a time when A.I. is advanced enough to remove bias on its own, but that technology does not appear to be arriving in the near future.
How do you limit the data’s ability to influence the company? What role do trust and transparency play? Once again, data provides statistics and insight that support better decision-making. It’s like using a navigation system: you still keep your eyes on the road, because the driver is the one ultimately deciding where to turn. At the end of the day, the technology isn’t making the decision; the HR or talent acquisition professional is. Look at A.I. and machine learning technology as a helper that makes you a better business partner, not as a replacement.
These are excellent questions to consider as you decide whether to implement technology that uses A.I. or machine learning.