Mahe Bayireddi · November 15, 2022
Topics: AI

More Legislation Around AI in Hiring is Coming — We Should Welcome It

AI is all around us; it’s like oxygen. AI fuels many of our consumer and professional experiences every day. Whether it’s receiving package delivery notifications or using an app to avoid traffic, AI provides the tailored experiences we’ve come to expect in today’s digitized world.

AI works in a similar fashion in employment. Organizations deliver personalized experiences to job seekers with technology that recommends jobs based on information like their personal preferences, skills, experience, and location. Employees get this and more: based on their information, they can also receive tailored opportunities for career paths, learning and development, gigs, and mentoring. Building authentic connections with qualified candidates and making it easy to apply for jobs via a conversational chatbot, campaigns, and SMS all help organizations fill open roles faster.

Gone are the days when HR departments simply listed a thousand openings on a career site and hoped people would apply. Millennials and Gen Z, the largest generations in the U.S. labor force, drove the adoption of personalization in their everyday lives. Today, all generations expect personalization not only as consumers, but also as job seekers and employees.

AI has become an indispensable tool in talent experience, particularly for organizations that need to hire qualified candidates at scale — and retain them for the long haul. With millions more open roles than people to fill them, AI is the only way organizations can effectively scale.

Legislative efforts are underway at the state and local level in the United States to regulate the use of AI in the hiring process, expanding on the AI governance that has existed in Europe for years. In October 2022, the White House outlined the Blueprint for an AI Bill of Rights.

Recent legislative efforts in the U.S. build on decades of regulation at the state and federal levels that protect the rights of candidates and employees. As early as 1978, the Uniform Guidelines on Employee Selection Procedures discussed adverse impact, validity, and other relevant issues. AI brings new capabilities, speed, and scale, but other procedures and tools for decision-making in recruitment and hiring have existed for years. The new regulatory efforts address the new issues that arise with AI in the context of existing regulatory frameworks, such as clarifying that the “reasonable accommodation” requirements of the Americans with Disabilities Act apply to “algorithmic decision-making tools.”

Legislation does not prohibit the use of AI in hiring and retaining talent. Rather, it provides guidelines for how AI should be used.

Audits are a core element of AI legislation, and they have long been part of the business landscape in areas such as accounting, IT security, and federal health information privacy. Organizations of all sizes and across various industries must follow accounting and IT security standards; audits ensure that those standards are being met.

Both the standards and the audits are designed to provide transparency and accountability. The same is true for AI, which is why legislation and audits are both expected and welcomed — and demonstrate the technology’s maturity.

AI is inherently complex. Trust in the technology and its future potential needs to be built through education and transparency.

AI provides benefits for all stakeholders throughout the talent journey, including candidates, employees, and employers. It increases overall job matching relevance, boosts productivity by automating many HR functions, and saves time for richer human-to-human interactions.

Driving Automation, Hyper-Personalization, and Inclusivity


At Phenom, AI powers our Intelligent Talent Experience platform to deliver automation, hyper-personalization, and amazing experiences at scale. Our AI is also configurable and explainable to help build transparency and trust — which is critical for inclusive-minded organizations. Here are a couple of ways this happens.

For candidates, AI helps by empowering them to find job opportunities faster and more easily. With AI, candidates no longer need exactly the right keywords on their resumes, or the right schools or prior employers. Instead, AI delivers the most relevant job openings, either at the moment the candidate is browsing an employer’s career site or at a future date, when a new opportunity becomes available. This is the promise of inclusive AI: access to opportunity.


Related: Why AI is the Differentiator in Today’s Experience Market


For recruiters, AI-based automation brings efficiency and objectivity. With AI, a hiring manager’s requirements can be applied to hundreds of resumes, and past applicants can be sourced for new opportunities. Humans are always in control of the technology, thanks to configurability and easy data access. Recruiters can sort and search through resumes (e.g., candidates that satisfy all requirements, resumes that mention relevant experience), while auditors can verify that the AI meets requirements for validity and adverse impact.

Putting Guardrails on AI


We know that bias can arise in recruiting, whether it comes from humans or technology. Bias in human decision-making is what initially gave rise to regulations on employment decisions. The business value of AI is in intelligent automation, which works by discovering and replicating successful patterns in data. However, bad data in means bad data out: AI may discover patterns from biased human decisions and then replicate those patterns, propagating human bias at scale.

Ethical AI is a business and moral imperative at Phenom, which is why we established our Governance Policy for AI Technologies. The policy sets guardrails for AI through human-in-the-loop control, adverse impact analysis, validity, and auditor support, all of which are informed by ongoing platform innovation and the development of new legislation. Here’s how we’re making this possible.

Human-in-the-loop

The keystone of trust in our AI architecture is that AI doesn’t make decisions. Only humans do. AI provides the data to help humans make more informed decisions. This is why human-in-the-loop will continue to be a critical part of our AI technology at Phenom.

The combination of human control with AI support is a good counter against bias in two ways. First, one cause of bias in human decision-making is that people often look for shortcuts to solving problems, like hiring candidates from Ivy League schools rather than investing time and effort to source and evaluate candidates from non-traditional backgrounds. AI cannot prevent a recruiter or hiring manager from taking shortcuts, but it can make shortcuts less necessary by surfacing relevant resumes that might otherwise be lost in the pile.

Second, because technology makes the predictions, we can also use technology to verify those predictions and to check them for bias.


Related: Hiring with AI: Get it Right from the Start


Adverse impact analysis

Adverse impact analysis is a kind of bottom-line measurement that allows us to identify whether a tool (such as AI) provides equivalent experiences to different demographic groups. Although this analysis requires experts to manually work through data, basic monitoring for signs of adverse impact can be automated, which can point the experts in the right direction. This process is not without its complexities, as it requires agreement on a standard set of metrics, clean metadata, and access to individuals’ demographics. These challenges can be overcome, and we continue to enhance Phenom’s tools for adverse impact analysis.
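As a rough illustration of what such automated monitoring could look like (a minimal sketch, not Phenom’s actual tooling), the four-fifths rule from the Uniform Guidelines compares each group’s selection rate against the highest-rate group and flags ratios below 0.8 for expert review. The group names and counts below are hypothetical.

```python
# Minimal sketch of automated adverse-impact monitoring using the
# four-fifths (80%) rule from the Uniform Guidelines. Group names and
# counts are hypothetical; a flag is a prompt for expert review,
# not a conclusion.

def impact_ratios(groups: dict) -> dict:
    """Selection rate of each group divided by the highest group's rate."""
    rates = {g: selected / total for g, (selected, total) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical counts: (number selected, number of applicants) per group.
groups = {"group_a": (48, 120), "group_b": (30, 100)}

for group, ratio in impact_ratios(groups).items():
    status = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

Here group_b’s selection rate (30%) is 0.75 of group_a’s (40%), so it falls below the four-fifths threshold and would be routed to experts for deeper analysis.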

Validity of AI

Validity is another guardrail: it means the AI is making the right recommendations for the right reasons. For example, the AI should only use appropriate inputs, like a candidate’s skills and degree, not whether they went to an Ivy League school or used Times New Roman on their resume. Checking that the recommendations are right involves complexity and expense, and Phenom’s tools continue to evolve to help in this area as well.
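One hedged illustration of an input-side validity check: constrain the model to an explicit allowlist of job-related fields, so that proxy inputs like school prestige never reach the recommendation logic. The field names below are hypothetical, not Phenom’s actual schema.

```python
# Illustrative sketch of an input-validity guardrail: the model only
# ever receives an explicit allowlist of job-related fields.
# Field names are hypothetical, not Phenom's actual schema.

ALLOWED_INPUTS = {"skills", "degree_level", "years_experience", "location"}

def validate_inputs(candidate: dict) -> dict:
    """Reject any candidate record carrying non-job-related fields."""
    disallowed = set(candidate) - ALLOWED_INPUTS
    if disallowed:
        raise ValueError(f"disallowed model inputs: {sorted(disallowed)}")
    return candidate

# A proxy field like school name never reaches the recommendation logic:
try:
    validate_inputs({"skills": ["sql", "python"], "school_name": "Ivy U"})
except ValueError as err:
    print(err)  # disallowed model inputs: ['school_name']
```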

Customer audit requirements

The last guardrail is educating and supporting auditors and customers. Auditors may be familiar with adverse impact and validity, but new to applying those concepts to AI. To help bridge the gap, our team provides resources and guidance to help auditors and customers better understand which standards and regulations apply to different Phenom technologies.

Transparency and Audits: Cornerstones of AI Legislation

AI legislation is a signal of the technology’s maturity and its widespread benefits. Legislation, standards, and guidelines are the next logical step to ensure organizations are using the technology ethically. Audits and keeping humans in the loop are essential.

To best support more than 500 global customers, half of which have operations in Europe, we’re ensuring that our platform is engineered to operate responsibly, and that our AI is ethical, defensible, auditable, and explainable. The depth of our data, combined with our explainable approach to how AI works, enables us to help companies navigate increasing global legislation around AI in HR tech while guaranteeing phenomenal experiences throughout the talent journey.

For more information about Phenom AI, click here.

*The information provided on this website does not, and is not intended to, constitute legal advice. All information, content, and materials available on this site are for general informational purposes only.
