AI Standards: What HR Needs To Know About the New Executive Order
Over the past few years, the artificial intelligence (AI) revolution has been shaping and reshaping how companies work. From tools like ChatGPT to our very own Intelligent Talent Experience platform, AI can provide candidates with better application experiences, make employees and recruiters more productive, and give managers the tools they need to grow and retain their workforces.
But many are anxious about what giving too much power to AI can do — and for good reason. As a result, the White House has now entered the chat. On October 30th, 2023, the government issued a new Executive Order (EO) on safe, secure, and trustworthy AI.
But what does it mean for HR and for organizations already using AI-powered technology? Phenom’s Cliff Jurkiewicz, VP of Global Strategy, joined us on Talent Experience Live to discuss his views on the EO, its implications for AI ethics, and how it will affect the future.
This conversation is chock-full of insights, so read on for the highlights from the full episode.
What AI standards does this Executive Order outline for organizations?
The new EO operates on a federal level, distinguishing it from state-specific rules, such as New York Local Law 144 or California's upcoming AI Bill of Rights. Its primary focus is on federal agencies, making it mandatory for those working within the federal government.
That means adherence to these AI standards is voluntary for private corporations. However, the new standards “signal a more serious approach to addressing what is becoming a very big concern both in the government and private sector. Namely, how are we monitoring, regulating, defending, and explaining AI and its use across a multitude of agencies and different verticals of business?” explained Jurkiewicz. He believes the EO will heavily influence how AI laws are written in the future.
Additionally, the imminent passage of European Union (EU) AI laws within the next month or so will introduce a distinct regulatory landscape, differing substantially from the approach taken in the United States. This, Jurkiewicz said, will also have a huge impact on global business.
How does the Executive Order differ from the EU laws?
Jurkiewicz stresses that this EO is not only about AI standards, but it’s a conversation on how legislation gets written — and who’s influencing it. For example, there are very large corporations that have been involved in talks with the White House, and this is also influencing the EU’s AI regulations.
When it comes to the EU’s laws, however, “it’s a risk-based framework, and it looks at AI from four levels of risk.”
Level one, explained Jurkiewicz, is a low-level threat. Think AI in gaming. Gamers understand that AI helps create their games, that not everything is real, and that an AI-powered algorithm may be influencing what they see.
Level four, however, is where AI can become dangerous. Larger websites (think social media platforms, for example) are using AI to access and use personal data without consent. These sites and platforms are at risk of being completely cut off in the EU, because level four usage is banned under EU law.
As for Human Resources? “HR sits at risk level three,” said Jurkiewicz. Because HR uses biometrics and demographic information, these practices must be audited for bias and discrimination, as well as for where and how exactly AI is being used.
The crossover is that the EU’s laws will influence other laws around the world, and not all organizations are supportive.
Prominent businesses with a significant presence in the United States have exerted considerable influence to expedite the passage of AI legislation within the EU while simultaneously resisting similar measures in the US. Why?
The answer: to maintain a competitive advantage for U.S.-based companies by potentially excluding European operators of AI software. The regulatory landscape within the EU hinders the scaling and growth of software companies in the region, while the comparatively light regulation in the U.S. lets companies operate largely unbound by such laws.
“This is what’s different about Biden’s Executive Order, though,” said Jurkiewicz. “It squashes all of that and it says that the government has to act. What is impressive to me is how comprehensive this Executive Order is. It covers nearly everything I would want to see in terms of… putting the human in the best interest of AI, and not the other way around.”
No matter what side of the political spectrum you’re on, “you’re a human being and AI has the ability to discriminate against all of us in really meaningful ways. It also has the opportunity to benefit us in meaningful ways. We need to regulate that behavior and keep humans at the center.”
Do you anticipate local jurisdictions will follow in the same direction as the Executive Order?
Jurkiewicz’s short answer? “I hope not.” Why? “It will be impossible to operate a multi-state national business if you have 50 different AI laws.”
Jurkiewicz believes that New York Local Law 144 is trying to do something good by:
Keeping humans at the center of AI
Allowing people to opt out of using it
Making sure AI is explainable
Giving people control of their own data
However, it was born out of a lack of guidance from the federal government, and it still has a few hiccups.
Let’s compare New York Local Law 144 to California's upcoming AI Bill of Rights. Both laws focus on Automated Employment Decision-making Tools (AEDTs) — tools that allow AI to make hiring decisions in place of a human.
New York Local Law 144 only audits AEDTs, while California's AI Bill of Rights will audit both the AEDT and all other parts of the decision-making process that come with hiring a candidate.
“I don’t know any tool in HR today that says ‘you must hire this person over this person.’ It might suggest someone based on fit-score, but it ultimately lets the human decide at the end.” The issue with New York Local Law 144 is that if it’s only auditing AEDTs, there is a lot of leftover room for bias that isn’t being audited.
According to Jurkiewicz, the biggest challenge with local laws is that they’re going to serve local interests, and may overlook crucial considerations in the realm of recruiting.
Job seekers, for example, often believe their personal data is being illicitly used and sold to other companies, even though such claims are largely unfounded. Because of this, they may be hesitant to upload their resumes online, sharply reducing their chances of landing an interview or a job offer. Recruiters, on the other hand, face an overwhelming volume of resumes and rely on AI tools to sort them effectively.
Jurkiewicz’s hope is that the EO's AI standards will supersede local laws that are trying to regulate AI in this way and lead to the creation of specific rules tailored to diverse use cases — like recruiting.
“Any tool has the potential to be used in a negative way. However, the big difference is that there’s an audit trail with these tools,” Jurkiewicz said. “If it's only a human being looking at paper resumes, you can’t audit that.” Audit trails not only align with the preferences of the Equal Employment Opportunity Commission (EEOC) but also provide a means to uncover and address unconscious biases.
How does AI technology help reduce bias?
According to Jurkiewicz, the average human can identify 15 to 20 skills and competencies, while AI can identify somewhere between 40 and 60.
If you look at someone who is transitioning out of the military into civilian life, for example, they may not have the exact skills on their resume that a job description calls for. However, AI-powered technology like Phenom’s, which can understand the depth of skills and competencies, allows recruiting teams to pick up the myriad experiences and qualifications this person has that would translate directly to an open job.
At the same time, AI can recommend candidates as a good fit for not just one job, but six others, for example. Then, the conversation between candidate and recruiter becomes, “what are you most interested in? What do you as an employee value most?”
Because AI can identify work patterns better than humans can, it recognizes more skills, resulting in better-matched jobs. Essentially, it levels the playing field.
Related resource: Navigating AI Compliance
Does the Executive Order discuss keeping humans in the loop?
The EO emphasizes the importance of placing humans at the forefront of technological advancements and considers labor standards within the workplace. Its goal is to ensure that humans aren’t getting “technified” out of work, as Jurkiewicz put it.
Let’s look at an example. According to Jurkiewicz, Tesla can produce a car every 40 seconds, while for competitors like Ford or Chrysler it takes an average of 90 minutes. However, Ford and Chrysler workers are represented by the United Auto Workers (UAW) union, while Tesla’s are not. Tesla, therefore, doesn’t need to adhere to the same regulations — and this is why the EO is so important.
“If AI is replacing workers simply to produce something faster, is that really good for humanity?” asked Jurkiewicz. That’s what the EO aims to answer through its regulations. It’s considering the human. “You can’t just cut us out. Everything can’t be produced by AI. It can’t do everything by itself.”
Related conversation with EEOC commissioner Keith E. Sonderling: AI and HR: Keeping Humans at the Helm
How has the Executive Order addressed the rapid pace of technological advancements?
It’s nearly impossible to put toothpaste back into the tube, and it’s the same with AI. The rapidity with which AI is advancing cannot be reversed, which is why the EO is so important.
The EO addresses the rapid pace of technological advancement in AI by stressing the importance of AI research and of sharing its results openly, and by calling for open, fair, and competitive AI ecosystems.
Many larger companies say, “go ahead and regulate us. We want it. [But] they don’t really want it. What they are counting on is that, because the government [moves] so slow[ly], they will continue to operate in the [interest] of their shareholders.”
However, with over ten years of experience and 650 trusted customers, “a small company like Phenom values the experience that we’re able to offer and build with our clients and deliver to candidates and recruiters and employees, and that’s where our focus is.”
At the end of the day, said Jurkiewicz, you have to decide who you want to do business with, because innovation will happen regardless of regulation.
What should the HR space take away from this?
What’s most important is to work with organizations that are auditing regularly, said Jurkiewicz. And some of the most important individuals in the equation are the CHROs, CPOs, VPs of Talent Acquisition, and VPs of Talent Management. Why?
If you leave this decision to technologists, they’re usually looking for the biggest return for shareholders, which means choosing tools that may replace people. “It’s the first time in our history where these technologies have the potential to deeply impact human work. [CHROs] and the people in charge of the [employee] experience need to make these decisions with employees in mind...to scale, grow, and support a human inside of the work.”
To continue the conversation about the practical application of the new Executive Order and how these AI standards will impact future laws, feel free to reach out to Cliff on LinkedIn or dive deeper into our sessions from AI Day.
Maggie is a writer at Phenom, bringing you information on all things talent experience. In addition to writing, she enjoys traveling, painting, cooking, and spending time with her family and friends.