Cliff Jurkiewicz | March 28, 2024
Topics: AI

More AI Laws Are Coming. Here’s How Businesses Can Prepare

If you know, you know.

If regulation or legislation that could materially affect your business is coming down the pike, you do the right thing: you prepare for it and run simulation models. That's what any major enterprise does when facing government intervention.

But artificial intelligence legislation is different. How does a CEO even begin to prepare for what amounts to a tangled patchwork of AI bills emerging in a number of states and countries this year? As of today, New York City is the only jurisdiction in the country with an active law regulating AI for human resources.

But that’s likely to change quickly in the months ahead. 

Six states, six laws

California, Colorado, Connecticut, Massachusetts, New Jersey and Rhode Island, as well as Washington, D.C., are charging ahead with their own AI legislation in the absence of federal action. Later this year, it's entirely possible that a half-dozen or more states and three major cities could have their own AI laws on the books.

While there are some commonalities among the regulations, the real challenge for businesses right now is not knowing what's going to happen. AI is drawing intense scrutiny, yet no standard has been codified, so it's a bit of the Wild West out there at the moment.

Meanwhile, the European Union’s AI Act just passed, but its scope is far broader than HR, and implementation doesn’t actually begin until next year.

That measure, combined with disparate state bills on this side of the Atlantic, is likely to breed uncertainty. That’s why I counsel C-suite decision makers to follow this advice: Don’t operate out of fear. Know what the landscape looks like and start building a process around it.

Wall Street needs to hear that message. So I traveled to Nasdaq’s headquarters in Times Square to talk about what companies can do to prepare for state AI laws. This is especially important for enterprises with business units or factories spread out across multiple states. 

Joining me were a pair of highly influential AI thought leaders: Frida Polli, a Harvard- and MIT-trained neuroscientist turned startup founder, and Keith Sonderling, a commissioner at the Equal Employment Opportunity Commission, the federal agency that enforces workplace anti-discrimination laws.

Watch the panel or keep reading for the highlights.

The Mysterious Black Box

Hiring bias is a topic that comes up from time to time with AI. I find that odd given that AI systems have a transparent audit trail, making it easy to see who is and who isn’t getting hired. 

The fact is, companies like mine build bias-detection tools into our software so we can understand in real time – not two months from now – what decisions human beings are making as they work with the technology. Bias detection becomes a major component, along with testing for adverse impact.
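To make “testing for adverse impact” concrete, here is a minimal sketch of the EEOC’s four-fifths rule, which flags any group whose selection rate falls below 80% of the highest group’s rate. This is an illustration only, not a description of any vendor’s actual tooling, and the data and function names are hypothetical.

```python
from collections import defaultdict

def adverse_impact_report(decisions, threshold=0.8):
    """Apply the four-fifths rule: flag any group whose selection rate
    falls below `threshold` (80%) of the highest group's rate.

    `decisions` is an iterable of (group, selected) pairs, where
    `selected` is True if the candidate advanced."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            advanced[group] += 1

    rates = {g: advanced[g] / totals[g] for g in totals}
    best = max(rates.values())
    # An impact ratio of 1.0 means parity with the best-performing group.
    return {g: {"rate": round(r, 3),
                "impact_ratio": round(r / best, 3),
                "flagged": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical numbers: 50 of 200 group-A candidates advance (25%),
# 20 of 150 group-B candidates advance (~13%). B's impact ratio is
# ~0.53, well under 0.8, so it is flagged for review.
sample = ([("A", True)] * 50 + [("A", False)] * 150
          + [("B", True)] * 20 + [("B", False)] * 130)
print(adverse_impact_report(sample))
```

A check like this only surfaces a disparity; the follow-up is a human review of the job requirements and the decisions behind the numbers.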

The bigger question the panel raised: why aren’t people more concerned with bias in hiring decisions made by humans?

“We put so much burden on the black box of algorithms, but we haven't done a great job of understanding how humans make employment decisions and if they’re injecting their own bias,” said Sonderling. 

In fact, the human brain “is the OG of black boxes,” added Polli. That’s an important point – our brains are far less transparent than algorithms. AI is also far better at personalizing job requirements and making them very specific, right down to the company, the location and the role.

It’s far easier to audit an AI system; auditing human behavior is an entirely different matter.
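As a rough illustration of what that audit trail can look like (a hypothetical sketch, not a description of any specific product), each AI-assisted decision can be written to an append-only log that a reviewer or regulator can query later:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in a hypothetical audit trail for AI-assisted screening."""
    candidate_id: str
    role: str
    model_version: str
    score: float
    recommendation: str   # e.g. "advance" or "review"
    human_override: bool  # True if a recruiter changed the outcome
    timestamp: str

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    # Append-only JSON Lines log; a production system would add
    # tamper-evident storage and access controls.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    candidate_id="c-1042",
    role="Field Technician",
    model_version="screening-2024.03",
    score=0.82,
    recommendation="advance",
    human_override=False,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

There is no equivalent record of the reasoning inside a hiring manager’s head, which is exactly the point the panel made.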

Washington and AI

Democrats and Republicans on Capitol Hill recently created a bipartisan AI Task Force. Its goal is to produce a report outlining AI regulatory priorities and guiding principles. Outside of the presidential executive order, this may be the closest Washington gets to federal action on AI in an election year.

That doesn’t mean there are no laws on the books to protect workers or companies developing AI tools, said Sonderling.

“That's the misconception I'm trying to change,” he said. “There are laws from the 1960s that apply to all these (hiring) decisions, whether it's made by technology or they’re made by humans.”

So employers still have to comply with the base of federal law, which is to protect workers, he added.

In fact, New York City regulators were quite aware of federal discrimination law when they created the city’s AI law, and they were careful not to contradict federal statutes, said Polli. “It makes federal law even more relevant in the sense that it essentially mandates disclosure of disparate impact or bias,” which is not something that federal law does, she added.

Sonderling also pointed out that city and state laws are adding requirements for employers to provide transparency so people know they’re being subjected to AI tools — something else federal law doesn’t require.

“There are certainly added benefits for employers who operate at a national level saying ‘New York is requiring us to do this,’” he said. “‘If we do this, we’re also going to be in further compliance with federal law as almost an added benefit.’”

Commonalities in legislative texts

This is where we are seeing similarities in states’ legislative language. There are core common elements among all of them around explainability, defensibility, auditability and configurability. If you can show compliance in those four areas, you are more likely to be in good standing with whatever laws are passed in the future.

How should businesses prepare themselves to be in compliance when that time comes?

Number one, know that the law is coming. Businesses with foresight are starting to put governance around AI, much as they did with information security. I don’t think this will be any different. Companies should bring their AI vendors into the governance discussion.

Number two, put a structure around AI governance with the right people. That gives you an opportunity to set standards and guiding principles for the organization that align with the current regulatory environment and can adapt as that environment evolves.

Final thought: don’t be scared. There are tools that make it possible to be compliant with regulations on the books. Lean into them. Because after all, if you know, you know.
