Excerpted from a Politico editorial by Grace Gedye and Matt Scherer
If you’ve looked for a job recently, AI may well have a hidden hand in the process. An AI recruiter might have scored your LinkedIn profile or resume. HR professionals might have used a different AI app to scrape your social media profiles and spit out scores for your cheerfulness and ability to work with others. They also might have used AI to analyze your word choice or facial expressions in a video interview, and used yet another app to run an AI background check on you.
It’s not at all clear how well these programs work. AI background check companies have been sued for making mistakes that cost people job opportunities and income. Some resume screening tools have been found to perpetuate bias. One resume screening program identified two factors as the best predictors of future job performance: having played high school lacrosse and being named Jared.
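To see how a screener can end up prizing lacrosse-playing Jareds, consider a minimal, purely illustrative Python sketch. Every feature, label, and number below is invented, and no real vendor's product is depicted; the point is only that a model trained to mimic past hiring ratings will faithfully encode whatever coincidences those ratings contain.

```python
# Hypothetical illustration: a resume screener trained on past hiring
# outcomes learns whatever patterns the historical data contains,
# including coincidences like a varsity sport or a first name.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features per applicant:
# [years_experience, played_hs_lacrosse, named_jared]
X = np.array([
    [2, 1, 1],
    [5, 1, 0],
    [6, 0, 0],
    [1, 1, 1],
    [3, 0, 0],
    [4, 0, 1],
])
# Invented labels: 1 = past hire later rated a "high performer".
# In this toy history, the highly rated hires just happened to be
# lacrosse players and Jareds.
y = np.array([1, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)
for name, weight in zip(["experience", "lacrosse", "jared"], model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
# The model can end up weighting "lacrosse" and "jared" above actual
# experience: it has no way to know they are coincidences, not
# qualifications.
```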
To consumer and worker advocates like us, this situation practically screams out for regulation. In the absence of federal action, state legislatures are picking up the slack, with a number of AI bills currently in the works.
When the main AI bias bills under consideration appeared in state capitols across the country earlier this year, we were disappointed to see they contained industry-friendly loopholes and weak enforcement. Job applicants, patients, renters and consumers would still have a hard time finding out if discriminatory or error-prone AI was used to help make life-altering decisions about them.
In recent months, legislators in at least 10 states have introduced a spate of closely related bills that address AI and algorithmic decision-making in a wide range of settings, including hiring, education, insurance, housing, lending, government services and criminal sentencing.
Some of these bills have a real chance of passing this year. Last week the Connecticut Senate passed a sweeping bill covering automated decision-making, along with many other AI-related topics. The chair of the California Assembly’s Privacy Committee is sponsoring a bill that would establish similar rules for AI decision-making, and the Colorado Senate’s majority leader is sponsoring yet another version.
At first glance, these bills seem to lay out a solid foundation of transparency requirements and bias testing for AI-driven decision systems. Unfortunately, all of the bills contain loopholes that would make it too easy for companies to avoid accountability.
With that in mind, here is what legislators working on AI bias and transparency bills should do to ensure their legislation actually helps workers and consumers:
- Tighten definitions of AI decision systems. If an AI program could alter the outcome of an employment decision or any other important decision about our lives, the law should cover it — full stop.
- Require transparency, early. Companies should tell us how AI systems make decisions and recommendations before a key decision is made.
- Require explanations. When an AI-driven decision denies someone a job, companies shouldn’t be allowed to rely on uninterpretable AI systems or keep us in the dark. A law should require explanations of how and why an AI decided the way it did (a sketch of what such an explanation could look like follows this list).
- Make enforcement strong enough that companies take the law seriously. Allow consumers and workers to bring their own lawsuits or, at a minimum, give enforcement agencies the resources to enforce the law vigorously and to impose stiff fines on violators.
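What might such an explanation look like in practice? Below is a hedged, self-contained Python sketch, with every weight and feature name invented for illustration, of the kind of per-decision breakdown a law could require from a simple linear scoring model. Real hiring systems are far more complex, but the principle is the same: list the factors that pushed a score up or down, in order of impact.

```python
# Hypothetical sketch of a per-decision explanation for a simple
# linear scorer. All feature names and weights are invented.

def explain(weights: dict[str, float], applicant: dict[str, float]) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, largest impact first."""
    contributions = {
        name: weight * applicant.get(name, 0.0)
        for name, weight in weights.items()
    }
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Invented model weights and applicant data.
weights = {"years_experience": 0.8, "employment_gap": -1.2, "keyword_match": 0.5}
applicant = {"years_experience": 4.0, "employment_gap": 1.0, "keyword_match": 3.0}

for feature, impact in explain(weights, applicant):
    print(f"{feature}: {impact:+.1f}")
# A rejected applicant would see which factors most affected their
# score, giving them something concrete to check and contest.
```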