
Excerpted from a Fisher Phillips Blog by Michael Greco and Karen Odash
An unsuccessful job applicant is suing Sirius XM Radio in federal court, claiming the company’s AI-powered hiring tool discriminated against him based on his race. The lawsuit, Harper v. Sirius XM Radio, LLC, filed on August 4 in the Eastern District of Michigan, alleges that the company’s AI system relied on historical hiring data that perpetuated past biases – resulting in the plaintiff’s application being downgraded despite his qualifications.
The lawsuit accuses Sirius XM of violating federal anti-discrimination statutes through the way it uses AI tools in hiring – an allegation we’re seeing more and more of lately. Here are some best practices you should follow as a result.
10 actions for employers using AI in hiring
- Establish an AI Governance Program – Develop clear systems and guardrails for AI use before deployment, consistent with the NIST AI Risk Management Framework. Include a human oversight component, specify roles and responsibilities, and regularly evaluate outcomes to ensure your governance measures are working.
- Vet and Audit Your Vendors – Require vendors to document their bias testing, data sources, and accessibility features. Build in contractual assurances for nondiscrimination, data transparency, cooperation with audits, and indemnification. If you need guidance on the right way to approach this dynamic, this Insight reviews the right questions to ask at the outset of the relationship and along the way.
- Be Transparent with Candidates – Consider clearly communicating when and how AI tools are used in the hiring process. Transparency may soon be required in some jurisdictions and is emerging as a best practice nationwide.
- Offer and Publicize Accommodation Options – Consider whether you can offer applicants a path to request alternative assessments or human review. Options might include specialized equipment, alternative test formats, or modified interview conditions. This may not be possible in all circumstances or at every step of the application process, but document any accommodation pathway you offer and make it visible in job postings and application portals.
- Align AI-Driven Questions with Job Requirements – When providing questions or criteria to an AI tool, make sure they relate directly to the role’s essential functions. Avoid irrelevant prompts or stock materials pulled from unknown sources that could introduce bias.
- Retain Human Oversight in Decision-Making – Train HR and Talent Management teams to review and, when appropriate, override AI recommendations. Regularly audit hiring outcomes to ensure fairness and compliance with applicable laws.
- Document Your Process and Rationale – Maintain detailed records of how hiring decisions are made, including the objective criteria used and any adjustments to AI-driven scores. Avoid relying on vague or opaque “fit scores” that can’t be explained.
- Conduct Regular Accessibility Audits – Test your systems for compliance with disability accommodation requirements, and correct any accessibility gaps promptly.
- Monitor for Disparate Impact and Adjust – Run periodic analyses to identify disparities across protected classes (age, race, gender, disability, etc.). Treat significant disparities as red flags and take steps to mitigate them.
- Stay Informed on Legal Developments – Track new legislation, court rulings, and agency guidance that could affect your AI practices.
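The disparate-impact monitoring described above is often operationalized with the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a group’s selection rate below 80% of the most-selected group’s rate is commonly treated as evidence of adverse impact. A minimal Python sketch of such a periodic check follows; the function names and all numbers are hypothetical, and real audits should also apply statistical significance testing and legal review.

```python
# Hypothetical sketch of a periodic disparate-impact check using the
# four-fifths rule. All data below is illustrative only; it does not
# come from the lawsuit or from any real employer.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    (default 80%) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

# Illustrative numbers only
outcomes = {
    "group_a": (50, 100),  # 50% selection rate
    "group_b": (30, 100),  # 30% selection rate -> ratio 0.6, flagged
}
print(four_fifths_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A flagged ratio is a starting point for investigation, not a legal conclusion: the next steps are typically validating the selection criteria against job requirements and adjusting or retiring the tool if the disparity cannot be justified.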
For the full story, see the original Fisher Phillips blog post.