Excerpted from a Schulte Roth & Zabel LLP Blog by Mark Brossman, Ronald Richman, Max Garfield, Scott Gold, Donna Lazarus and Ayumi Berstein
The Equal Employment Opportunity Commission (EEOC) recently issued a technical assistance document regarding the use of artificial intelligence (AI) tools in employment decisions, with a focus on the disability discrimination claims that may arise as a result.
In the employment context, AI means that the employer relies, at least in part, on the computer’s own analysis of data to determine which criteria to use when making employment decisions. The new technical assistance document provides examples of AI tools, including “resume scanners that prioritize applications using keywords; employee monitoring software that rates employees on the basis of keystrokes; ‘virtual assistants’ or ‘chatbots’ that ask job candidates about qualifications and reject those who do not meet pre-defined requirements; video interviewing software that evaluates candidates based on facial expressions and speech patterns; and software that provides scores for applicants regarding their personalities, aptitudes and cognitive skills.”
The EEOC identified three common ways an employer’s use of AI could violate the Americans with Disabilities Act (ADA):
1. Failing to provide reasonable accommodations;
2. Relying on AI tools that improperly “screen out” individuals with disabilities; and
3. Adopting AI tools that pose disability-related inquiries or seek information that qualifies as a medical exam.
The EEOC noted that an employer is responsible for its use of AI tools, including tools designed and administered by another entity such as a software vendor.
If an applicant communicates that a medical condition may make it difficult to take a test, or may cause an assessment result to be less acceptable to the employer, the employer must respond and provide an alternative testing format unless doing so would create an undue hardship. Employers must keep medical information obtained in connection with a reasonable accommodation request confidential and separate from personnel files.
An AI tool may unlawfully screen out applicants with disabilities if the disability causes a lower score or an assessment result that is less acceptable to the employer, and the applicant loses a job opportunity despite being able to perform the job with reasonable accommodations.
For example, an AI tool that analyzes an applicant’s speech patterns may improperly screen out applicants with speech impediments. Even an AI tool that has been “validated” to predict whether applicants can perform a job may unlawfully screen out applicants with disabilities who could also perform the job with reasonable accommodations.
The EEOC cautioned that employers should not rely on claims that AI tools are “bias-free.” Employers can reduce the chances of improper “screen outs” by:
1. Inquiring whether a tool was developed with applicants with disabilities in mind; and
2. Clearly indicating to applicants that alternative test formats are available and providing clear instructions on requesting reasonable accommodations.
An employer may violate the ADA if it uses an AI tool that poses disability-related inquiries or seeks information that qualifies as a medical exam before giving an applicant an offer of employment. An assessment qualifies as a medical exam if it seeks information about the individual’s physical or mental impairments or health.
New York City employers should also be aware of a new law going into effect Jan. 1, 2023, that prohibits employers from using automated employment decision tools to screen job candidates or employees for promotion unless certain criteria have been met.
Conclusion
Employers should be cognizant of how their use of AI in hiring may be interpreted as disability discrimination and should respond promptly to any discrimination-related issues or claims.