Excerpted from a RetailWire Blog by Jon Houke
The Consumer Financial Protection Bureau (CFPB) has cautioned that businesses using “invasive” technology to evaluate or spy on workers could face legal trouble. The agency’s warning was prompted by the increasing use of artificial intelligence (AI) tools and algorithmic scores provided by third parties.
According to the CFPB, businesses that use AI tools to compile data and make decisions about current or potential employees must provide basic protections under the Fair Credit Reporting Act (FCRA). Under the FCRA, companies must obtain an employee’s consent before running a background check and must provide detailed information about any negative employment decision. Employees also have the right to dispute inaccurate information.
Yet various AI-powered tools that monitor and report on worker behavior could circumvent these protections. AI can quietly evaluate workers based on customer complaints and track their productivity. There are even tools that scan employees’ social media accounts, predict whether an employee is going to quit, or detect whether someone is organizing a union.
“Workers shouldn’t be subject to unchecked surveillance or have their careers determined by opaque third-party reports without basic protections,” said CFPB Director Rohit Chopra. “Our action today makes clear that longstanding consumer protections apply to these new domains just as they do to traditional credit reports.”
AI Background Checks
Running a background check before hiring someone is common practice. However, AI can expand searches beyond traditional background check databases and often surface sensitive information an employee may not be aware of. This information can lead to adverse employment decisions, such as unfavorable assignments, denied promotions, or not being hired at all. As a result, employees can lose job opportunities or face unwarranted penalties based on undisclosed data, which goes undisputed because the employee doesn’t know it exists.
Just as with any background check, the CFPB says negative information, regardless of how it’s obtained, must be disclosed to employees. The FCRA’s protections give employees control over their personal information and help prevent its abuse.
A company cannot unfairly discipline an employee over mistakes in AI-generated reports. To that end, the CFPB’s guidance also requires businesses to correct or delete unverified information.
Earlier this year, Walgreens settled a class action lawsuit that accused the drugstore chain of failing to comply with the FCRA. The complainants alleged that Walgreens decided not to hire them based on background checks and failed to properly notify them of the results.