Excerpted from a Fenwick Blog by Vejay Lalla, Kimberly Culp, Zach Harned and Kristen Rovai
The Federal Trade Commission (FTC) has issued a cautionary note to companies that employ artificial intelligence (AI) chatbots, signaling heightened scrutiny over their use. Through a series of blog posts in the past year, the FTC has delineated the boundaries of acceptable AI utilization, progressively adding to its guidance on the matter.
In a June Business Blog update, the FTC articulated five specific pitfalls that companies should avoid when integrating AI chatbots into their operations. Notably, the FTC has expressed concern over the potential for companies to exploit the relationships that AI chatbots may cultivate with consumers.
The Five Don’ts for AI Chatbots
1. Don’t misrepresent what the AI chatbot is, or what it can do.
The FTC has been clear in its directive to companies: be transparent about the nature of the tool users are interacting with. Specifically, the FTC advises “not to use automated tools to mislead people about what they’re seeing, hearing, or reading.” The FTC has outlined strict penalties for non-compliance, including substantial fines, mandatory refunds to consumers, and in some cases, a ban on future marketing of the products or services in question.
The FTC’s previous guidelines reinforce this prohibition. Companies must not make false or unsubstantiated claims about their AI tools generally or about their capabilities (or lack thereof). Additionally, they must not make deceptive claims, which the FTC characterizes as those that “lack scientific support” or “apply only to certain types of users or under certain conditions.”
2. Don’t offer these services without adequately mitigating risks of harmful output.
The FTC urges companies to thoroughly assess and mitigate the risks associated with AI chatbots. Such risk assessment and mitigation include taking steps to ensure that the AI chatbot does not generate harmful or offensive content, especially when children are anticipated to use the chatbot. Additionally, companies must have measures in place to promptly address any such content that does occur.
3. Don’t insert ads into a chat interface without clarifying that it’s paid content.
Further supporting its native advertising guidance, the FTC has highlighted the need for a clear demarcation between organic and sponsored content within AI-generated outputs. In a previous post, the FTC described the risk of “automation bias” in AI, where consumers “may be unduly trusting of answers from machines which may seem neutral or impartial.” In response, the FTC insists that companies be transparent when advertisements are presented through AI chatbots or when such chatbots are gathering data on the consumer for purposes that may not be related to the interaction at hand.
4. Don’t manipulate consumers based on their relationship with an avatar or bot.
The FTC has emphasized the importance of companies maintaining ethics and transparency when using AI avatars and chatbots to interact with consumers. The FTC’s guidance specifically warns against exploiting for commercial gain the relationships and trust that may develop between consumers and AI tools that provide “companionship, romance, therapy, or portals to dead loved ones.” This exploitation could manifest in various forms, such as using the familiarity and engagement created by chatbots to push sales, send targeted advertising, or collect data in a manner that consumers might not fully understand or to which they may not have explicitly consented, as discussed below.
5. Don’t violate consumer privacy rights.
Consumer privacy must be a paramount concern when deploying AI chatbots. Companies should not use the familiarity formed between consumers and AI chatbots to collect data in ways to which the consumer has not explicitly consented. The FTC has explicitly warned that surreptitiously adopting more permissive data policies, through which a company’s AI tools can gather data from their interactions with consumers, could be unfair or deceptive. The FTC will be vigilant in ensuring that companies respect and protect the personal information of their customers in accordance with privacy laws and regulations.