In the 2014 movie Ex Machina, a robot manipulates someone into releasing it from its confines, resulting in that person being confined instead. The robot was designed to manipulate that person's emotions, and, well, that's exactly what it did. While the scenario is pure speculative fiction, companies are always looking for new ways, such as the use of generative AI tools, to better persuade people and change their behavior. When that conduct is commercial in nature, we're in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers.
In previous blog posts, we've focused on AI-related deception, whether exaggerated and unsubstantiated claims about AI products or the use of generative AI to commit fraud. The design or use of a product can also violate the FTC Act if it is unfair, something we have shown in several cases and discussed in terms of AI tools with biased or discriminatory results. Under the FTC Act, a practice is unfair if it causes more harm than good. More specifically, a practice is unfair if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition.
As for the new wave of generative AI tools, companies are starting to use them in ways that can influence people's beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are built to persuade and can answer questions in confident language even when those answers are made up. The tendency to trust the output of these tools comes in part from "automation bias," whereby people may place undue trust in answers from machines that appear neutral or impartial. It also comes from the effect of anthropomorphism, which can lead people to trust chatbots more when they are designed, for example, to use personal pronouns and emojis. People can easily be led to think they are conversing with something that understands them and is on their side.
Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concerns about their malicious use extend well beyond the FTC's jurisdiction. But a key FTC concern is companies using them, deliberately or not, to steer people unfairly or deceptively into harmful decisions in areas such as finance, health, education, housing, and employment. Companies considering novel uses of generative AI, such as tailoring ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, including recent actions involving financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Such practices can be unlawful under the FTC Act even if not all customers are harmed and even if those harmed do not belong to a class of people protected by anti-discrimination laws.
Another way marketers could take advantage of these new tools and their ability to manipulate is by placing ads within a generative AI feature, just as they can place ads in search results. The FTC has repeatedly studied and provided guidance on how online ads, in search results and elsewhere, should be presented to avoid deception or unfairness. This includes recent work related to dark patterns and native advertising. Among other things, it should always be clear that an ad is an ad, and search results or any generative AI output should clearly distinguish between what is organic and what is paid. People should know if an AI product's response steers them to a particular website, service provider, or product because of a commercial relationship. And, of course, people should know whether they are communicating with a real person or a machine.
Given the many concerns about the use of new AI tools, now may not be the best time for companies building or deploying them to remove or lay off personnel devoted to ethics and responsibility in AI and engineering. If the FTC comes calling and you want us to believe that you adequately assessed risks and mitigated harms, those cuts may not be a good look. What would look better? We've provided guidance in previous blog posts and elsewhere. Among other things, your risk assessment and mitigation measures should account for foreseeable downstream uses and the need to train staff and contractors, as well as to monitor and address the actual use and impact of any tools eventually deployed.
If we haven't already made it clear, FTC staff are paying close attention to how companies choose to use AI technology, including new generative AI tools, in ways that can have a real and substantial impact on consumers. And for those interacting with a chatbot or other AI-generated content, heed Prince's warning from 1999: "It's cool to use the computer. Don't let the computer use you."
The FTC has more posts in its Artificial Intelligence and Your Business series: