During a Congressional hearing on Tuesday, Federal Trade Commission (FTC) chair Lina Khan and other commissioners warned House representatives about the potential for modern AI technologies, including language models like ChatGPT, to turbocharge fraud.
The warning was given in response to an inquiry about how the FTC was working to protect Americans from unfair practices related to technological advances.
Khan agreed that AI presents new risks for the FTC to manage alongside its potential benefits. She acknowledged that AI brings a whole set of opportunities but also a whole set of risks, and said the technology could be used to turbocharge fraud and scams. The agency, she noted, has been warning market participants that AI tools designed to deceive people can put them on the hook for FTC action.
Calling AI-enabled fraud a serious concern, Khan said the FTC is embedding technologists across the agency’s work, on both the consumer protection and competition sides, to ensure that any issues with AI are properly identified and handled.
FTC commissioner Rebecca Slaughter echoed Khan’s remarks, explaining that the FTC has adapted to new technologies over the years and has the expertise to do so again to combat AI-powered fraud. While there is a lot of noise around AI right now, she said, the agency’s obligation is to do what it has always done: apply its tools to changing technologies, make sure it has the expertise to do so effectively, avoid being scared off by the idea that this is a revolutionary new technology, and dig right in on protecting people.