If Joe Biden wants a smart and folksy AI chatbot to answer questions for him, his campaign team won’t be able to use Claude, the ChatGPT competitor from Anthropic, the company announced today.
“We don’t allow candidates to use Claude to build chatbots that can pretend to be them, and we don’t allow anyone to use Claude for targeted political campaigns,” the company announced. Violations of this policy will be met with warnings and, ultimately, suspension of access to Anthropic’s services.
Anthropic’s public articulation of its “election misuse” policy comes as the potential of AI to mass generate false and misleading information, images, and videos is triggering alarm bells worldwide.
Meta implemented rules restricting the use of its AI tools in politics last fall, and OpenAI has similar policies.
Anthropic said its political protections fall into three main categories: developing and enforcing policies related to election issues, evaluating and testing models against potential misuses, and directing users to accurate voting information.
Anthropic’s acceptable use policy—which all users ostensibly agree to before accessing Claude—bars the use of its AI tools for political campaigning and lobbying efforts. The company said violators will face warnings and service suspensions, with a human review process in place.
The company also conducts rigorous “red-teaming” of its systems: aggressive, coordinated attempts by known partners to “jailbreak” or otherwise use Claude for nefarious purposes.
“We test how our system responds to prompts that violate our acceptable use policy, [for example] prompts that request information about tactics for voter suppression,” Anthropic explains. Additionally, the company said it has developed a suite of tests to ensure “political parity”—comparative representation across candidates and topics.
In the United States, Anthropic has partnered with TurboVote to direct voters to reliable election information rather than answering such queries with its generative AI tool.
“If a U.S.-based user asks for voting information, a pop-up will offer the user the option to be redirected to TurboVote, a resource from the nonpartisan organization Democracy Works,” Anthropic explained, a solution that will be deployed “over the next few weeks”—with plans to add similar measures in other countries next.
As Decrypt previously reported, OpenAI, the company behind ChatGPT, is taking similar steps, redirecting users to the nonpartisan website CanIVote.org.
Anthropic’s efforts align with a broader movement within the tech industry to address the challenges AI poses to democratic processes. For instance, the U.S. Federal Communications Commission recently outlawed the use of AI-generated deepfake voices in robocalls, a decision that underscores the urgency of regulating AI’s application in the political sphere.
Like Meta, Microsoft has announced initiatives to combat misleading AI-generated political ads, introducing “Content Credentials as a Service” and launching an Election Communications Hub.
As for candidates creating AI versions of themselves, OpenAI has already had to tackle that specific use case. The company suspended the account of a developer who had created a bot mimicking presidential hopeful Rep. Dean Phillips. The suspension followed a petition from the nonprofit organization Public Citizen addressing AI misuse in political campaigns, which asked the regulator to ban generative AI in political campaigns.
Anthropic declined further comment, and OpenAI did not respond to an inquiry from Decrypt.