Anthropic takes steps to prevent election misinformation

Ahead of the 2024 U.S. presidential election, Anthropic, the well-funded AI startup, is testing a technology to detect when users of its GenAI chatbot ask about political topics and redirect those users to “authoritative” sources of voting information.

Called Prompt Shield, the technology, which relies on a combination of AI detection models and rules, shows a pop-up if a U.S.-based user of Claude, Anthropic’s chatbot, asks for voting information. The pop-up offers to redirect the user to TurboVote, a resource from the nonpartisan organization Democracy Works, where they can find up-to-date, accurate voting information.
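Anthropic hasn't published how Prompt Shield is implemented, but the description above — a combination of rules and AI detection models that triggers a pop-up for U.S.-based users asking about voting — suggests a layered pipeline. The sketch below is purely illustrative: the keyword list, the `model_score` stand-in, and the pop-up payload are all assumptions, not Anthropic's actual code.

```python
# Hypothetical sketch of a "prompt shield"-style intervention, assuming a
# cheap rule pass backed by a classifier score, as the article describes.
VOTING_KEYWORDS = {"vote", "voting", "ballot", "polling place", "register to vote"}
REDIRECT_URL = "https://turbovote.org"  # the resource the pop-up points to


def rule_match(prompt: str) -> bool:
    """First-pass rule: does the prompt mention common voting terms?"""
    text = prompt.lower()
    return any(kw in text for kw in VOTING_KEYWORDS)


def model_score(prompt: str) -> float:
    """Stand-in for an ML classifier's 'election query' probability.
    A production system would call a trained detection model here."""
    return 0.9 if rule_match(prompt) else 0.1


def shield(prompt: str, user_region: str, threshold: float = 0.5):
    """Return a redirect pop-up payload for U.S. election queries,
    or None to let the chatbot answer normally."""
    if user_region != "US":
        return None
    if rule_match(prompt) or model_score(prompt) >= threshold:
        return {
            "type": "popup",
            "message": "For up-to-date voting information, visit TurboVote.",
            "url": REDIRECT_URL,
        }
    return None
```

In this framing, the rules catch obvious phrasings cheaply, while the model (here mocked) would generalize to queries the keyword list misses — consistent with the article's note that the real system is still being fine-tuned and doesn't yet fire on every voting question.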

Anthropic says that Prompt Shield was necessitated by Claude's shortcomings around political and election-related information. Claude isn't trained frequently enough to provide real-time information about specific elections, Anthropic acknowledges, and so is prone to hallucinating (i.e., inventing facts) about them.

“We’ve had ‘prompt shield’ in place since we launched Claude — it flags a number of different types of harms, based on our acceptable user policy,” a spokesperson told TechCrunch via email. “We’ll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations … We’ve spoken to a variety of stakeholders including policymakers, other companies, civil society and nongovernmental agencies and election-specific consultants [in developing this].”

It’s seemingly a limited test at the moment. Claude didn’t present the pop-up when I asked it about how to vote in the upcoming election, instead spitting out a generic voting guide. Anthropic claims that it’s fine-tuning Prompt Shield as it prepares to expand it to more users.

Anthropic, which prohibits the use of its tools in political campaigning and lobbying, is the latest GenAI vendor to implement policies and technologies to attempt to prevent election interference.

The timing's no coincidence. This year, more voters than ever in history will head to the polls globally: at least 64 countries, representing about 49% of the world's population, are set to hold national elections.

In January, OpenAI said that it would ban people from using ChatGPT, its viral AI-powered chatbot, to create bots that impersonate real candidates or governments, misrepresent how voting works or discourage people from voting. Like Anthropic, OpenAI currently doesn’t allow users to build apps using its tools for the purposes of political campaigning or lobbying — a policy which the company reiterated last month.

In a technical approach similar to Prompt Shield, OpenAI is also employing detection systems to steer ChatGPT users who ask logistical questions about voting to a nonpartisan website, CanIVote.org, maintained by the National Association of Secretaries of State.

In the U.S., Congress has yet to pass legislation regulating the AI industry's role in politics, despite some bipartisan support. Meanwhile, as federal action stalls, more than a third of U.S. states have passed or introduced bills to address deepfakes in political campaigns.

In lieu of legislation, some platforms — under pressure from watchdogs and regulators — are taking steps to stop GenAI from being abused to mislead or manipulate voters.

Google said last September that it would require that political ads using GenAI on YouTube and its other platforms, such as Google Search, be accompanied by a prominent disclosure if the imagery or sounds were synthetically altered. Meta has also barred political campaigns from using GenAI tools, including its own, in advertising across its properties.

