Meta apologizes after its AI chatbot said Trump shooting didn’t happen

Meta’s AI assistant incorrectly said that the recent attempted assassination of former President Donald Trump didn’t happen, an error a company executive is now attributing to the technology powering its chatbot and others.

In a company blog post published on Tuesday, Joel Kaplan, Meta’s global head of policy, calls its AI’s responses to questions about the shooting “unfortunate.” He says Meta AI was initially programmed not to respond to questions about the attempted assassination, but the company removed that restriction after people started noticing. He also acknowledges that “in a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen – which we are quickly working to address.”

“These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward,” continues Kaplan, who runs Meta’s lobbying efforts. “Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”

It’s not just Meta that is caught up here: Google on Tuesday also had to refute claims that its Search autocomplete feature was censoring results about the assassination attempt. “Here we go again, another attempt at RIGGING THE ELECTION!!!” Trump said in a post on Truth Social. “GO AFTER META AND GOOGLE.”

Since ChatGPT burst on the scene, the tech industry has been grappling with how to limit generative AI’s propensity for falsehoods. Some players, like Meta, have attempted to ground their chatbots with quality data and real-time search results as a way to compensate for hallucinations. But as this particular example shows, it’s still hard to overcome what large language models are inherently designed to do: make stuff up.




Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.


