AI models may hallucinate less than humans in factual tasks, says Anthropic CEO: Report


At two prominent tech events, VivaTech 2025 in Paris and Anthropic’s Code With Claude developer day, Anthropic chief executive officer Dario Amodei made a provocative claim: artificial intelligence models may now hallucinate less frequently than humans in well-defined factual scenarios.

Speaking at both events, Amodei said recent internal tests showed that the company's latest Claude 3.5 model had outperformed humans on structured factual quizzes. The claim challenges a long-standing criticism of generative AI: that models often "hallucinate"…



