At two prominent tech events, VivaTech 2025 in Paris and Anthropic’s Code With Claude developer day, Anthropic chief executive officer Dario Amodei made a provocative claim: artificial intelligence models may now hallucinate less frequently than humans in well-defined factual scenarios.
Amodei said recent internal tests showed that the company’s latest Claude 3.5 model had outperformed humans on structured factual quizzes. The claim challenges a long-standing criticism of generative AI: that models often “hallucinate,” confidently producing plausible-sounding but false information.