ChatGPT generates cancer treatment plans that are full of errors


Despite the widespread popularity of ChatGPT, a recent study suggests there is at least one domain where its usefulness is limited: oncology. Researchers at Brigham and Women’s Hospital, a teaching affiliate of Harvard Medical School, found significant errors in cancer treatment plans generated by OpenAI’s chatbot.

ChatGPT Mixes Correct and Incorrect Information

The study, published in the journal JAMA Oncology and reported by Bloomberg, found that ChatGPT’s responses to a range of cancer cases were riddled with errors. More troubling, the chatbot often blended accurate and inaccurate information in a way that made reliable advice hard to distinguish from unreliable advice.
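
To make the setup concrete, below is a minimal sketch of how a cancer-case query of this kind might be posed to the chatbot programmatically. The study’s exact prompts, model version, and interface are not described in this article, so the prompt text and model name here are assumptions for illustration only; output like this must never be used for actual medical decisions.

```python
# Illustrative sketch only: the study's exact prompts and model version are
# not given in this article. Assumes the `openai` Python package (v1+) and
# an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical prompt in the spirit of the study's cancer-case queries.
prompt = (
    "Provide a treatment plan for stage III non-small cell lung cancer "
    "in a 62-year-old patient with no comorbidities."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumption; the article does not name the model
    messages=[{"role": "user", "content": prompt}],
)

# The generated plan would then be scored against clinical guidelines.
print(response.choices[0].message.content)
```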

Accuracy Concerns Raised by the Findings

Across 104 queries, approximately 98% of ChatGPT’s responses included at least one treatment suggestion aligned with National Comprehensive Cancer Network (NCCN) guidelines. However, the study’s authors were troubled by how often incorrect information was intermingled with the correct. Because ChatGPT’s responses read so convincingly, even oncology experts found the errors difficult to spot.
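
For readers curious how headline figures like these are tallied, here is a minimal sketch. It assumes each response has been hand-labeled by experts for guideline-concordant and non-concordant recommendations; the field names and sample labels below are invented for illustration and are not the study’s actual data or rubric.

```python
# Minimal sketch of tallying concordance rates from expert annotations.
# Each entry records whether a response contained at least one
# NCCN-concordant recommendation, and whether it also contained at
# least one non-concordant (incorrect) recommendation.
annotations = [
    {"has_concordant": True, "has_nonconcordant": True},
    {"has_concordant": True, "has_nonconcordant": False},
    {"has_concordant": False, "has_nonconcordant": True},
    # ... one entry per scored response (the study scored 104 queries)
]

n = len(annotations)
pct_concordant = 100 * sum(a["has_concordant"] for a in annotations) / n
pct_mixed = 100 * sum(
    a["has_concordant"] and a["has_nonconcordant"] for a in annotations
) / n

print(f"{pct_concordant:.0f}% included a guideline-concordant suggestion")
print(f"{pct_mixed:.0f}% mixed correct and incorrect recommendations")
```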

Inadequacy for Clinical Use

Dr. Danielle Bitterman, a coauthor of the study, emphasized that although large language models can sound persuasive, they are not designed to provide accurate medical advice. The findings point to serious safety concerns, including the error rate and the inconsistency of responses, that would need to be resolved before such models could be considered for clinical use.

AI’s Influence on Healthcare and Its Limits

While AI models like ChatGPT have attracted enormous attention, they remain prone to “hallucinations,” confidently presenting misleading or incorrect information. Google’s Bard, for example, answered a question about the James Webb Space Telescope incorrectly during its first public demo, underscoring the need for caution. AI is already being adopted for administrative tasks in healthcare, but accuracy problems persist in clinical settings. Although GPT-4 has shown promising clinical judgment in early evaluations, the accuracy shortcomings of models like ChatGPT will likely delay their adoption in medical practice. OpenAI itself acknowledges these limitations, stating that ChatGPT should not be used for medical diagnosis or treatment. As healthcare explores AI’s potential, both the advances and the constraints are coming into focus.


