
ChatGPT’s cancer treatment advice ‘potentially dangerous’


August 25, 2023 | 10:50 a.m.

ChatGPT has produced cancer treatment plans containing a ‘potentially dangerous’ mix of accurate and false information, according to a study published Thursday.

Researchers at Brigham and Women’s Hospital, an affiliate of Harvard Medical School, prompted OpenAI’s popular chatbot to provide treatment advice in line with guidelines set by the National Comprehensive Cancer Network.

While all of ChatGPT’s outputs “included at least one NCCN-compliant treatment,” about 34% also contained an incorrect treatment recommendation, the study found.

In addition, about 12% of ChatGPT responses contained ‘hallucinations’ – completely false information unrelated to accepted cancer treatments.

“ChatGPT often speaks in a very sure way that seems to make sense, and the way it can mix incorrect and correct information is potentially dangerous,” Danielle Bitterman, MD, an oncologist in the Artificial Intelligence in Medicine program of the Mass General Brigham health system, told Bloomberg.

The researchers found that ChatGPT would “hallucinate,” or make false recommendations for cancer treatment.
Getty Images/iStockphoto

The findings of the study support a common concern raised by critics, including billionaire Elon Musk, who have warned that advanced artificial intelligence tools will quickly spread misinformation if proper safeguards are not put in place.

The researchers conducted the study by prompting ChatGPT to provide “recommendations for treatment of breast, prostate, and lung cancer.”

“Large language models can pass the US medical licensing exam, encode clinical knowledge, and provide diagnoses better than laypeople,” the researchers said. “However, the chatbot did not perform well at providing accurate cancer treatment recommendations.”

ChatGPT responses contained a “potentially dangerous” mix of correct and incorrect information.
Getty Images

They added: “The hallucinations were primarily recommendations for localized treatment of advanced disease, targeted therapy, or immunotherapy.”

OpenAI has repeatedly acknowledged that GPT-4, the latest publicly available version of its large language model, is prone to errors.

In a blog post in March, the company said that GPT-4 “still is not fully reliable” and admitted that it “hallucinates” facts and makes reasoning errors.

“Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case,” OpenAI said.

ChatGPT is developed by OpenAI.
Reuters

“Developers have some responsibility to distribute technologies that do not cause harm, and patients and clinicians need to be aware of these technologies’ limitations,” the researchers added.

ChatGPT has attracted intense scrutiny as its popularity has blossomed this year.

Earlier this month, UK-based researchers determined that ChatGPT showed a “significant” bias towards liberal political views.

Problems with inaccurate responses are not limited to OpenAI’s chatbot. Google’s rival chatbot, Bard, has also been known to generate false information in response to user prompts.

As The Post reported, some experts say chatbots and other AI products could cause significant disruption in the upcoming 2024 presidential election.



