Oxford AI Researchers Say LLMs Pose Risk to Scientific Truth


In November, tech giant Meta unveiled a large language model called Galactica, designed to assist scientists. But instead of landing with the big bang Meta hoped for, Galactica died with a whimper after three days of intense criticism.

A year later, the reluctance to depend on these AI chatbots remains much the same – particularly in scientific research.

In a paper published recently in Nature Human Behaviour, AI researchers from the Oxford Internet Institute have raised concerns about the threat that LLMs pose to scientific integrity. Brent Mittelstadt, Chris Russell and Sandra Wachter argue that LLMs, such as those based on the GPT-3.5 architecture, are not infallible sources of truth and can produce what they term 'hallucinations': untruthful responses.

The authors propose a shift in how LLMs are utilized, recommending their use as 'zero-shot translators.' Instead of relying on LLMs as knowledge bases, users should provide the relevant information themselves and instruct the model to transform it into the desired output. This approach makes it easier to verify that the output is factually accurate and consistent with the provided input.
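
The paper does not prescribe a particular implementation, but the idea can be sketched roughly as follows. This is a minimal illustration of a 'translation prompt' in which all factual content comes from user-supplied text and the model is asked only to restructure it; the names build_translation_prompt, call_llm and source_facts are illustrative placeholders, not anything from the paper or a specific vendor API.

```python
# Minimal sketch of the 'zero-shot translator' pattern described above.
# All factual content comes from `source_facts`; the model is asked only
# to reformat it, not to supply knowledge of its own.

def build_translation_prompt(source_facts: str, target_format: str) -> str:
    """Embed user-supplied facts in a prompt that asks only for reformatting."""
    return (
        "Using ONLY the information provided below, rewrite it as "
        f"{target_format}. Do not add any facts that are not in the text.\n\n"
        f"Provided information:\n{source_facts}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for an actual chat-completion call to an LLM provider.
    raise NotImplementedError("Connect this to the LLM API of your choice.")

source_facts = (
    "Trial A enrolled 120 patients; 60 received the drug and 60 a placebo. "
    "Mean recovery time was 8.2 days in the treatment arm vs 11.5 days for placebo."
)

prompt = build_translation_prompt(source_facts, "a two-sentence plain-language summary")
# output = call_llm(prompt)
# Verification then becomes a comparison of `output` against `source_facts`,
# rather than a check of the model's internal 'knowledge'.
```

The design choice the authors emphasise is that verification shifts from fact-checking the model's recall to checking consistency between output and input, which is a far more tractable task for a human reviewer.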

The core issue, as outlined in the paper, lies in the nature of the data these models are trained on. Language models are designed to produce helpful and convincing responses, but carry no guarantees about their accuracy or alignment with fact. Because they are trained on vast datasets scraped from online content – which may include false statements, opinions and creative writing – LLMs absorb non-factual information.

Prof. Mittelstadt highlighted the risk of users trusting LLMs as reliable sources of information, much as they would human experts. Because the models are designed to sound human, users may be misled into accepting their responses as accurate even when they have no factual basis or present a biased version of the truth.

To safeguard scientific truth and education from the spread of inaccurate and biased information, the authors advocate setting clear expectations around the responsible use of LLMs. The paper suggests that users, especially in tasks where accuracy is vital, should provide translation prompts containing factual information.

Prof. Wachter emphasised the role of responsible LLM usage in the scientific community and the need for confidence in factual information. The authors caution against the potential serious harm that could result if LLMs are haphazardly employed in generating and circulating scientific articles.

Highlighting the need for careful consideration, Prof. Russell urges a step back from the opportunities LLMs present and a reflection on whether the technology should be handed certain tasks simply because it can perform them.

The post Oxford AI Researchers Say LLMs Pose Risk to Scientific Truth appeared first on Analytics India Magazine.


