Detecting AI-generated text becomes challenging, raises concerns


Distinguishing text written by humans from text generated by artificial intelligence (AI) is becoming increasingly difficult as generative AI programs become widely available. With tools such as OpenAI’s ChatGPT for text and Midjourney for images open to the public, AI-generated content that closely resembles human work is proliferating.

Study Finds Limitations and Biases in AI Detectors

In a study published in the journal Patterns, researchers from Stanford University examined how reliably generative AI detectors distinguish human-written text from AI-generated text. Surprisingly, popular GPT detectors, which are designed to flag text produced by apps like ChatGPT, consistently misclassified writing by non-native English speakers as AI-generated, exposing limitations and biases that users need to be aware of.

Factors Affecting Misclassification and Bias

The study found that the detectors flagged 18 out of 91 TOEFL essays as AI-generated. Further analysis pointed to low “text perplexity” as the likely cause. Perplexity measures how unpredictable a text’s word choices appear to a language model, and detectors treat low-perplexity, highly predictable writing as a hallmark of AI generation. Because non-native English writers tend to draw on a smaller vocabulary and use simpler grammar, their prose often scores low on perplexity and is misread as machine-written. This bias has real consequences for non-native English speakers, potentially affecting outcomes in areas such as job hiring or school exams.
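
To make the mechanism concrete, here is a minimal sketch of how a perplexity-based detector operates, assuming the Hugging Face transformers library and the public GPT-2 model; the threshold is purely illustrative and not taken from any detector in the study.

```python
# A minimal perplexity-based detector sketch. Assumes the Hugging Face
# transformers library and the public GPT-2 model; the threshold is
# illustrative, not a value used by any detector in the study.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of the text under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to the inputs, the model's loss is the mean
        # negative log-likelihood per token.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Low perplexity means predictable wording, which this style of
    # detector treats as machine-like; a smaller vocabulary and simpler
    # grammar push a score down in exactly the same way.
    return perplexity(text) < threshold
```

Nothing in this sketch knows who wrote the text; it only knows how predictable the text is, which is why fluent but formulaic human writing can be flagged.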

ChatGPT’s Ability to Fool Detection Software

In a reversal of their initial experiment, the researchers used ChatGPT to generate responses to US college admission essay prompts. The detectors correctly identified AI-generated essays around 70% of the time. However, when prompted to “elevate the provided text by employing literary language,” ChatGPT-generated essays were only correctly classified as AI-generated 3.3% of the time. Similar results were observed with scientific abstracts. This ease of fooling the detectors raises questions about their overall effectiveness.
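
The setup is easy to reproduce in outline. The sketch below shows how such a two-step prompt might be issued with the OpenAI Python SDK; the second prompt quotes the study’s wording, while the model name, first prompt, and helper function are assumptions made for illustration, not the researchers’ actual code.

```python
# A hypothetical sketch of the two-step prompting described above,
# using the OpenAI Python SDK. Only the "elevate" prompt wording comes
# from the study; everything else here is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model, not named by the study
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: a plain response to an admissions-style prompt.
essay = ask("Write a US college admission essay about overcoming a challenge.")

# Step 2: the rewrite that cut detection rates to 3.3% in the study.
polished = ask(f"Elevate the provided text by employing literary language:\n\n{essay}")
```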

Addressing Limitations and Ensuring Equitable Use

To improve detectors, the researchers suggest comparing multiple writings on the same topic, with both human and AI responses in the mix, so that texts can be clustered and judged relative to one another rather than in isolation. They also argue that everyone affected by generative AI models such as ChatGPT should be involved in discussions about their acceptable use. The study advises against deploying GPT detectors in evaluative or educational settings, particularly when assessing the work of non-native English speakers, and calls for further research and responsible development across the AI generation and detection landscape.
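
The clustering suggestion could look something like the sketch below, which reuses the perplexity() helper from the first snippet together with scikit-learn’s KMeans; it illustrates the idea of judging texts relative to their peers and is not the researchers’ implementation.

```python
# An illustrative sketch of the clustering idea: score several essays
# written for the same prompt and compare them to each other, rather
# than thresholding any single essay in isolation. Reuses perplexity()
# from the earlier sketch.
import numpy as np
from sklearn.cluster import KMeans

def split_by_perplexity(essays: list[str]) -> dict[str, list[str]]:
    scores = np.array([[perplexity(e)] for e in essays])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
    # The cluster with the lower mean perplexity is the one a detector
    # would suspect of being AI-generated.
    suspect = int(np.argmin(km.cluster_centers_.ravel()))
    groups: dict[str, list[str]] = {"suspect_ai": [], "likely_human": []}
    for label, essay in zip(km.labels_, essays):
        groups["suspect_ai" if label == suspect else "likely_human"].append(essay)
    return groups
```

Because every essay is scored against the same prompt and the same peer group, a writer whose prose is simply more predictable than average is less likely to be flagged outright.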

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. Some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not affect the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
