ChatGPT’s ‘hallucination’ problem hit with another privacy complaint in EU

OpenAI is facing another privacy complaint in the European Union. This one, which has been filed by privacy rights nonprofit noyb on behalf of an individual complainant, targets the inability of its AI chatbot ChatGPT to correct misinformation it generates about individuals.

The tendency of GenAI tools to produce information that’s plain wrong has been well documented. But it also sets the technology on a collision course with the bloc’s General Data Protection Regulation (GDPR) — which governs how the personal data of regional users can be processed.

Penalties for GDPR compliance failures can reach up to 4% of global annual turnover. Rather more importantly for a resource-rich giant like OpenAI: Data protection regulators can order changes to how information is processed, so GDPR enforcement could reshape how generative AI tools are able to operate in the EU.
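To put that 4% ceiling in concrete terms, here is a minimal sketch of the arithmetic using an entirely hypothetical turnover figure; the number is illustrative only and is not OpenAI's actual revenue.

```python
# Illustrative arithmetic only: the GDPR's 4%-of-turnover ceiling applied to a
# hypothetical figure. This is not OpenAI's actual revenue or exposure.
annual_turnover_eur = 2_000_000_000   # hypothetical global annual turnover
max_fine_eur = 0.04 * annual_turnover_eur
print(f"Maximum fine at the 4% ceiling: EUR {max_fine_eur:,.0f}")  # EUR 80,000,000
```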

OpenAI has already had to make some changes following an early intervention by Italy’s data protection authority, which briefly forced a local shutdown of ChatGPT back in 2023.

Now noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant who found the AI chatbot produced an incorrect birth date for them.

Under the GDPR, people in the EU have a suite of rights attached to information about them, including a right to have erroneous data corrected. noyb contends OpenAI is failing to comply with this obligation in respect of its chatbot’s output. It said the company refused the complainant’s request to rectify the incorrect birth date, responding that it was technically impossible for it to correct the data.

Instead, it offered to filter or block the data on certain prompts, such as those containing the complainant’s name.
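For illustration only, a prompt-level filter of the kind described above might look something like the sketch below: it suppresses output for prompts mentioning a named person rather than correcting the underlying data. The names, blocklist, and logic are hypothetical and are not OpenAI’s implementation.

```python
# Illustrative sketch only: a naive prompt-level filter that suppresses output
# about a named person instead of rectifying the underlying data. Everything
# here is hypothetical; this is not OpenAI's implementation.

BLOCKED_NAMES = {"jane example"}  # hypothetical complainant name


def filter_response(prompt: str, generate) -> str:
    """Refuse prompts that mention a blocked name; otherwise delegate to
    `generate`, a stand-in for whatever actually produces the chatbot's answer."""
    if any(name in prompt.lower() for name in BLOCKED_NAMES):
        return "I can't share information about this person."
    return generate(prompt)


if __name__ == "__main__":
    # Stand-in generator returning a canned (and possibly inaccurate) answer.
    def fake_generate(prompt: str) -> str:
        return "Jane Example was born on 1 January 1970."  # fabricated output

    print(filter_response("When was Jane Example born?", fake_generate))
    print(filter_response("What is the GDPR?", fake_generate))
```

As noyb’s complaint frames it, a filter of this sort only hides output for particular prompts; it does not rectify the inaccurate data the model reproduces.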

OpenAI’s privacy policy states that users who notice the AI chatbot has generated “factually inaccurate information about you” can submit a “correction request” through privacy.openai.com or by emailing dsar@openai.com. However, it caveats this by warning: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”

In that case, OpenAI suggests that users request the removal of their personal information from ChatGPT’s output entirely by filling out a web form.

The problem for the AI giant is that GDPR rights are not à la carte. People in Europe have a right to request rectification. They also have a right to request deletion of their data. But, as noyb points out, it’s not for OpenAI to choose which of these rights are available.

Other elements of the complaint focus on GDPR transparency concerns, with noyb contending OpenAI is unable to say where the data it generates about individuals comes from or what data the chatbot stores about people.

This is important because, again, the regulation gives individuals the right to request such information by making a so-called subject access request (SAR). Per noyb, OpenAI did not adequately respond to the complainant’s SAR, failing to disclose any information about the data processed, its sources, or its recipients.

Commenting on the complaint in a statement, Maartje de Graaf, data protection lawyer at noyb, said: “Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences. It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

noyb said it is asking the Austrian DPA to investigate its complaint about OpenAI’s data processing, and is urging the regulator to impose a fine to ensure future compliance. But it added that it’s “likely” the case will be dealt with via EU cooperation.

OpenAI is facing a very similar complaint in Poland. Last September, the local data protection authority opened an investigation into ChatGPT after a privacy and security researcher complained that he, too, was unable to get incorrect information about him corrected by OpenAI. That complaint also accuses the AI giant of failing to comply with the regulation’s transparency requirements.

The Italian data protection authority, meanwhile, still has an open investigation into ChatGPT. In January it issued a draft decision, saying it believes OpenAI has violated the GDPR in a number of ways, including in relation to the chatbot’s tendency to produce misinformation about people. The findings also touch on other core issues, such as the lawfulness of OpenAI’s data processing.

The Italian authority gave OpenAI a month to respond to its findings. A final decision remains pending.

Now, with another GDPR complaint fired at its chatbot, the risk of OpenAI facing a string of GDPR enforcements across different Member States has dialed up.

Last fall the company opened a regional office in Dublin, in a move that looks intended to shrink its regulatory risk by having privacy complaints routed through Ireland’s Data Protection Commission. The GDPR contains a mechanism meant to streamline oversight of cross-border complaints by funneling them to a single lead authority in the Member State where the company has its “main establishment.”


