ChatGPT account login credentials of users were compromised, OpenAI confirms

OpenAI has officially acknowledged that the login credentials of some ChatGPT users were compromised, leading to unauthorized access and misuse of their accounts. The confirmation comes in response to a recent Ars Technica report, based on reader-submitted screenshots, which suggested that ChatGPT was leaking private conversations, including sensitive details such as usernames and passwords, a claim the company denies.

OpenAI said its fraud and security teams were actively investigating the matter and disputed the initial Ars Technica report as inaccurate. According to the company, compromised login credentials allowed a malicious actor to access and misuse the affected accounts; the leaked chat history and files were the result of that unauthorized access, not a case of ChatGPT displaying another user’s history.

The affected user, whose account was reportedly compromised, said they did not believe their account had been accessed by anyone else. OpenAI emphasized that the ongoing investigation would shed more light on the extent of the breach and the steps needed to address the security issue.

Ars Technica had originally reported that ChatGPT was displaying private conversations, raising concerns about the exposure of sensitive information. The affected user discovered conversations in their chat history that did not belong to them after using ChatGPT for an unrelated query.

The leaked conversations included details from a support system used by employees of a pharmacy prescription drug portal, exposing troubleshooting issues, the app’s name, store numbers, and additional login credentials. Another leaked conversation revealed details about a presentation and an unpublished research proposal.

This incident adds to a series of past security concerns around ChatGPT. In March 2023, a bug reportedly leaked users’ chat titles, and in November 2023, researchers were able to extract private training data from the underlying language model by manipulating queries.

Users are therefore advised to exercise caution when using AI chatbots like ChatGPT, especially bots created by third parties. The absence of standard security features on the ChatGPT site, such as two-factor authentication (2FA) or the ability to review recent logins, has also been highlighted, raising concerns about the platform’s security measures.

Source: Business Today

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
