OpenAI, Microsoft Block ChatGPT Hackers in China, North Korea

ChatGPT developer OpenAI said it teamed up with its top investor, Microsoft, to thwart five “state-affiliated” cyber attacks.

The cyber attacks, OpenAI said on Wednesday, came from two China-affiliated groups, Charcoal Typhoon and Salmon Typhoon, as well as from Iran-affiliated Crimson Sandstorm, North Korea-affiliated Emerald Sleet, and Russia-affiliated Forest Blizzard.

The groups attempted to use GPT-4 for company and cybersecurity tool research, code debugging, script generation, phishing campaigns, translating technical papers, malware detection evasion, and satellite communication and radar technology research, OpenAI said. The accounts were terminated after they were identified.

“We have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities,” OpenAI said in a blog post, which also shared the firm’s “approach to detect and disrupt such actors in order to promote information sharing and transparency regarding their activities.”

OpenAI and Microsoft did not immediately respond to Decrypt’s request for comment.

“The vast majority of people use our systems to help improve their daily lives, from virtual tutors for students to apps that can transcribe the world for people who are visually impaired,” OpenAI said. “As is the case with many other ecosystems, there are a handful of malicious actors that require sustained attention so that everyone else can continue to enjoy the benefits.”

While OpenAI successfully stopped these attempts, the company acknowledged that it cannot catch every misuse.

Following a surge of AI-generated deepfakes and scams after the launch of ChatGPT, policymakers stepped up scrutiny of generative AI developers. In September, OpenAI announced an initiative to beef up the cybersecurity surrounding its AI models, including turning to third-party “red teams” to find holes in OpenAI security.

Despite OpenAI’s investment in cybersecurity and its measures to stop ChatGPT from producing malicious, racist, or hazardous responses, hackers have found ways to jailbreak the program and make the chatbot do just that. In October, researchers at Brown University discovered that using less common languages like Zulu and Gaelic could bypass ChatGPT’s restrictions.

OpenAI emphasized the need to stay ahead of evolving threats and highlighted its approach to securing its AI models, including transparency, collaboration with other AI developers, and learning from real-world cyber attacks.

Last week, over 200 organizations, including OpenAI, Microsoft, Anthropic, and Google, joined with the Biden Administration to form the AI Safety Institute and U.S. AI Safety Institute Consortium (AISIC), aimed at developing artificial intelligence safely, fighting AI-generated deepfakes, and addressing cybersecurity concerns.

“By continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else,” OpenAI said.

Edited by Ryan Ozawa.

