Hackers discuss use of ChatGPT, other AI tools for illegal activities: Report

While tech companies look to integrate AI technology into their workflows, hackers are exploring ways to use AI chatbots for illegal activities, according to a report based on posts on the dark web.
According to Kaspersky’s Digital Footprint Intelligence service, nearly 3,000 dark web posts discussed using ChatGPT and other LLMs for illicit schemes, from creating malicious alternatives to the chatbot to jailbreaking the original and beyond.
“Stolen ChatGPT accounts and services offering their automated creation en masse are also flooding dark web channels, reaching another 3,000 posts,” the study by the Russian cybersecurity company added.
Kaspersky’s service discovered the posts over the course of 2023 and said that the chatter peaked in March.
“Threat actors are actively exploring various schemes to implement ChatGPT and AI. Topics frequently include the development of malware and other types of illicit use of language models, such as processing of stolen user data, parsing files from infected devices, and beyond,” said Alisa Kulishenko, digital footprint analyst at Kaspersky.
Alternatives to ChatGPT
The report said that the popularity of AI tools has led to the integration of automated responses from ChatGPT or its equivalents into some cybercriminal forums. It added that hackers often share jailbreaks through various dark web channels and “devise ways to exploit legitimate tools, such as those for pentesting, based on models for malicious purposes.”

Hackers are also paying considerable attention to projects such as XXXGPT and FraudGPT, which are marketed on the dark web as alternatives to ChatGPT. These alternatives reportedly offer additional functionality and lack the restrictions that limit legitimate chatbots.
Stolen ChatGPT accounts are on sale
Another threat to users and companies is the market for accounts for the paid version of ChatGPT. In 2023, a further 3,000 posts (in addition to those mentioned above) advertised ChatGPT accounts for sale across the dark web and shadow Telegram channels. These posts either distribute stolen accounts or promote auto-registration services that create accounts on request.
“The automated nature of cyberattacks often means automated defenses. Nonetheless, staying informed about attackers’ activities is crucial to being ahead of adversaries in terms of corporate cybersecurity,” Kulishenko said.
Kulishenko said that while AI tools are not inherently dangerous, cybercriminals are trying to find efficient ways of using them that could increase the number of cyberattacks. She added, however, that generative AI and chatbots are unlikely to revolutionise the attack landscape in 2024.

