AI Has Already Broken Privacy. What’s Next?


As AI continues to evolve, its impact on privacy and security has become a concern for many. A recent survey revealed that 69% of Indians cannot distinguish between an AI agent and a real human voice, highlighting a critical vulnerability in digital interactions. 

Commenting on this finding, Pushkar Shanbhag, associate research director at IDC, said, “In India, 69% of the population can’t differentiate whether they are interacting with AI or a human online and this presents unique challenges for both privacy and security.”

Further, Johanne Ulloa, the director of solutions consulting at LexisNexis Risk Solutions, warned of the dangers AI poses in phishing attacks. “Chatbots can still be misused to enhance the effectiveness of phishing emails,” he said. 

There are few safeguards preventing chatbots from generating convincing phishing messages that deceive users into sharing sensitive information. Recent advances in text-to-speech technology, such as OpenAI’s GPT-4o model, have added to this anxiety, with realistic AI voices making fraud harder to identify.

These AI systems can now replicate any voice, posing a unique threat to security. Ulloa recounted instances where individuals received messages appearing to be from family members, only to discover their voices had been spoofed. 

Interestingly, a majority of people now prefer interacting with AI agents over humans, with a growing number choosing to resolve issues through machine-driven interactions.

This ability to manipulate voice data increases the potential for AI-related crimes, such as impersonation scams, and is a significant concern as AI continues to integrate into daily life.

With AI agents becoming integrated across telephony, WhatsApp and mobile applications, the need for solutions to mitigate misuse of the technology is urgent. The question is whether existing solutions are enough to address the growing number of AI-related crimes.

Story of India and Privacy

India has seen a sharp 46% year-over-year increase in cyberattacks, demonstrating the pressing need for stronger cybersecurity measures. The Digital Personal Data Protection (DPDP) Act of 2023 is a step in the right direction, emphasising the importance of robust data management. 

However, long-term success will depend on how well organisations adapt to the evolving landscape.

Glenn Gore, the CEO of Affinidi, stressed, “Cybersecurity must evolve with the times, and in a world driven by AI and automation, companies that can create real-time, scalable solutions to prevent data breaches will be the ones that succeed.”

As India becomes increasingly digitised, issues like consent fatigue also come to the fore. According to Na. Vijayashankar (Naavi), the chairman of the Foundation of Data Protection Professionals in India, “Consent fatigue is a growing problem in India’s digital space, and privacy laws like the DPDP Act aim to give more control back to the consumer while ensuring companies remain compliant.”

One area where these privacy concerns are particularly pronounced is facial recognition technology (FRT) used in the Digi Yatra initiative for streamlined airport experiences. While the Ministry of Civil Aviation (MoCA) has assured the public that data collected is purged within 24 hours, the policy still allows government agencies access to passenger data, creating potential privacy loopholes.

Evan Selinger and Woodrow Hartzog, experts in privacy law, argue that consent for FRT is inherently flawed because individuals cannot fully understand the risks to their autonomy. They explain that facial recognition compromises the concept of “obscurity”, which is essential to maintaining privacy. Without a robust legal framework in India to protect citizens, the introduction of FRT leaves the public vulnerable to data misuse.

Speaking at Cypher 2024, India’s biggest AI conference, Ram Kunchur, former head of product innovation at Digi Yatra Foundation, said their consent management system ensures the data cannot be accessed beyond 24 hours after flight departure.

Companies like Affinidi are working to offer solutions too. Their Trust Network integrates data from multiple authoritative sources while maintaining its integrity, ensuring that users can securely share information. By giving users control over consent, Affinidi helps businesses verify identities, thereby securing transactions in a world where AI is rapidly gaining influence.

Privacy in the AI Era

The rise of AI, especially LLMs and chatbots, brings new privacy concerns. Are our personal details being used in AI training, and could chatbots piece together information from our online lives and share it?

These pressing issues are explored in the white paper ‘Rethinking Privacy in the AI Era’ by Stanford HAI’s Jennifer King and Caroline Meinhardt, which highlighted how AI systems intensify privacy risks. 

It mentioned a few key movements and tech solutions to address data privacy concerns. One notable example is Apple’s App Tracking Transparency (ATT), launched in 2021. Now, iPhone users are asked if they want apps to track their activity across other apps and websites, and an estimated 80-90% choose to say no.

Another solution is Global Privacy Control (GPC), a browser feature that automatically signals websites that the user opts out of the sale or sharing of their personal data. While some browsers, like Firefox and Brave, support it, larger ones like Microsoft Edge and Google Chrome don’t.

A recent proposal in California aims to make this opt-out feature mandatory across all browsers, ensuring better control over personal data.
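Under the GPC specification, a participating browser attaches a `Sec-GPC: 1` header to its requests (and exposes `navigator.globalPrivacyControl` to scripts). A minimal sketch of how a website's backend might detect the signal, assuming the request headers arrive as a plain dictionary (the function name and example headers are illustrative, not from any particular framework):

```python
def has_gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC spec, participating browsers send `Sec-GPC: 1` with each
    request; HTTP header names are case-insensitive, so normalise first.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


# Hypothetical requests for illustration:
print(has_gpc_opt_out({"Sec-GPC": "1", "User-Agent": "Brave"}))  # signal present
print(has_gpc_opt_out({"User-Agent": "Chrome"}))                 # no signal
```

A server honouring the signal would then suppress data-sale and third-party-sharing flows for that request, which is exactly the behaviour the California proposal would make mandatory to respect.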

India, now the most targeted country for cyber crimes, accounts for 13.7% of global attacks. With evolving threats, stricter data privacy regulations and growing demands for consumer trust, Indian enterprises are turning to AI-driven cybersecurity solutions to safeguard sensitive data and maintain privacy in this new digital era.

“The cost of not prioritising privacy and cybersecurity will soon outweigh the cost of implementing them. Heavy penalties and reputational damage are just a few of the risks enterprises face today,” said Gore. 

A key aspect of the DPDP Act is its penalty clause, which imposes significant fines for non-compliance by data fiduciaries, reaching up to INR 250 crore. 

The post AI Has Already Broken Privacy. What’s Next? appeared first on AIM.




Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
