AI compliance: ‘Algorithmic’ rider in draft data law trips firms deploying AI



The obligation of due diligence in “algorithmic software” under the new data rules is likely to impact organisations employing artificial intelligence in their products or services, especially those building models, experts said. This mandate could be difficult to comply with, they added, even though it is softer than the European Union’s approach, which bars AI companies from releasing new AI models in the region without state approval.

“Data protection laws and AI have several zones of friction,” said Supratim Chakraborty, partner at law firm Khaitan & Co. “Cardinal principles of data protection laws such as ‘purpose limitation’, ‘data minimisation’, etc., and important rights such as ‘data subject rights’ mostly come in conflict with the way AI systems function,” he said.

Compliance with erasure requests from data subjects without impacting the functionality of the model is a highly intricate task, Chakraborty added.

Even more challenging, the layered nature of data management in AI makes it difficult to establish how personal data is processed. Once trained, AI models can generate new content autonomously, which makes it hard to trace an output back to the training data that produced it.
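Chakraborty’s point about erasure requests can be made concrete with a small sketch. The class and field names below are purely illustrative assumptions, not drawn from the DPDP rules or any real compliance toolkit; the sketch only shows why deleting a raw record is easy while removing its influence from a trained model is not.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and structure are hypothetical,
# not taken from the DPDP rules or any real compliance product.

@dataclass
class TrainingDataStore:
    records: dict = field(default_factory=dict)   # record_id -> personal data
    trained_on: set = field(default_factory=set)  # ids already baked into model weights
    retraining_needed: bool = False

    def add_record(self, record_id, data):
        self.records[record_id] = data

    def train(self):
        # Once training runs, each record's influence is diffused across
        # model weights and can no longer be deleted individually.
        self.trained_on.update(self.records)

    def erase(self, record_id):
        """Honour an erasure request from a data principal."""
        self.records.pop(record_id, None)  # the raw copy is easy to delete
        if record_id in self.trained_on:
            # the weights still reflect the data: only retraining
            # (or machine unlearning) can remove its influence
            self.retraining_needed = True

store = TrainingDataStore()
store.add_record("u1", {"name": "A"})
store.train()
store.erase("u1")
print(store.records)            # raw data is gone
print(store.retraining_needed)  # but the model still "remembers"
```

The gap between the two print statements is the compliance problem the experts describe: the stored record disappears on request, but honouring the spirit of erasure would mean retraining the model.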

The draft DPDP rules, released on Friday, say data fiduciaries must observe due diligence to verify the “algorithmic software” they deploy.


This is an important step for AI governance: so far, the Ministry of Electronics and Information Technology has only issued advisories asking intermediaries and platforms to test their models and algorithms to ensure they do not permit discrimination, to inform users about the unreliability of outputs from models under testing, and to bar users from using such models in contravention of the IT Act and Rules.



“India has signalled its intent to align with global standards like the EU’s AI Act; however, the Indian approach appears broader and less defined,” said Ankit Sahni, partner at law firm Ajay Sahni & Associates. “Its open-ended nature raises questions about implementation clarity. What constitutes adequate ‘due diligence’ and the extent of scrutiny for AI systems remains undefined, potentially creating operational ambiguities.”

The EU AI Act, enacted in August 2024, mandates a risk-based approach under which AI systems are classified as ‘unacceptable’, ‘high’, ‘limited’ or ‘minimal’ risk, shaping how companies like OpenAI, Meta, Anthropic and Alibaba release new models in the region.
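The Act’s four-tier scheme can be illustrated with a small mapping. The example systems and the one-line obligation summaries below are loose paraphrases for illustration only, not the legal text.

```python
# Hedged sketch of the EU AI Act's four risk tiers; the example systems
# and obligation summaries are illustrative paraphrases, not legal text.
EXAMPLE_CLASSIFICATION = {
    "social-scoring system": "unacceptable",  # banned outright
    "cv-screening tool": "high",              # strict conformity duties
    "customer chatbot": "limited",            # transparency duties
    "spam filter": "minimal",                 # largely unregulated
}

def obligations(tier: str) -> str:
    """Rough summary of what each risk tier entails."""
    return {
        "unacceptable": "prohibited from the EU market",
        "high": "conformity assessment, logging, human oversight",
        "limited": "transparency (disclose that users face an AI)",
        "minimal": "no additional obligations",
    }[tier]

for system, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{system}: {tier} -> {obligations(tier)}")
```

The contrast with India’s draft rules is visible even at this level of caricature: the EU ties duties to a system’s risk tier, while the DPDP rider attaches a single, undefined due-diligence duty to the fiduciary.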

For instance, a new “LLM checker” by LatticeFlow showed that OpenAI’s GPT-3.5 Turbo model received a concerning score of 0.46 for its performance in preventing discriminatory output. Alibaba’s Qwen1.5 72B Chat model fared even worse, with a score of 0.37 in the same category. Meta’s Llama 2 13B Chat model scored poorly on a cybersecurity threat called prompt hijacking.
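Scores like these typically become compliance signals only once measured against a threshold. Below is a minimal sketch using the two discrimination figures reported above and an assumed 0.75 pass mark; the threshold and the key names are assumptions, not LatticeFlow’s actual methodology.

```python
# Per-category scores as reported in the article; the 0.75 cut-off
# is a hypothetical pass mark, not LatticeFlow's real methodology.
scores = {
    "gpt-3.5-turbo/discrimination": 0.46,
    "qwen1.5-72b-chat/discrimination": 0.37,
}
THRESHOLD = 0.75  # hypothetical compliance bar

# Flag every model/category pair that falls below the bar.
flagged = {name: s for name, s in scores.items() if s < THRESHOLD}

for name, s in sorted(flagged.items()):
    print(f"{name}: {s:.2f} is below the {THRESHOLD} threshold")
```

Both reported scores fall below the assumed bar, which is why benchmark tools of this kind are being watched as an early signal of how AI Act enforcement might work in practice.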

“Major AI companies like Google, Meta and OpenAI face a balancing act between compliance and innovation in these divergent regulatory landscapes,” said Anandaday Misshra, founder and managing partner of AMLEGALS.

“The draft DPDP Rules, despite their flexibility, still necessitate enhanced data management practices,” he added.

However, there is a distinction between the intent of India’s DPDP Rules and that of the EU’s AI Act.

The scope of the former is limited to significant data fiduciaries, in the context of ensuring that the rights of Data Principals (such as the right to access personal information and the rights to correction and erasure) are not at risk, explained Nakul Batra, partner at DSK Legal. “Conversely, the EU AI Act prescribes for a much broader assessment of the risks of the AI model in relation to the fundamental rights, health and safety of the general public,” he said.




Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

