IT Ministry Removes Approval Requirement For Rollout Of New AI Models



SUMMARY

The IT ministry announced that the advisory issued on March 15 supersedes the previous one from March 1

In the revised advisory, the government said that under-tested or unreliable AI products should be labelled with a disclaimer indicating that outputs generated by such products may be unreliable

As per the March 1 advisory, digital platforms were mandated to seek prior approval before launching any AI product in India

The Ministry of Electronics and Information Technology (MeitY) has revised its earlier advisory to the largest social media companies in the country on the use of artificial intelligence, changing a provision that mandated intermediaries and platforms to get government approval before launching “under-tested” or “unreliable” AI models.

The IT ministry announced that the advisory issued on March 15 supersedes the previous one from March 1. The earlier requirement for platforms to seek “explicit permission” from the Centre before deploying “under-testing/unreliable” AI models has been removed from the new advisory.

As per Moneycontrol, which reviewed a copy of the revised advisory issued by the IT ministry, “The advisory is issued in suppression of advisory… dated March 1, 2024.”

In the updated version, intermediaries are no longer obligated to submit an action taken-cum-status report, though compliance is still required with immediate effect. While the remaining obligations in the revised advisory are unchanged, the language has been softened.

As per the March 1 advisory, digital platforms were mandated to seek prior approval before launching any AI product in India. The directive, sent to digital intermediaries, also required platforms to label under-trial AI models and ensure that no unlawful content was hosted on their sites. The MeitY advisory further warned of penal consequences for non-compliance.

However, in the revised advisory, the government stated that under-tested or unreliable AI products should be labelled with a disclaimer indicating that outputs generated by such products may be unreliable.

The advisory, as cited by Moneycontrol, said, “Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated.”

The new advisory stresses that AI models should not be used to share content that violates Indian law. Platforms must ensure their AI algorithms are free of bias and do not threaten the integrity of the electoral process, and they should use consent pop-ups to warn users about potentially unreliable output.

Reportedly, the updated advisory also focuses on identifying deepfakes and misinformation, instructing platforms to label content or embed it with unique identifiers. This applies to audio, visual, text, and audio-visual content, allowing easier identification of potential misinformation or deepfakes, even though the advisory does not define “deepfake.”

MeitY also requires labels, metadata, or unique identifiers to indicate if content is artificially generated or modified, attributing it to the intermediary’s computer resource. Additionally, if users make changes, the metadata should identify them or their computer resource. 
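The advisory reportedly does not prescribe any particular technical format for these labels, metadata, or identifiers. Purely as an illustrative sketch, the Python snippet below shows one way an intermediary might attach provenance metadata to AI-generated content, using a content hash as the unique identifier and appending a trail entry whenever a user modifies it; the ProvenanceRecord structure and all field names here are hypothetical and are not taken from the advisory.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    """Hypothetical provenance label for AI-generated or AI-modified content."""
    content_id: str                 # unique identifier derived from the content itself
    ai_generated: bool              # whether the content was produced by an AI model
    origin_resource: str            # the intermediary's computer resource that produced it
    created_at: str                 # UTC timestamp of generation
    modifications: list = field(default_factory=list)  # trail of later user edits


def label_content(content: bytes, origin_resource: str) -> ProvenanceRecord:
    """Create a provenance record for newly generated content."""
    content_id = hashlib.sha256(content).hexdigest()
    return ProvenanceRecord(
        content_id=content_id,
        ai_generated=True,
        origin_resource=origin_resource,
        created_at=datetime.now(timezone.utc).isoformat(),
    )


def record_modification(record: ProvenanceRecord, user_resource: str) -> None:
    """Append an entry identifying the user or computer resource that changed the content."""
    record.modifications.append({
        "modified_by": user_resource,
        "modified_at": datetime.now(timezone.utc).isoformat(),
    })


if __name__ == "__main__":
    # Hypothetical example: a platform labels a generated image, then logs a user edit
    record = label_content(b"synthetic image bytes", origin_resource="example-platform/genai-service")
    record_modification(record, user_resource="user-device-1234")
    print(json.dumps(asdict(record), indent=2))
```

Such a record could, for instance, be embedded in a file’s metadata fields or served alongside the content, though the advisory leaves the implementation to the platforms.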

The communication was sent to eight major social media intermediaries: Facebook, Instagram, WhatsApp, Google/YouTube (for Gemini), Twitter, Snap, Microsoft/LinkedIn (for OpenAI), and ShareChat. Adobe, Sarvam AI, and Ola’s Krutrim AI did not receive the advisory.

The March 1 advisory faced criticism from AI startups, with many founders expressing concerns on X about its potential impact on generative AI startups.

“Bad move by India,” remarked Aravind Srinivas, CEO of Perplexity.AI.

Similarly, Pratik Desai, founder of Kissan AI, which offers AI-backed agricultural assistance, described the move as demotivating.

Addressing the criticism over the advisory at the time, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said, “[the] advise to those deploying lab level /undertested AI platforms onto public Internet and that cause harm or enable unlawful content – to be aware that platforms have clear existing obligations under IT and criminal law. So best way to protect yourself is to use labelling and explicit consent and if your a major platform take permission from govt before you deploy error prone platforms.”

The minister also reassured that the country remains supportive of AI, given its potential to expand India’s digital and innovation ecosystem.




