Google Bolsters Checks On AI-Generated Content Before Elections

SUMMARY

Google has rolled out a slew of initiatives aimed at preventing the misuse of false information, helping voters navigate AI-generated content and safeguarding its platforms from abuse

The tech giant is building faster and more adaptable enforcement systems with recent advances in its LLMs, which will enable the company to remain nimble and take action more quickly when new threats emerge

With more people using AI to create content, the company is building on the ways in which it helps audiences identify AI-generated content through several new tools and policies, said Google

In an effort to ensure a secure electoral process during the upcoming general elections, Alphabet Inc-owned Google on Tuesday (March 12) rolled out a slew of initiatives aimed at preventing the misuse of false information, helping voters navigate AI-generated content and safeguarding its platforms from abuse.

“Protecting the integrity of elections also means keeping our products and services safe from abuse. Across Google, we have long-standing policies to keep our products and platforms safe. Our policies are enforced consistently and apply to all users, regardless of content type,” said Google in a blog post.

The tech giant said that it is building faster and more adaptable enforcement systems with recent advances in its Large Language Models (LLMs), which would enable the company to remain nimble and take action even more quickly when new threats emerge.

“We rely on a combination of human reviewers and machine learning to identify and remove content that violates our policies. Our AI models are enhancing our abuse-fighting efforts, while a dedicated team of local experts across all major Indian languages are working 24X7 to provide relevant context,” it said.

It is pertinent to note that Google India’s blog post emerged days after the Centre issued an advisory to large firms, imposing certain restrictions on the launch of new AI models, after Google’s Gemini generated responses about Prime Minister Narendra Modi that were allegedly in “violation of IT Rules”.

In its blog post, Google said that with more people using AI to create content, it is building on the ways it helps audiences identify AI-generated content through several new tools and policies. These include ads disclosures, content labels on YouTube, digital watermarking and more.

As per the US-based tech major, its ads policies already prohibit the use of manipulated media to mislead people, like deepfakes or doctored content.

Besides, it has also started displaying labels for content created with YouTube’s GenAI features, such as Dream Screen, and the platform will soon require creators to disclose when they have created realistic altered or synthetic content.

The company said it is also committed to finding ways to ensure that every image generated through its products carries an embedded watermark using Google DeepMind’s SynthID.

“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses. We take our responsibility for providing high-quality information for these types of queries seriously, and are continuously working to improve our protection,” Google said.

Google is also collaborating with the Election Commission of India (ECI) to help people easily discover critical voting information on Google Search, such as how to register and how to vote, in both English and Hindi.

For news and information related to the election, YouTube’s recommendation system prominently surfaces content from authoritative sources on the video-sharing platform’s homepage, in search results, and the “Up Next” panel, it said.

“Additionally, ahead of the General Election, Google is supporting Shakti, India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India, working together to aid the early detection of online misinformation, including deepfakes, and to create a common repository that news publishers can use to tackle the challenges of misinformation at scale,” Google’s blog post said.

In its attempt to curb misinformation, Google said it has recently joined the C2PA coalition and standard, a cross-industry effort to provide more transparency and context for people on AI-generated content.

Like most other tech giants, Google has not had a smooth ride in India over the last few years. From facing multiple penalties for allegedly abusing its market dominance to its clashes with the startup ecosystem in recent months, the company has constantly been under the government’s scanner.






Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
