AI is becoming the new focus of tech giants. Google, Microsoft, OpenAI, and other big tech firms are developing AI and ML models to revolutionise how we work in the digital space. However, the accessibility of AI also brings its own challenges, such as the spread of misinformation through AI-generated content. To combat this, Meta has partnered with the Misinformation Combat Alliance (MCA) to launch a dedicated fact-checking helpline on WhatsApp to fight deepfakes and AI-generated misinformation.
In an official announcement on social media, the MCA said the new helpline will be available to the public in March 2024. It will let users report and flag misleading AI-generated media that can deceive people on matters of public importance, commonly known as deepfakes, by sending it to a WhatsApp chatbot that offers multilingual support in English and three regional languages. In addition to handling reports of AI deepfakes, the helpline will also help people connect with verified and credible information.
According to the MCA, users can send messages to the helpline, which will then be checked and validated by fact-checkers, industry partners, and digital labs. They will determine the authenticity of the content and expose it if it is false or manipulated. “We will work closely with member fact-checking organizations as well as industry partners and digital labs to assess and verify the content and respond to the messages accordingly, debunking false claims and misinformation,” said the MCA.
“The focus of the program is to implement a four-pillar approach – detection, prevention, reporting and driving awareness around the escalating spread of deepfakes along with building a critical instrument that allows citizens to access reliable information to fight the spread of such misinformation,” the MCA added.
Notably, while Meta is developing the WhatsApp chatbot, the Misinformation Combat Alliance is setting up a central unit for deepfake analysis, which will handle all messages received through the helpline. Meta said it is collaborating with 11 independent fact-checking organisations to identify, validate, and examine misinformation on the platform.
“The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users,” said Bharat Gupta, president of Misinformation Combat Alliance.
Social media platforms like WhatsApp have long faced the challenge of misinformation, as people often share false or misleading messages with others. Meta has taken some steps to address this issue, such as limiting message forwarding and removing fake accounts.
Recently, Meta also announced that it will label AI-generated images on its platforms, including Facebook, Instagram and Threads. This will help users distinguish between real and synthetic photos, which can look very realistic. The labels will be applied to any content that carries industry-standard indicators of having been created by AI.
Source: Business Today