Meta thinks social media can protect us from deep fakes

Deep fakes are arguably the most dangerous aspect of AI. It’s now relatively trivial to create fake photos, audio, and even video. See below for deep fakes of Morgan Freeman and Tom Cruise, for example.

But while social media has so far been used as a mechanism for distributing deep fakes, Instagram head Adam Mosseri thinks it can actually play a key role in debunking them …

How deep fakes are created

The main method used to create deep fake videos to date has been an approach known as a generative adversarial network (GAN).

One AI model, the generator, creates fake video clips, while a second model, the discriminator, is shown a mix of genuine and generated clips and tries to tell which are fake. Repeatedly running this process trains the generator to produce increasingly convincing fakes – and the discriminator to get better at spotting them.
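For readers curious about the mechanics, here is a minimal sketch of that adversarial loop, assuming PyTorch and tiny fully connected toy models operating on flattened frames; real deep fake systems use far larger, video-specific networks, so this only illustrates the two-model idea:

```python
# Minimal sketch of adversarial (GAN) training, assuming PyTorch.
# Sizes and architectures are hypothetical toy values for illustration.
import torch
import torch.nn as nn

latent_dim, frame_dim = 64, 4096  # hypothetical sizes for the toy example

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, frame_dim), nn.Tanh()
)
discriminator = nn.Sequential(
    nn.Linear(frame_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1)
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_frames):
    batch = real_frames.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Show the discriminator a mix of genuine and generated frames
    #    and train it to tell them apart.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_frames), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to produce frames the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```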

However, diffusion models like DALL-E 2 are now taking over. These are trained by progressively adding noise to real images (or video frames) and learning to reverse the process, so that entirely new imagery can be generated from scratch. Text prompts tell the model what result we want, making these tools far easier to use – and the more people use them, the more data there is to train them further.
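Again purely as an illustration, the core training trick can be sketched in a few lines, assuming PyTorch and the standard noise-prediction setup; the names and schedule below are hypothetical and not taken from any particular product:

```python
# Toy sketch of the diffusion idea, assuming PyTorch. Noise is added to a real
# image over many steps; a model is trained to predict that noise so it can
# later run the process in reverse (guided by a text prompt) to create new images.
import torch

betas = torch.linspace(1e-4, 0.02, 1000)           # hypothetical noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def add_noise(clean_image, t):
    """Forward diffusion: corrupt a clean image to noise level t."""
    noise = torch.randn_like(clean_image)
    noisy = alphas_cumprod[t].sqrt() * clean_image + (1.0 - alphas_cumprod[t]).sqrt() * noise
    return noisy, noise

clean = torch.rand(1, 3, 64, 64)                    # stand-in for a real image
noisy, target_noise = add_noise(clean, t=500)
# A denoising network would be trained to predict `target_noise` from `noisy`
# (conditioned on the text prompt), then applied step by step in reverse to
# turn pure noise into a brand-new image matching the prompt.
```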

Examples of deep fake videos

Here’s a well-known example of a Morgan Freeman deep fake, created a full three years ago, when the technology was much less sophisticated than it is today:

And another, of Tom Cruise as Iron Man:

Brits may also recognise Martin Lewis, who is well-known for offering financial advice, here in a deep fake to promote a crypto scam:

Meta exec Adam Mosseri thinks that social media can actually make things better rather than worse by helping to flag fake content – though he notes that this isn't foolproof, and that we each need to consider our sources:

Over the years we’ve become increasingly capable of fabricating realistic images, both still and moving. Jurassic Park blew my mind at age ten, but that was a $63M movie. Golden Eye for N64 was even more impressive to me four years later because it was live. We look back on these media now and they seem crude at best. Whether or not you’re a bull or a bear in the technology, generative AI is clearly producing content that is difficult to discern from recordings of reality, and improving rapidly.

A friend, @lessin, pushed me maybe ten years ago on the idea that any claim should be assessed not just on its content, but the credibility of the person or institution making that claim. Maybe this happened years ago, but it feels like now is when we are collectively appreciating that it has become more important to consider who is saying a thing than what they are saying when assessing a statement’s validity.

Our role as internet platforms is to label content generated as AI as best we can. But some content will inevitably slip through the cracks, and not all misrepresentations will be generated with AI, so we must also provide context about who is sharing so you can assess for yourself how much you want to trust their content.

It’s going to be increasingly critical that the viewer, or reader, brings a discerning mind when they consume content purporting to be an account or a recording of reality. My advice is to *always* consider who it is that is speaking.

Image: Shamook
