Snap previews its real-time image model that can generate AR experiences

At the Augmented World Expo on Tuesday, Snap teased an early version of its real-time, on-device image diffusion model that can generate vivid AR experiences. The company also unveiled generative AI tools for AR creators.

Snap co-founder and CTO Bobby Murphy said onstage that the model is small enough to run on a smartphone and fast enough to re-render frames in real time, guided by a text prompt.

Murphy said that while the emergence of generative AI image diffusion models has been exciting, these models need to be significantly faster before they can be impactful for augmented reality, which is why Snap's teams have been working to accelerate machine learning models.
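The constraint Murphy describes can be made concrete with a sketch: for AR, the model must re-render every camera frame within a per-frame time budget (roughly 33 ms at 30 fps). The snippet below is a hypothetical illustration only; `diffusion_step` is a stand-in stub, not Snap's actual model or API, and the loop just shows the data flow and the latency check.

```python
import time

def diffusion_step(frame, prompt):
    """Stand-in stub for an on-device image diffusion model.
    Here it simply tags the frame with the prompt to show the data flow;
    a real model would return a re-rendered image."""
    return f"{frame}:{prompt}"

def render_loop(frames, prompt, budget_ms=33.0):
    """Re-render each incoming frame under a text prompt.
    To feel real-time at ~30 fps, each model pass has to fit
    within roughly a 33 ms budget -- the speed gap Murphy cites."""
    rendered = []
    for frame in frames:
        start = time.perf_counter()
        rendered.append(diffusion_step(frame, prompt))
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if elapsed_ms > budget_ms:
            # A real pipeline would drop or reuse frames here rather
            # than stall the camera feed.
            pass
    return rendered

print(render_loop(["frame0", "frame1", "frame2"], "watercolor"))
```

The point of the budget check is that a diffusion model fast enough for offline image generation can still be orders of magnitude too slow for this loop, which is why on-device acceleration is the gating work.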

Snapchat users will start to see Lenses with this generative model in the coming months, and Snap plans to bring it to creators by the end of the year.

Image Credits: Snap

“This and future real time on device generative ML models speak to an exciting new direction for augmented reality, and is giving us space to reconsider how we imagine rendering and creating AR experiences altogether,” Murphy said.

Murphy also announced that Lens Studio 5.0 is launching today for developers, giving them access to new generative AI tools that help them create AR effects far faster than is currently possible, shaving weeks or even months off production.

AR creators can build selfie Lenses by generating highly realistic ML face effects. They can also generate custom stylization effects that apply a realistic transformation to the user's face, body, and surroundings in real time, and can generate a 3D asset in minutes and include it in their Lenses.

In addition, AR creators can generate characters like aliens or wizards from a text or image prompt using the company's Face Mesh technology. They can also generate face masks, textures, and materials within minutes.

The latest version of Lens Studio also includes an AI assistant that can answer questions that AR creators may have.

Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

