Google open-sources tools to support AI model development

In a typical year, Cloud Next — one of Google’s two major annual developer conferences, the other being I/O — almost exclusively features managed and otherwise closed-source, gated-behind-locked-down-APIs products and services. But this year, whether to foster developer goodwill or advance its ecosystem ambitions (or both), Google debuted a number of open-source tools primarily aimed at supporting generative AI projects and infrastructure.

The first, MaxDiffusion, which Google quietly released back in February, is a collection of reference implementations of various diffusion models — models like the image generator Stable Diffusion — that run on XLA devices. “XLA” stands for Accelerated Linear Algebra, an admittedly awkward acronym for the compiler that optimizes and speeds up specific types of AI workloads, including fine-tuning and serving.

Google’s own tensor processing units (TPUs) are XLA devices, as are recent Nvidia GPUs.
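
For a sense of what “running on an XLA device” means in practice, here is a minimal, illustrative sketch in JAX, the framework MaxDiffusion (and MaxText, below) is built on. The `denoise_step` function is a made-up stand-in for a diffusion update, not actual MaxDiffusion code; the point is that the same jit-compiled function runs on a TPU, GPU, or CPU backend without changes.

```python
import jax
import jax.numpy as jnp

@jax.jit  # traced once, then compiled by XLA for whatever accelerator is attached
def denoise_step(latents, noise, guidance_scale):
    # Toy stand-in for one diffusion denoising update; the real models are far larger.
    return latents - guidance_scale * noise

latents = jnp.ones((4, 64, 64))
noise = jnp.full((4, 64, 64), 0.1)

print(jax.devices())  # e.g. [TpuDevice(id=0), ...] on a TPU VM, CUDA devices on Nvidia GPUs
print(denoise_step(latents, noise, 7.5).shape)  # (4, 64, 64)
```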

Beyond MaxDiffusion, Google’s launching JetStream, a new engine to run generative AI models — specifically text-generating models (so not Stable Diffusion). Currently limited to TPUs, with GPU compatibility supposedly coming in the future, JetStream offers up to 3x higher “performance per dollar” for models like Google’s own Gemma 7B and Meta’s Llama 2, Google claims.

“As customers bring their AI workloads to production, there’s an increasing demand for a cost-efficient inference stack that delivers high performance,” Mark Lohmeyer, Google Cloud’s GM of compute and machine learning infrastructure, wrote in a blog post shared with TechCrunch. “JetStream helps with this need … and includes optimizations for popular open models such as Llama 2 and Gemma.”

Now, “3x” improvement is quite a claim to make, and it’s not exactly clear how Google arrived at that figure. Using which generation of TPU? Compared to which baseline engine? And how’s “performance” being defined here, anyway?
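
For context, “performance per dollar” for inference is usually some throughput figure divided by the accelerator’s hourly price, which means the same “3x” can come from faster serving, cheaper hardware, or a mix of both. The numbers below are hypothetical, purely to illustrate the arithmetic; they are not Google’s benchmark figures.

```python
def perf_per_dollar(tokens_per_second: float, dollars_per_hour: float) -> float:
    # Tokens generated per dollar of accelerator time.
    return tokens_per_second * 3600 / dollars_per_hour

# Hypothetical numbers for illustration only.
baseline  = perf_per_dollar(tokens_per_second=1_000, dollars_per_hour=4.00)
jetstream = perf_per_dollar(tokens_per_second=1_800, dollars_per_hour=2.40)
print(jetstream / baseline)  # 3.0: the ratio depends on both the baseline engine and the hardware price
```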

I’ve asked Google all these questions and will update this post if I hear back.

Second-to-last on the list of Google’s open-source contributions are new additions to MaxText, Google’s collection of text-generating AI models targeting TPUs and Nvidia GPUs in the cloud. MaxText now includes Gemma 7B, OpenAI’s GPT-3 (the predecessor to GPT-4), Llama 2 and models from AI startup Mistral — all of which Google says can be customized and fine-tuned to developers’ needs.

“We’ve heavily optimized [the models’] performance on TPUs and also partnered closely with Nvidia to optimize performance on large GPU clusters,” Lohmeyer said. “These improvements maximize GPU and TPU utilization, leading to higher energy efficiency and cost optimization.”

Finally, Google’s collaborated with Hugging Face, the AI startup, to create Optimum TPU, which provides tooling to bring certain AI workloads to TPUs. The goal is to reduce the barrier to entry for getting generative AI models onto TPU hardware, according to Google — in particular text-generating models.

But at present, Optimum TPU is a bit bare bones. The only model it works with is Gemma 7B. And Optimum TPU doesn’t yet support training generative models on TPUs — only running them.
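
For the curious, a rough sketch of what Gemma 7B inference through Optimum TPU presumably looks like follows. It assumes the library mirrors the standard transformers loading pattern, as Hugging Face’s other Optimum backends do; the `optimum.tpu` import path shown is an assumption, so check the project’s documentation for the exact interface.

```python
from transformers import AutoTokenizer
from optimum.tpu import AutoModelForCausalLM  # assumed import path, modeled on other Optimum backends

model_id = "google/gemma-7b"  # currently the only model Optimum TPU supports
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads the model onto the available TPU

inputs = tokenizer("Explain what an XLA device is.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # inference only; TPU training isn't supported yet
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```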

Google’s promising improvements down the line.



