Amazon wants to host companies’ custom generative AI models

AWS, Amazon’s cloud computing business, wants to be the go-to place companies host and fine-tune their custom generative AI models.

Today, AWS announced the launch of Custom Model Import (in preview), a new feature in Bedrock, AWS’ enterprise-focused suite of generative AI services, that allows organizations to import and access their in-house generative AI models as fully managed APIs.
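
Mechanically, the import is driven by a Bedrock import job that points at model weights staged in S3. The sketch below, written with boto3, is a rough illustration rather than anything AWS shared for this story; the bucket, IAM role, job name and region are placeholder assumptions, and the exact parameter shapes may differ.

```python
import boto3

# Control-plane Bedrock client (the region here is an assumption)
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Kick off an import job pointing at model weights staged in S3.
# The bucket, role ARN and names below are hypothetical placeholders.
job = bedrock.create_model_import_job(
    jobName="acme-llama3-finetune-import",
    importedModelName="acme-llama3-finetune",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={
        "s3DataSource": {"s3Uri": "s3://acme-model-artifacts/llama3-finetune/"}
    },
)

# Poll get_model_import_job with this ARN until the import completes
print(job["jobArn"])
```

Once the job finishes, the imported model gets an ARN that can be called through the Bedrock runtime like any catalog model, which is where the “fully managed API” framing comes from.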

Companies’ proprietary models, once imported, benefit from the same infrastructure as other generative AI models in Bedrock’s library (e.g. Meta’s Llama 3, Anthropic’s Claude 3), including tools to expand their knowledge, fine-tune them and implement safeguards to mitigate their biases.

“There have been AWS customers that have been fine-tuning or building their own models outside of Bedrock using other tools,” Vasi Philomin, VP of generative AI at AWS, told TechCrunch in an interview. “This Custom Model Import capability allows them to bring their own proprietary models to Bedrock and see them right next to all of the other models that are already on Bedrock — and use them with all of the workflows that are also already on Bedrock, as well.”

Importing custom models

According to a recent poll from Cnvrg, Intel’s AI-focused subsidiary, the majority of enterprises are approaching generative AI by building their own models and refining them for their applications. Those same enterprises say they see infrastructure, including cloud compute infrastructure, as their greatest barrier to deployment.

With Custom Model Import, AWS aims to fill that need while keeping pace with its cloud rivals. (Amazon CEO Andy Jassy foreshadowed as much in his recent annual letter to shareholders.)

For some time, Vertex AI, Google’s analog to Bedrock, has allowed customers to upload generative AI models, tailor them and serve them through APIs. Databricks, too, has long provided toolsets to host and tweak custom models, including its own recently released DBRX.

Asked what sets Custom Model Import apart, Philomin asserted that it — and by extension Bedrock — offer a wider breadth and depth of model customization options than the competition, adding that “tens of thousands” of customers today are using Bedrock.

“Number one, Bedrock provides several ways for customers to deal with serving models,” Philomin said. “Number two, we have a whole bunch of workflows around these models — and now customers’ [models] can stand right next to all of the other models that we have already available. A key thing that most people like about this is the ability to be able to experiment across multiple different models using the same workflows, and then actually take them to production from the same place.”
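
To make the “same workflows” point concrete, here is a minimal sketch of calling an imported model through the standard Bedrock runtime API, assuming an import job like the one above has completed. The model ARN is a placeholder, and the request body has to match the imported architecture’s native prompt format, so the fields shown are purely illustrative.

```python
import json
import boto3

# Runtime client, separate from the control-plane "bedrock" client
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical ARN of the model produced by the import job
model_arn = "arn:aws:bedrock:us-east-1:123456789012:imported-model/abc123example"

# The body must follow the imported model's native format; a Llama-style
# prompt/max_gen_len payload is shown here purely as an illustration.
response = runtime.invoke_model(
    modelId=model_arn,
    body=json.dumps(
        {"prompt": "Summarize last quarter's support tickets.", "max_gen_len": 256}
    ),
)

print(json.loads(response["body"].read()))
```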

So what are the alluded-to model customization options?

Philomin points to Guardrails, which lets Bedrock users configure thresholds to filter — or at least attempt to filter — models’ outputs for things like hate speech, violence and private personal or corporate information. (Generative AI models are notorious for going off the rails in problematic ways, including leaking sensitive info; AWS’ models have been no exception.) He also highlighted Model Evaluation, a Bedrock tool customers can use to test how well a model — or several — performs across a given set of criteria.

Both Guardrails and Model Evaluation are now generally available following a several-months-long preview.
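
For a sense of what “configuring thresholds” looks like in practice, here is a minimal sketch of creating a guardrail with boto3. The filter strengths, PII rule and messaging strings are illustrative, and the parameter shapes reflect my reading of Bedrock’s documented Guardrails API rather than anything AWS provided for this story.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Create a guardrail that filters hate/violence and anonymizes email addresses.
# All names, strengths and messages here are illustrative placeholders.
guardrail = bedrock.create_guardrail(
    name="example-support-assistant-guardrail",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "MEDIUM", "outputStrength": "HIGH"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't share that response.",
)

# The returned ID and version can then be referenced at inference time (e.g. via
# invoke_model's guardrailIdentifier/guardrailVersion parameters) so requests and
# responses are filtered against the configured policies.
print(guardrail["guardrailId"], guardrail["version"])
```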

I feel compelled to note here that Custom Model Import only supports three model architectures at the moment — Hugging Face’s Flan-T5, Meta’s Llama and Mistral’s models — and that Vertex AI and other Bedrock-rivaling services, including Microsoft’s AI development tools on Azure, offer more or less comparable safety and evaluation features (see Azure AI Content Safety, model evaluation in Vertex and so on).

What is unique to Bedrock, though, is AWS’ Titan family of generative AI models. And, coinciding with the release of Custom Model Import, there are several noteworthy developments on that front.

Upgraded Titan models

Titan Image Generator, AWS’ text-to-image model, is now generally available after launching in preview last November. As before, Titan Image Generator can create new images given a text description or customize existing images, for example swapping out an image background while retaining the subjects in the image.

Compared to the preview version, Titan Image Generator in GA can generate images with more “creativity,” said Philomin, without going into detail. (Your guess as to what that means is as good as mine.)

I asked Philomin if he had any more details to share about how Titan Image Generator was trained.

At the model’s debut last November, AWS was vague about which data, exactly, it used in training Titan Image Generator. Few vendors readily reveal such information; they see training data as a competitive advantage and thus keep it and info relating to it close to the chest.

Training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Plaintiffs in several cases making their way through the courts challenge vendors’ fair use defenses, arguing that text-to-image tools replicate artists’ styles without the artists’ explicit permission and let users generate new works resembling artists’ originals, for which the artists receive no payment.

Philomin would only tell me that AWS uses a combination of first-party and licensed data.

“We have a combination of proprietary data sources, but also we license a lot of data,” he said. “We actually pay copyright owners licensing fees in order to be able to use their data, and we do have contracts with several of them.”

It’s more detail than AWS offered in November. But I have a feeling that Philomin’s answer won’t satisfy everyone, particularly the content creators and AI ethicists arguing for greater transparency around generative AI model training.

In lieu of transparency, AWS says it’ll continue to offer an indemnification policy that covers customers in the event a Titan model like Titan Image Generator regurgitates (i.e. spits out a mirror copy of) a potentially copyrighted training example. (Several rivals, including Microsoft and Google, offer similar policies covering their image generation models.)

To address another pressing ethical threat — deepfakes — AWS says that images created with Titan Image Generator will, as during the preview, come with a “tamper-resistant” invisible watermark. Philomin says that, in the GA release, the watermark has been made more resistant to compression and other image edits and manipulations.

Segueing into less controversial territory, I asked Philomin whether AWS — like Google, OpenAI and others — is exploring video generation given the excitement around (and investment in) the tech. Philomin didn’t say that AWS wasn’t… but he wouldn’t hint at any more than that.

“Obviously, we’re constantly looking to see what new capabilities customers want to have, and video generation definitely comes up in conversations with customers,” Philomin said. “I’d ask you to stay tuned.”

In one last piece of Titan-related news, AWS released the second generation of its Titan Embeddings model, Titan Text Embeddings V2. Titan Text Embeddings V2 converts text to numerical representations called embeddings to power search and personalization applications. So did the first-generation Embeddings model — but AWS claims that Titan Text Embeddings V2 is overall more efficient, cost-effective and accurate.

“What the Embeddings V2 model does is reduce the overall storage [necessary to use the model] by up to four times while retaining 97% of the accuracy,” Philomin claimed, “outperforming other models that are comparable.”
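
Philomin didn’t spell out where the savings come from, but a plausible mechanism is the model’s support for smaller output vectors. The sketch below assumes the publicly documented “amazon.titan-embed-text-v2:0” model identifier and request fields; a 256-dimension vector needs a quarter of the storage of a 1,024-dimension one, which would line up with the “up to four times” figure.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID and request fields are assumptions based on Bedrock's Titan
# embedding conventions; adjust to match the documentation for your region.
body = json.dumps({
    "inputText": "Find me running shoes for trail marathons.",
    "dimensions": 256,   # smaller output vector -> less storage in the downstream index
    "normalize": True,   # unit-length vectors simplify cosine-similarity search
})

response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=body,
)

embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # expected: 256
```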

We’ll see if real-world testing bears that out.


