Dropbox, Figma CEOs back Lamini, a startup building a generative AI platform for enterprises


Lamini, a Palo Alto-based startup building a platform to help enterprises deploy generative AI tech, has raised $25 million from investors including Stanford computer science professor Andrew Ng.

Lamini, co-founded several years ago by Sharon Zhou and Greg Diamos, has an interesting sales pitch.

Many generative AI platforms are far too general-purpose, Zhou and Diamos argue, and don’t have solutions and infrastructure geared to meet the needs of corporations. In contrast, Lamini was built from the ground up with enterprises in mind, and is focused on delivering high generative AI accuracy and scalability.

“The top priority of nearly every CEO, CIO and CTO is to take advantage of generative AI within their organization with maximal ROI,” Zhou, Lamini’s CEO, told TechCrunch. “But while it’s easy to get a working demo on a laptop for an individual developer, the path to production is strewn with failures left and right.”

To Zhou’s point, many companies have expressed frustration with the hurdles to meaningfully embracing generative AI across their business functions.

According to a March poll from MIT Insights, only 9% of organizations have widely adopted generative AI despite 75% having experimented with it. Top hurdles run the gamut from a lack of IT infrastructure and capabilities to poor governance structures, insufficient skills and high implementation costs. Security is a major factor, too — in a recent survey by Insight Enterprises, 38% of companies said security was impacting their ability to leverage generative AI tech.

So what’s Lamini’s answer?

Zhou says that “every piece” of Lamini’s tech stack has been optimized for enterprise-scale generative AI workloads, from the hardware to the software, including the engines used to support model orchestration, fine-tuning, running and training. “Optimized” is a vague word, granted, but Lamini is pioneering one step that Zhou calls “memory tuning,” which is a technique to train a model on data such that it recalls parts of that data exactly.

Memory tuning can potentially reduce hallucinations (instances when a model makes up facts in response to a request), Zhou claims.

“Memory tuning is a training paradigm — as efficient as fine-tuning, but goes beyond it — to train a model on proprietary data that includes key facts, numbers and figures so that the model has high precision,” Nina Wei, an AI designer at Lamini, told me via email, “and can memorize and recall the exact match of any key information instead of generalizing or hallucinating.”
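Lamini hasn't published implementation details, but the property Wei describes — exact-match recall of key facts rather than paraphrase or hallucination — can at least be made concrete with a toy evaluation harness. Everything below is hypothetical and illustrative (the questions, the `tuned_model` stub, and the metric name are my own inventions, not Lamini's API):

```python
# Illustrative sketch only: measure "exact recall", the property that
# memory tuning is claimed to optimize for. A memory-tuned model would
# reproduce key facts verbatim; a generic model might paraphrase or
# hallucinate, scoring lower on this metric.

KEY_FACTS = {
    "What year was the company founded?": "2017",
    "What is the SKU of the flagship product?": "ZX-400",
}

def exact_recall(answer_fn, facts):
    """Fraction of key facts the model reproduces as an exact string match."""
    hits = sum(
        1 for question, expected in facts.items()
        if answer_fn(question).strip() == expected
    )
    return hits / len(facts)

# Stand-in for a memory-tuned model: answers drawn verbatim from the
# memorized data. (A real model would be a fine-tuned LLM, not a dict.)
def tuned_model(question):
    return KEY_FACTS.get(question, "unknown")

print(exact_recall(tuned_model, KEY_FACTS))  # → 1.0 for perfect memorization
```

The point of the metric is that it rewards verbatim reproduction: an answer like "around 2017, I believe" would count as a miss even though it is approximately right, which is exactly the generalizing behavior Wei says memory tuning is meant to avoid.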

I’m not sure I buy that. “Memory tuning” appears to be more a marketing term than an academic one; there aren’t any research papers about it — none that I managed to turn up, at least. I’ll leave it to Lamini to show evidence that its “memory tuning” outperforms the other hallucination-reducing techniques that have been, and are being, attempted.

Fortunately for Lamini, memory tuning isn’t its only differentiator.

Zhou says the platform can operate in highly secured environments, including air-gapped ones. Lamini lets companies run, fine-tune, and train models on a range of configurations, from on-premises data centers to public and private clouds. And it scales workloads “elastically,” reaching over 1,000 GPUs if the application or use case demands it, Zhou says.

“Incentives are currently misaligned in the market with closed source models,” Zhou said. “We aim to put control back into the hands of more people, not just a few, starting with enterprises who care most about control and have the most to lose from their proprietary data owned by someone else.”

Lamini’s co-founders are, for what it’s worth, quite accomplished in the AI space. They’ve also separately brushed shoulders with Ng, which no doubt explains his investment.

Zhou was previously a faculty member at Stanford, where she headed a group researching generative AI. Prior to receiving her doctorate in computer science under Ng, she was a machine learning product manager at Google Cloud.

Diamos, for his part, co-founded MLCommons, the engineering consortium dedicated to creating standard benchmarks for AI models and hardware, as well as the MLCommons benchmarking suite, MLPerf. He also led AI research at Baidu, where he worked with Ng while the latter was chief scientist there. Diamos was also a software architect on Nvidia’s CUDA team.

The co-founders’ industry connections appear to have given Lamini a leg up on the fundraising front. In addition to Ng, Figma CEO Dylan Field, Dropbox CEO Drew Houston, OpenAI co-founder Andrej Karpathy, and — strangely enough — Bernard Arnault, the CEO of luxury goods giant LVMH, have all invested in Lamini.

AMD Ventures is also an investor (a bit ironic considering Diamos’ Nvidia roots), as are First Round Capital and Amplify Partners. AMD got involved early, supplying Lamini with data center hardware, and today, Lamini runs many of its models on AMD Instinct GPUs, bucking the industry trend.

Lamini makes the lofty claim that its model training and running performance is on par with that of equivalent Nvidia GPUs, depending on the workload. Since we’re not equipped to test that claim, we’ll leave it to third parties.

To date, Lamini has raised $25 million across seed and Series A rounds (Amplify led the Series A). Zhou says the money is being put toward tripling the company’s 10-person team, expanding its compute infrastructure, and kicking off development into “deeper technical optimizations.”

There are a number of enterprise-oriented, generative AI vendors that could compete with aspects of Lamini’s platform, including tech giants like Google, AWS and Microsoft (via its OpenAI partnership). Google, AWS and OpenAI, in particular, have been aggressively courting the enterprise in recent months, introducing features like streamlined fine-tuning, private fine-tuning on private data, and more.

I asked Zhou about Lamini’s customers, revenue and overall go-to-market momentum. She wasn’t willing to reveal much at this somewhat early juncture, but said that AMD (via the AMD Ventures tie-in), AngelList and NordicTrack are among Lamini’s early (paying) users, along with several undisclosed government agencies.

“We’re growing quickly,” she added. “The number one challenge is serving customers. We’ve only handled inbound demand because we’ve been inundated. Given the interest in generative AI, we’re not representative in the overall tech slowdown — unlike our peers in the hyped AI world, we have gross margins and burn that look more like a regular tech company.”

Amplify general partner Mike Dauber said, “We believe there’s a massive opportunity for generative AI in enterprises. While there are a number of AI infrastructure companies, Lamini is the first one I’ve seen that is taking the problems of the enterprise seriously and creating a solution that helps enterprises unlock the tremendous value of their private data while satisfying even the most stringent compliance and security requirements.”



Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
