The era of AI PCs has begun, say an increasing number of industry experts — and it’s growing more common to see online content that hawks the benefits of these advanced devices. What organization doesn’t want faster computing performance, longer battery life, and an intelligent onboard assistant to summarize brainstorming sessions and schedule meetings in near real time?
While the business world has certainly embraced the idea that AI will play a revolutionary role in how we work, recent studies from analysts, including Forrester, have found that many organizations are still hesitant to go “all in” on AI PCs. One of the main reasons cited in this research was difficulty understanding how the technology drives business outcomes. In other words, how does an AI PC investment increase your bottom line?
To fully understand the nuances of this wait-and-see approach, it’s essential to start with some definitions.
What Is an AI PC and What Can You Do With It?
From a hardware perspective, an AI PC is essentially a standard PC with additional AI-focused compute (typically including a dedicated neural processing unit, or NPU) and 16 GB to 32 GB (or more) of memory. On the software side, the AI model has to fit within that memory capacity while still leaving enough free space to run your other programs.
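The fit-in-memory constraint is easy to estimate: a model's resident footprint is roughly its parameter count times the bytes per parameter at a given quantization level, plus runtime overhead. Here is a minimal sketch; the 20% overhead figure and the 8 GB reserve for the OS and other programs are illustrative assumptions, not vendor numbers:

```python
# Rough estimate of whether a local LLM fits in an AI PC's memory.
# The 20% overhead (KV cache, runtime buffers) and 8 GB OS reserve
# are assumptions for illustration.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def model_footprint_gb(params_billions: float, quant: str = "int4",
                       overhead: float = 0.20) -> float:
    """Approximate resident memory (GB) for a quantized model."""
    raw_bytes = params_billions * 1e9 * BYTES_PER_PARAM[quant]
    return raw_bytes * (1 + overhead) / 1e9

def fits(params_billions: float, ram_gb: float, reserve_gb: float = 8.0,
         quant: str = "int4") -> bool:
    """Leave reserve_gb free for the OS and your other programs."""
    return model_footprint_gb(params_billions, quant) <= ram_gb - reserve_gb

# An 8B-parameter model at 4-bit quantization needs ~4.8 GB, so it fits
# comfortably on a 16 GB AI PC; a 70B model does not fit even in 32 GB.
print(fits(8, 16))
print(fits(70, 32))
```

This is why on-device assistants typically run small or aggressively quantized models, while 70B-class training and inference stay on servers.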
Standard AI PC laptops can cost up to $2,000, but once the hardware and software pieces are in place, you have a tool that can complete in minutes AI-enabled tasks that would typically take hours to do manually. For instance, you could:
- Produce a list of common customer issues and simplified solutions. Review it and send an email blast to your customers, saving them time and money while increasing their satisfaction.
- Summarize competitive products and compare them to your solutions, identifying gaps and advantages. Product marketing can now better position the product while the development team can make adjustments.
- Review white papers, medical literature, or court cases, and query your data conversationally to find a novel approach to a problem. Expert review is still required, but collecting and reviewing dozens or hundreds of documents can take weeks; an AI model trained on an industry’s specific content can compress that cycle to a few hours.
With an AI PC, your IT department can send you daily updated model files trained on your own industry-specific or company data. A good toolchain makes this process transparent to the end user. An AI PC also offloads work from your servers, giving users AI tuned for their work rather than generic cloud AI, with no monthly bill, no token limits, and no lag in the office or while traveling.
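One way such a toolchain could deliver updates transparently is to verify each incoming model file against a checksum and swap it in atomically, so the user never sees a half-written file. A hypothetical sketch in Python follows; the file names, paths, and checksum convention are assumptions for illustration, not any specific vendor's tooling:

```python
# Hypothetical "transparent model update" step: IT drops a new model file
# plus its expected checksum; the client verifies and atomically replaces
# the local copy. Paths and names are illustrative.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def apply_update(incoming: Path, expected_sha: str, live: Path) -> bool:
    """Verify the pushed model file, then swap it in atomically."""
    if sha256(incoming) != expected_sha:
        return False                  # corrupted transfer; keep the old model
    staged = live.with_suffix(".staged")
    shutil.copy2(incoming, staged)    # stage next to the live file
    staged.replace(live)              # atomic when on the same filesystem
    return True
```

`Path.replace` is atomic when source and destination sit on the same filesystem, which is what keeps the update invisible to the person using the machine.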
Barriers to AI PC Adoption in Business
For many organizations, the value proposition of an AI PC remains unclear, and a couple of significant challenges still need to be resolved.
Insufficient Domain-Specific Knowledge in Public AI Models
Foundational off-the-shelf AI models don’t have unique, specialized knowledge about your specific products, services, customer support cases, or standard industry practices. These models were trained on public information and are often inadequate for the specialized tasks and content you need. They deliver generic or irrelevant outputs, which limits their usefulness.
Data Security Concerns with Custom AI Models
AI is great at generating and summarizing content, but most of the processing happens on a high-speed cloud server you don’t control. Data privacy is important to everyone, yet the standard cloud-based AI workflow sidesteps the issue entirely, especially when custom AI models are built on your business’s proprietary data. No organization wants to deal with data leakage or possible noncompliance with data security and privacy regulations.
Overcoming Adoption Roadblocks: The Closed-Loop AI Ecosystem
Organizations can overcome the inherent challenges of using AI PCs in their projects in several ways. The quick answer is to train the AI model on your data on-premises, where it’s under your control, by creating what is called a “closed-loop AI ecosystem.” It’s closed-loop because your data stays within your infrastructure and never leaves the premises. It’s also closed-loop because you train the model with your data and distribute it to employees and their internal devices; they use the model to do their work, generating more company-specific data that is fed back into the training flow.
With a closed loop, you can train your AI models more effectively. You can create a solid foundation by training publicly available large language models (LLMs), such as Llama-3, with up to 70 billion parameters. Then, you can go several steps further by customizing and fine-tuning your models using your internal domain- or industry-specific data — so outputs will be more specific and relevant.
Closed loops also make it easy to update AI models, which typically needs to happen periodically as the business generates new data. Having a closed-loop system using your on-premises servers and laptops allows your company to maximize the benefits of AI while keeping full control:
- Train models for your teams on your data using your servers.
- Your IT department sends updated AI models to each team in the background as they become available.
- Tools wait a few seconds until the active task finishes, then seamlessly swap in the new model with zero downtime.
- The tools are up-to-date, and no sensitive information ever leaves your systems.
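The swap step in the list above can be sketched with a simple lock that serializes tasks and model swaps, so a swap waits for the active task to finish and never interrupts it. This is an illustrative Python sketch, not any vendor's implementation; the model here is a stand-in string for a real runtime handle, and a production tool would more likely use reference counting than one global lock:

```python
# Zero-downtime model swap sketch: the switch happens only between tasks,
# never in the middle of one. "Model" is a stand-in for a runtime handle.
import threading

class ModelSlot:
    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()   # held while a task is running

    def run_task(self, prompt):
        with self._lock:                # tasks and swaps are serialized
            return f"{self._model}: {prompt}"

    def swap(self, new_model):
        with self._lock:                # blocks until the active task finishes
            self._model = new_model     # later tasks see the new model
```

Any `run_task` call made after `swap` returns uses the updated model, so from the user's point of view the tool is simply up to date.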
Once the value proposition is clear and intuitive tools are available, organizations will be able to see the benefits of embracing AI PCs more easily.
Challenges in Building a Traditional Closed-Loop AI Ecosystem
While the closed-loop AI ecosystem gives you the custom-trained models you want and keeps your data on-premises where others can’t access it, the biggest reason more businesses don’t simply start there is cost. Building the infrastructure to efficiently train LLMs with 70 billion parameters and beyond has always required massive investment in high-performance hardware, such as state-of-the-art GPUs.
A super-fast, all-GPU server cluster can complete complex tasks in just minutes. But that cluster needs at least four servers, each with eight high-end GPUs—and can cost between $500,000 and $1 million. Even for large enterprises, it’s hard to justify that expense for a pilot program or before knowing whether the AI tools can help your business. And for many other small and medium businesses, those costs are entirely out of reach.
An organization must not only afford the high-end equipment to create a closed-loop ecosystem but also have skilled personnel to manage and maintain it.
Now There’s Another Way for Businesses of All Sizes
Several years ago, Phison engineers wanted to build a closed-loop AI ecosystem for internal use but were told by CEO K.S. Pua that they would need to find another, less expensive route to AI training. The engineering team devised a novel approach that allows the SSD to act as a memory extension, reducing the reliance on expensive GPUs. Where it once might have taken four servers and 32 high-performance GPUs to train an AI model (at a cost of up to $1 million), Phison’s solution made it possible to get the same valuable results with a single workstation and eight GPUs, at a much-reduced cost of $30,000 to $50,000. The tradeoff for the cost reduction is time: instead of 40 minutes, a training session completes in about four hours. For most organizations starting out in AI, this is a reasonable compromise that moves a pilot project from a high capital expense to a low departmental expense.
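The tradeoff described above is easy to quantify using the article's own figures (taking the high end of each range): roughly a 20-fold reduction in capital cost in exchange for a roughly six-fold longer training run.

```python
# Cost/time tradeoff using the figures quoted above (high end of each range).
cluster_cost_usd = 1_000_000      # 4 servers x 8 high-end GPUs
workstation_cost_usd = 50_000     # single workstation, 8 GPUs, SSD extension
cluster_minutes = 40
workstation_minutes = 4 * 60

print(cluster_cost_usd / workstation_cost_usd)   # 20.0 (times cheaper)
print(workstation_minutes / cluster_minutes)     # 6.0 (times slower)
```

For a nightly retraining cycle, a four-hour run that finishes before morning is effectively as useful as a 40-minute one, which is why the compromise is reasonable for pilots.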
The solution, now called aiDAPTIV+, worked so well at Phison that the company realized it could change the AI training landscape and serve as a transformative complement to AI PCs in business environments of all sizes. With the ability to affordably achieve a private closed-loop AI ecosystem with a workstation and AI PCs, more organizations than ever can begin to reap the incredible benefits of emerging AI technologies.
This article is part of The New Stack’s contributor network.