As generative AI advancements continue transforming operations and processes at breakneck speed, organizations are at a pivotal moment.
A recent McKinsey report from May 2024 revealed that 65% of respondents now use generative AI (GenAI) regularly in at least one business function — almost double the figure from ten months prior.
Yet, while some companies are reaping early rewards, others are wrestling with implementation complexity where the rules are still being written. Given this Wild West environment, the challenge is clear: With GenAI poised to accelerate the work done by people across every industry, business leaders must choose how to harness its potential effectively.
According to IBM’s May 2024 findings, 62% of CEOs are willing to take greater risks in AI adoption than their competitors, with half (51%) admitting that the fear of falling behind is driving them to invest in some technologies before they even have a clear understanding of their value. Given this context, businesses face a deluge of decisions on how best to adopt and operationalize GenAI, from selecting the right infrastructure providers and models to managing the organizational change that comes with them.
In this rapidly shifting environment, here are key lessons we have learned that can help leaders make critical decisions about how to put their GenAI investments to the most effective use.
Generative AI Adoption Trends
Once a budget has been allocated, CTOs and application developers must consider the many GenAI providers available. Decision-makers must evaluate each model or infrastructure provider’s benefits, risks, and price performance, as GenAI/LLM tools are not created equal. Furthermore, industry giants like Google and Amazon are constantly updating and innovating their models. Each model has different strengths, capabilities, and specialties, with varying levels of scalability and customizability. Companies in different sectors will have unique needs, and different departments within companies will have specific requirements.
Gartner’s 2024 survey found that utilizing GenAI embedded in existing applications (e.g., Microsoft’s Copilot for 365 or Adobe Firefly) is the primary method of fulfilling GenAI use cases. This is followed by customizing GenAI models with prompt engineering (25%), training or fine-tuning bespoke GenAI models (21%), or using standalone GenAI tools like ChatGPT or Gemini (19%). But how does one decide, for example, among OpenAI, Anthropic, and Meta’s open-source Llama?
Evaluating Generative AI Models for Business Use
We recommend moving methodically, but not so methodically that you become paralyzed. A model evaluation team at an enterprise, or a single evaluator at a smaller organization, should explore which models are best optimized for the organization’s specific applications and queries. Ideally, the team will gain a practical understanding of the different models’ efficacy and efficiency. The evaluation team should also anticipate missteps and remain open to trying different models after a trial, since GenAI models continue to improve.
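To make this concrete, here is a minimal sketch of the kind of harness an evaluation team might use: it runs the same domain-specific prompts through several candidate models and records each output and its latency for side-by-side human review. The prompt set, the candidate names, and the stubbed model callables are illustrative assumptions; in practice, each entry would be wired to the provider SDK under evaluation.

```python
import time
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EvalResult:
    model: str
    prompt: str
    output: str
    latency_s: float

def evaluate_models(prompts: List[str], models: Dict[str, Callable[[str], str]]) -> List[EvalResult]:
    """Run every prompt through every candidate model and time each call."""
    results: List[EvalResult] = []
    for name, call_model in models.items():
        for prompt in prompts:
            start = time.perf_counter()
            output = call_model(prompt)  # the real provider SDK call would go here
            results.append(EvalResult(name, prompt, output, time.perf_counter() - start))
    return results

if __name__ == "__main__":
    # Stubbed "models" so the sketch runs on its own; swap in real SDK calls.
    candidates = {
        "model-a": lambda p: f"[model-a answer to: {p}]",
        "model-b": lambda p: f"[model-b answer to: {p}]",
    }
    domain_prompts = [
        "Summarize the key risks described in this bond portfolio note.",
        "Explain the difference between book yield and market yield.",
    ]
    for r in evaluate_models(domain_prompts, candidates):
        print(f"{r.model} | {r.latency_s:.3f}s | {r.output[:60]}")
```

Keeping the prompt set fixed across trials makes it easy to rerun the comparison whenever a provider ships an updated model.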
How Much Prompt Engineering Is Necessary?
When selecting an approach to integrating GenAI tools, leaders must consider how much time and how many resources to devote to training their models. Pre-trained models designed for general-purpose tasks need minimal prompt engineering. Organizations in highly regulated sectors like financial services, energy, or healthcare will find that their use cases require significant prompt engineering to create guardrails for their models. Those involved in content creation, marketing, and customer service will require lighter training to ensure responses are professional, unbiased, and aligned with branding strategies. Financial services and healthcare firms, in particular, must ensure the technology works for their use cases and customer needs.
Most companies will need at least some level of prompt engineering to tailor the tool to their domains. Simply put, the better you train the GenAI, the better the results. It is entirely possible that the majority of industries will opt for bespoke models tailored to their needs rather than off-the-shelf solutions. Already, energy companies’ primary strategy (60%) for developing GenAI is to build their own models or significantly customize existing ones. In financial services and healthcare, 47% are doing the same.
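To illustrate what prompt-engineered guardrails can look like in practice, here is a minimal, hypothetical sketch for a regulated domain: user questions are wrapped in a constrained system prompt, and a simple post-check flags answers for human review. It assumes the OpenAI Python SDK (v1.x); the model name, scope rules, and blocked phrases are placeholders, not a production compliance control.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Guardrail instructions an analyst would iterate on for a regulated domain.
GUARDRAIL_SYSTEM_PROMPT = """You are an assistant for an investment-operations team.
Answer only questions about portfolio accounting, reporting, and reconciliation.
Do not give personalized investment advice or recommend specific securities.
If a question is outside this scope, say so and stop."""

# Phrases the post-check flags for human review (illustrative only).
BLOCKED_PHRASES = ["you should buy", "guaranteed return", "insider"]

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative
    )
    answer = response.choices[0].message.content or ""
    if any(phrase in answer.lower() for phrase in BLOCKED_PHRASES):
        return "This response was withheld and routed to a human reviewer."
    return answer

print(ask("How do I reconcile accrued interest on this bond position?"))
```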
Balancing Domain Expertise With AI Technology
At Clearwater Analytics, we work with global financial institutions, constructing prompts rigorously to ensure the models grasp the granular context of queries. Dedicated prompt analysts are invaluable at this stage for companies tackling complex use cases in sectors like finance, technology, or energy. These analysts evaluate model responses and refine prompts to ensure the models produce the proper outcomes. Human oversight still plays an essential role in deploying AI tools.
We have solved two or three strong use cases for our customers using GenAI. However, we recognize the journey is ongoing. Companies must continually build more use cases for their operations and customers to fully extract value from GenAI agents. In our early experiments with multi-agent workflows, the agents were slow and underperformed, but we kept iterating on them. We found that the wider the scope, the more problematic these agents become.
Advantages of Small Language Models (SLMs)
If you narrow the scope of the models, they become more responsive and more competent at answering specific questions. For example, if I am building an investment management GenAI workflow or customer use case, I don’t need 90% of what large language models are trained on. I require narrower agents that deeply understand the domain relevant to investment managers, and I will need an AI professional with domain expertise in financial services to get there. The biggest reasons small language models (SLMs) will proliferate are affordability and resource savings.
Highly tuned LLMs require billions of parameters and so much training data that model builders are running out of text on the internet. Astonishing, right? Training these LLMs demands extensive time and substantial energy consumption, as evidenced by Microsoft’s deal to restart the infamous Three Mile Island nuclear power plant and the soaring market cap of NVIDIA and its GPUs. Most companies cannot afford the massive infrastructure investments needed for such models. This is why venture capitalists are funding SLM providers, as seen in Arcee’s recent $24 million Series A funding.
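As a rough illustration of how narrowing scope plays out in code, the sketch below routes questions to a narrow domain agent and declines everything else. The keyword router, the stubbed agent, and the domain are assumptions for illustration; a real deployment would back the agent with a small, domain-tuned model rather than a string template.

```python
from typing import Callable, Dict

# Hypothetical narrow agent; in practice this would wrap a small, domain-tuned model (an SLM).
def investment_agent(question: str) -> str:
    return f"[investment-domain answer to: {question}]"

def out_of_scope(question: str) -> str:
    return "This assistant only answers investment-operations questions."

# A deliberately narrow router; the keywords are illustrative, not exhaustive.
ROUTES: Dict[str, Callable[[str], str]] = {
    "portfolio": investment_agent,
    "bond": investment_agent,
    "reconciliation": investment_agent,
}

def route(question: str) -> str:
    for keyword, agent in ROUTES.items():
        if keyword in question.lower():
            return agent(question)
    return out_of_scope(question)

print(route("What drove the duration change in this bond portfolio?"))
print(route("Write me a poem about the ocean."))
```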
Integrating Synthetic and Curated Data for SLMs
Even though SLMs are trained on far less data, providers still wrestle with data selection because models are only as good as their underlying datasets. Consequently, providers spend significant time figuring out which datasets to use for training. However, startups are stepping in to create narrowly tuned synthetic datasets that companies can use to train models with pinpoint precision for their unique needs.
A suite of companies is developing tools across every layer of the GenAI stack to enhance its efficiency and performance. This further supports the idea that organizations should remain flexible and not become overly reliant on any specific model provider, whether that is OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini. Each provider will evolve independently, so it’s essential to ensure your infrastructure allows for the easy swapping of models as needed.
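One common way to preserve that flexibility is to hide each provider behind a single, minimal interface so swapping models becomes a configuration change rather than a rewrite. The sketch below assumes the official openai and anthropic Python SDKs and uses placeholder model names; it illustrates the pattern rather than a complete abstraction layer.

```python
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIChat:
    """Adapter over the OpenAI SDK (v1.x); the model name is a placeholder."""
    def __init__(self, model: str = "gpt-4o-mini"):
        from openai import OpenAI
        self.client = OpenAI()
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

class AnthropicChat:
    """Adapter over the Anthropic SDK; the model name is a placeholder."""
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):
        import anthropic
        self.client = anthropic.Anthropic()
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

def build_model(provider: str) -> ChatModel:
    """Swapping providers is a configuration change, not a code rewrite."""
    return OpenAIChat() if provider == "openai" else AnthropicChat()

model = build_model("openai")
print(model.complete("Summarize yesterday's reconciliation exceptions."))
```

Keeping prompts and evaluation data outside the adapters also makes it easy to rerun the same tests whenever a provider releases a new model.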
Keeping Flexibility in GenAI Adoption
Now is the time for organizations to embrace the generative AI boom. By actively engaging with this transformative technology while remaining vigilant about potential pitfalls, companies can effectively navigate the complexities of AI adoption.
Start small, pilot use cases, and embrace a culture of iteration and learning — the journey to maximize GenAI’s potential is just beginning and unfolding before our very eyes.