When it comes to implementing AI, sometimes starting small is the best course of action. That is what many IT leaders are discovering as the AI market, and enterprise AI strategies, continue to evolve.
Over the past 18 months, many enterprises have piloted and built solutions using large language models (LLMs) such as GPT-4 through Azure OpenAI, while weighing the benefits of digital assistants like Microsoft Copilot. Now, however, IT leaders are finding that purpose-built small language models (SLMs) and other AI technologies have their place as well, offering advantages that include lower deployment costs and fewer hallucinations.
Both Apple and Microsoft recognize the promise of small AI. In April, Apple released eight small language models designed to run on mobile devices, while Microsoft debuted its Phi-3 family of small language models.
SLMs and other traditional non-LLM AI technologies have a wide range of applications, particularly for organizations with specialized needs, says Dave Bullock, CTO of UJET, a contact-center-as-a-service provider experimenting with small language model AI. SLMs can be trained to perform a specific task with a relatively small amount of data, giving organizations complete control over how that data is used.
Low barriers to entry
Even better, Bullock says, organizations can trial a small language model AI for next to nothing, instead of spending millions of dollars to build their own LLM or paying monthly licensing fees for one.
Hugging Face offers dozens of free and open-source AI models that businesses can customize to their needs, either by renting GPU capacity from a provider or by using GPUs they already own. And while AI expertise in LLMs remains rare, most software engineers can use readily available resources to train or fine-tune their own small language models, he says.
"You might want to just spin up some GPUs in the cloud and have them long enough to train, or you might already have a GPU in your video game machine," he says. "The barrier to entry could be extremely low."
About 90% of Insight Enterprises’ clients use LLMs for their AI projects, but the company’s chief data officer and CTO, Carm Taglienti, believes that a shift toward smaller, more specialized models is imminent.
Taglienti recommends LLMs when a client wants to experiment with AI, but for certain workloads he sometimes steers them toward traditional AI tools instead. LLMs are more effective than small AI models at tasks such as document summarization and creating marketing material, but they are often more expensive and time-consuming to configure for specific use cases.