Is prompting the only way to use LLMs? A closer look at how we use LLMs to build intelligent applications for clients.

I use LLMs regularly. But I use prompting occasionally. 

So, is prompting the only way to leverage the power of LLMs?

When I speak to marketers, journalists, podcast interviewers, and people who have used ChatGPT in some capacity, they tend to think that prompting large language models (LLMs) is the primary way to use AI to accomplish various automation tasks.   

However, in my work with clients, when it comes to designing and building out scalable, intelligent applications, many of the projects we’ve tackled use LLMs…but often not through prompt engineering. 

Even if we do use prompts, it’s often not for the purpose of completing the final task.

Let me tell you three ways we use LLMs in our work with clients and how you, too, can think about ways to leverage LLMs that go beyond prompting.

1/ Training Data Generation

Many of the startup clients we work with have little to no data, and the larger companies often have data in an unusable format or have not started collecting data for their intended AI application.

However, to build scalable domain-specific applications, we need data…lots of good quality data. 

As a simple example, to train a model that performs company-specific categorization, such as determining whether a given user question fits one of 10 company-specific topics, we need hundreds of examples of question-topic pairs.

  • “What is the budget for cancer research for the next fiscal year?” → Research budget
  • “What are the company’s 2024 business goals?” → Business strategy
  • “When do I get put on a ‘performance improvement plan’?” → Danger zone

Example of question-topic pairs

For these questions, you can quickly prompt an LLM to obtain accurate categorization. But the more specific the topics become, or the more sophisticated and domain-centric the questions become, the harder it is for LLMs to get things right without adequate domain knowledge. Out of the box, we often see 60-70% accuracy on domain-specific categorization problems. 

Prompting ChatGPT to classify questions

So what do we do? We fine-tune the LLMs to improve their accuracy. 

This is where training data becomes crucial. 

Unfortunately, we may not have access to a large number of human-curated datasets. 

To address this data scarcity problem, we use LLMs to expand the training set through question expansion, rephrasing, and other creative techniques. At times, we can go from 100 examples all the way to 500-1,000 rows of data, depending on the domain. 

Why this works

Although this approach can sometimes overproduce and the examples can seem repetitive, it works for us because we have the luxury of reviewing the generated training data and eliminating poor-quality examples before using it to build our downstream models. 
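The expansion loop can be sketched as follows. Here, `rephrase_with_llm` is a hypothetical placeholder: in a real project, it would call an LLM API prompted to paraphrase each question, while the canned templates below only keep the sketch runnable. Each generated variant inherits the topic label of its seed question, which is what makes the output usable as training data.

```python
def rephrase_with_llm(question: str, n: int = 3) -> list[str]:
    """Placeholder for a real LLM call prompted to paraphrase `question`
    n different ways. The canned templates stand in for model output."""
    templates = [
        "Could you tell me: {q}",
        "I'd like to know the following: {q}",
        "Quick question: {q}",
    ]
    return [t.format(q=question) for t in templates[:n]]

def expand_dataset(pairs: list[tuple[str, str]], n: int = 3) -> list[tuple[str, str]]:
    """Grow (question, topic) pairs with paraphrases that inherit
    the original topic label."""
    expanded = list(pairs)
    for question, topic in pairs:
        for variant in rephrase_with_llm(question, n):
            expanded.append((variant, topic))
    return expanded

seed = [("What is the budget for cancer research?", "Research budget")]
augmented = expand_dataset(seed)
# 1 seed pair + 3 paraphrases = 4 rows, pending human review
```

The human review step mentioned above would happen on `augmented` before any fine-tuning, discarding repetitive or off-topic paraphrases.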

Over to you: How you can leverage LLMs to speed up AI development

Similarly, if you have highly domain-specific tasks and LLMs don’t seem to be cutting it, don’t use LLMs to complete the task directly. Instead, use them to generate training data, which you can then use to fine-tune a suitable LLM to complete the intended tasks accurately, or to build simpler ML models that achieve the same. 

2/ Computing Similarities

Many of the “AI models” our clients require are really “AI solutions” rather than a single AI model. By that, I mean it’s a combination of approaches (ML, traditional NLP, intelligent software engineering) coming together to solve a specific problem.

For example, if you take a recruiting tool that will recommend potential candidates given a job description, you will often need more than a single AI model under the hood. 

There will be some sophisticated software engineering paired with AI components in different areas, such as in scoring potential candidates and augmenting the input data with more information. In this context, one common area where we often leverage LLMs is computing similarities or relatedness. 

For example, if you take the problem of assessing how related these two texts are:

  • “What is Lena’s work experience?” 
  • “Lena is a machine learning engineer with over 15 years of experience in Python…” 

This requires us to compute how close the answer is to the question. We have used numerous LLMs for similar tasks with varying degrees of success depending on the domain. 

Why this works

LLMs are excellent for this task because they understand the “semantics” of the texts used for comparisons. Further, you can leverage many smaller, open-source LLMs to effectively accomplish the task without relying on paid APIs or the billion-parameter LLMs. We often get comparable results with smaller LLMs with much less hassle when it comes to productionization.
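In practice, this usually means embedding both texts with a model (a small open-source sentence encoder works well) and comparing the vectors with cosine similarity. The sketch below uses tiny 4-dimensional toy vectors standing in for real embeddings, which would typically have hundreds of dimensions; the similarity math itself is the same either way.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical
    direction, 0.0 = unrelated, negative = opposing)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for encoder output on the question,
# a relevant answer, and an unrelated passage.
question_vec  = [0.8, 0.1, 0.3, 0.5]
answer_vec    = [0.7, 0.2, 0.4, 0.4]
unrelated_vec = [-0.5, 0.9, -0.1, 0.2]

relevant = cosine_similarity(question_vec, answer_vec)
off_topic = cosine_similarity(question_vec, unrelated_vec)
# The relevant answer scores higher than the unrelated passage
```

A production system would rank candidate answers by this score and apply a threshold tuned on domain data.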

Over to you: How to use LLMs strategically and increase the chances of automation success

If you have complex intelligent automation tasks that you’re looking to accomplish, you can leverage LLMs strategically in different areas to make your intelligent automation shine. 

For example, in creating a solution that crafts customized responses to customer emails, you might first use an LLM to categorize incoming emails, so you can customize how each group of emails is processed rather than applying vanilla logic to ALL emails. Then, within each pipeline, you could retrieve the relevant data needed to address the issues in the email (a non-AI step) and use an LLM to generate a polished response that encapsulates the retrieved data, ready to be reviewed by your service agents.

The bottom line is that instead of expecting LLMs to complete the entire automation task, use them only where needed for maximum control of your automation and a much higher chance of success in production systems. 

3/ Search 

Building powerful search applications is an area where we get a lot of interest. While many companies rely on simple keyword searches to answer customer questions, find company-specific documents, and so on, there’s a lot we can do to enhance the quality of search results by leveraging the power of AI, including LLMs. 

For example, we can better understand the intent of the user’s query using LLMs. Are users searching for “apple,” the electronic item, or “apple,” the fruit? Are users looking to “book,” as in “book travel,” or “book,” as in “a book to read?” Understanding the user’s intent helps us get users the right answers or the right navigation paths for their tasks.

Why this works

As LLMs have excellent language capabilities, they can help disambiguate the meanings of queries, improve user queries, augment queries, and retrieve semantically related documents. The best part is that you can use smaller open-source LLMs and get excellent results, keeping things fairly easy to productionize. 
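A minimal intent-disambiguation step might look like the sketch below. The `classify_intent` function is a hypothetical placeholder: a real version would prompt a small LLM with the query and the candidate intents, whereas the keyword rules here only keep the example runnable and show how context terms resolve an ambiguous head word like “apple.”

```python
def classify_intent(query: str) -> str:
    """Stand-in for an LLM intent classifier. A real version would
    prompt a small open-source LLM with the query and the candidate
    intents (product, food, travel, reading) and parse its answer."""
    q = query.lower()
    if "macbook" in q or "iphone" in q or "charger" in q:
        return "product"
    if "recipe" in q or "pie" in q:
        return "food"
    if "flight" in q or "trip" in q:
        return "travel"
    return "reading"

# "apple" and "book" alone are ambiguous; surrounding terms resolve them
classify_intent("apple pie recipe")       # food
classify_intent("apple macbook charger")  # product
classify_intent("book a flight to Rome")  # travel
```

Once the intent is known, the search backend can route the query to the right index or ranking strategy.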

Over to you: How you can use LLMs to improve search

Look into areas where search or any type of retrieval or question answering is crucial to your customers or employees. Are you able to find what you need most of the time? 

Do you see obvious areas where the search engine or question-answering system misses the mark? Perhaps it did not understand your intent accurately, or a particular intent is not currently supported. It could also be that the retrieved results are irrelevant to the query. These are clear opportunities for improvement with more intelligence. Of course, these opportunities need further investigation through quantitative approaches, but it’s a starting point. 


As you’ve seen in this article, contrary to how most people think we use LLMs, in our development work where we architect comprehensive AI solutions, we don’t typically prompt and re-prompt LLMs to solve an entire automation problem. 

We use LLMs as needed, strategically placed to maximize the quality of the final results and to increase the chances of solutions being put into production. 

I’ve given you three areas to consider for how you can do the same. Let me know if you’ve found an area where you think one of the above suggestions can work.

That’s all for now!

Keep Learning & Succeed With AI

  • JOIN OUR NEWSLETTER, AI Integrated, which teaches you how to successfully integrate AI into your business to attain growth and profitability for years to come.
  • GET 3 FREE CHAPTERS of our book, The Business Case for AI, to learn practical AI applications, immediately usable strategies, and best practices to be successful with AI. Available as: audiobook, print, and eBook.
  • GET A 1:1 INITIAL CONSULT to learn how to move your AI initiatives forward, develop a strategic roadmap, educate leaders, and more. Use strategies you could apply immediately.

Not Sure Where AI Can Be Used in Your Business? Start With Our Bestseller.

The Business Case for AI: A Leader’s Guide to AI Strategies, Best Practices & Real-World Applications. By: Founder, Kavita Ganesan

In this practical guide for business leaders, Kavita Ganesan, our CEO, takes the mystery out of implementing AI, showing you how to launch AI initiatives that get results. With real-world AI examples to spark your own ideas, you’ll learn how to identify high-impact AI opportunities, prepare for AI transitions, and measure your AI performance.
