How to bring LLMs into your customer service support (and should you?)

Natalie Smithson
AI enthusiast | Tea addict | Focused on using AI assistants to win the working week

You’ve likely played around with the most popular LLMs (Large Language Models) like OpenAI’s GPT-4, Microsoft’s Bing, Google’s Bard, and Inflection AI’s Pi, asking questions, generating content, and getting to know their behaviour and the kinds of things they can do to help you. That’s fun, but how exactly can you use LLMs to level up your customer service support?

Are LLMs safe to use?

We explain all of this and more, so you can break into LLMs and advanced AI without a hitch.

TL;DR

  • There are a few ways you can start using LLMs for customer service, depending on your goal and the resources you have to create and maintain an AI assistant alongside it.
  • Some of the largest corporations are trying to build their own technology system that includes an LLM, but this isn’t necessary for customer service leaders simply looking to up their game.
  • You can pay for an AI chatbot that’s linked to a specific LLM, but there’s often no way to check or edit information it sends out to customers, which could result in misinformation, ruin your customer experience, and damage your brand.
  • You can create an advanced AI assistant with LLM capability built in, plus natural language processing, a human in the loop, and integration with your business systems.
  • If you choose a platform with LLM built in, ask for a demo and to see evidence that LLM responses can be edited and controlled, and take advantage of a free trial to check it suits your needs.

What is an LLM?

A Large Language Model (LLM) is a specific form of AI that can understand and generate text, images, video, code, and more. LLMs learn from gigantic sets of data and are able to show impressive understanding of human language across different contexts, languages, and subject areas, but, like humans, they aren’t perfect and can make mistakes.

LLMs are being used in all sorts of ways to help all kinds of businesses across different industries work faster and smarter. Marketers can use LLMs to help create content and images, HR officers can use them to create learning materials and documents, and software engineers can use them to write code.

Next, let’s explore why they’re such a fantastic help for customer service teams too.

How you can benefit from using LLMs for customer service support

LLMs are being used in conjunction with advanced AI chatbots to deliver instant, automated support to customers 24 hours a day.

For a while now, customer service leaders have used AI assistants to give consistent, accurate, personalised support to customers on a scale that’s not possible when you rely solely on human capability. That’s not a criticism of support agents; there are many opportunities for customer service teams to benefit from using AI, by reducing stress and increasing productivity. The most successful outcome comes from agents and AI working together, and the most advanced AI assistants of today are helping us in ways we’ve never seen before. Adding LLM capability supercharges the entire support setup.

(Illustration: an AI assistant sits alongside a human to support a customer.)

LLMs are transforming customer service thanks to their superior capacity to sift through unfathomable amounts of data and still pick up on intricate details ― in seconds. They make the AI we use today smarter, faster, and the most responsive it’s ever been. For customer service leaders, it’s a quick way to get ahead of competitors. At least, the ones still offering customer support the old-fashioned way.

Linking up your support service with LLMs

There are a few different ways you can start using LLMs, but which option you choose depends on what you want to achieve and the resources you have to create and maintain an AI assistant alongside it. You can think of the AI assistant as the customer-facing helper and the LLM behind the scenes as one of the tools the AI assistant can use to help respond to a customer’s request.

DIFFICULT: Create a complete technology system

Technology companies that have been working in AI for many years are building their own LLMs, but this takes decades of experience, a dedicated team of technical leaders working as one, and investment running into the millions. This route isn’t for customer service leaders simply looking to up their game, although the biggest corporations are starting to give it a go.

Bloomberg, the finance experts, built their own LLM using 40 years’ worth of their own data. If you’re curious about what it takes to do this, you can watch their Head of Machine Learning Strategy, David Rosenberg, speak at the Toronto Machine Learning Summit about the “staggering amount of data”, millions of hours of AI training, and millions of dollars it took to create a relatively small LLM.

And don’t forget, once you have your LLM, you’ll also need an AI assistant, and if you want it to work seamlessly with your customer support team and genuinely help customers, you’ll need more tools for it to use alongside the LLM.

RISKY: Use a chatbot with an LLM link (“wrapper”)

You can pay for an AI chatbot that’s linked to a specific LLM (usually ChatGPT) to make the experience appear more conversational, but it’s simply two different systems working side by side. The chatbot is a wrapper around the LLM: it takes questions from customers, then passes them to the LLM to find answers.

The problem is, there’s often no way to check or edit the information the LLM comes back with before it goes out to your customers, and that’s not a sensible long-term option for support leaders. We’re already seeing PR disasters at companies that, likely on bad advice, plugged their AI chatbot directly into an LLM: with a little prompting, it can be made to say almost anything.
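
For illustration only, here’s a minimal sketch of what a wrapper amounts to, using OpenAI’s Python client as an example (the model name and system prompt are placeholders, not a recommendation):

# A bare wrapper: the customer's message goes straight to the LLM and the
# LLM's raw reply goes straight back out, with no review step in between.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def wrapper_chatbot(customer_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a helpful customer service agent."},
            {"role": "user", "content": customer_message},
        ],
    )
    # No stored answers, no human in the loop, no guardrails: whatever comes
    # back, hallucinations and prompt injections included, reaches the customer.
    return response.choices[0].message.content

Everything hinges on that one system prompt holding up, which is exactly the weakness the DPD story below illustrates.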

DPD sadly suffered a case like this when they upgraded their AI chatbot and a customer was able to encourage it to criticise them. Since LLMs have no concept of what’s right and wrong for your brand, a disgruntled customer was able to get the chatbot to say the courier company is “slow, unreliable, and their customer service is terrible.” This kind of thing can be avoided when there’s a human in the loop (HITL) approving content, and when you use an AI chatbot platform that stores consistent responses for when they’re needed and has strict prompt engineering processes in place for when an LLM takes over.

EASY: Launch an AI assistant with LLM capability securely built in

You can create an advanced AI assistant with LLM capability built in, where use of the technology is managed by an experienced AI provider. As experts in the technology, they’ll make sure your AI assistant only ever draws on an LLM where it’s most beneficial, and you’ll have the option to approve all content it generates before it goes out to your customers.


How to know if LLMs are suitable for your needs and safe to use

There’s been a lot of hype around the sudden emergence of LLMs, sparked largely by the explosion of interest in ChatGPT, which uses GPT-4. The technology itself has undeniably impressed, shocked, and excited people, but despite its remarkable promise has also scared and worried them.

It didn’t take long for people to realise LLMs hallucinate, which means they simply make things up if they don’t know the answer. This caught out a couple of New York lawyers who’ve since been fined for putting together a flawed legal brief using information they gathered from ChatGPT.

Naturally, stories like this provoke fear, but like any new technology, it entirely depends on how you use it and what you use it for as to how much you get out of it.

Model LLMs on your best customer support agents

LLMs are modelling what they learn from us, so whatever you put into your AI assistant should be what you want to come out when it responds to customers. Where the general LLMs you’ve already played around with have learnt largely from the internet, organisations are quickly (and sometimes painfully) realising AI is most useful when it’s trained by your own industry experts and securely locked down.

You want an AI assistant that represents your company in the way your best agents do every time a customer leaves a conversation happy. Not only that, it has to be genuinely useful. An LLM might quickly find the answer to a question, but to truly support your customer service team as well as your customers, you want the AI assistant to be able to answer queries from start to finish, so agents can get on with more unusual, sensitive, or high-value work.

Conversation with LLM only:
Customer: Can I book my child in for swimming lessons?
AI assistant (LLM finds the answer): Yes, we offer swimming lessons for children aged 5-10.

Conversation with LLM + HITL + API (a human in the loop has added an API integration with the booking system):
Customer: Can I book my child in for swimming lessons?
AI assistant: Yes, we offer swimming lessons for children aged 5-10. Would you like to book in now? [No, thanks] [Yes please]
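
The API integration is what lets the assistant complete the journey rather than just answer the question. Here’s a rough sketch of that last step, using a hypothetical booking endpoint and response field (neither belongs to a real service):

# The same answer, plus an API call so the assistant can finish the job.
import requests

BOOKING_API = "https://api.example.com/swimming/bookings"  # hypothetical endpoint

def swimming_lessons_reply(book_now: bool, child_name: str | None = None) -> str:
    answer = "Yes, we offer swimming lessons for children aged 5-10."
    if not book_now:
        # An LLM-only assistant stops at the answer; the API lets us offer the next step.
        return answer + " Would you like to book in now?"
    booking = requests.post(BOOKING_API, json={"child_name": child_name}, timeout=10)
    booking.raise_for_status()
    reference = booking.json().get("reference", "pending")  # assumed response field
    return f"{answer} You're booked in. Your reference is {reference}."

In practice, the human in the loop decides when the assistant should offer the booking step and which systems it’s allowed to call.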

Mastering secure LLM integration

LLMs are new to public use and constantly getting better, but you’ll need to pay close attention to who supplies you with the technology and what guardrails they have in place to protect your data ― meeting robust security and compliance standards is non-negotiable.

Consider, too, how all of the technologies you use hang together to enhance the customer experience, and check that whoever’s responsible for updating the AI assistant at your organisation (your AI administrator) is regularly reporting on progress. Get that right and you’re instantly ahead of the curve.

“Using NLP, LLM and API together results in trustworthy, reliable and easy to manage AI assistants. NLP does a great job at identifying AI requests, LLM is the AI administrator’s copilot, helping to train it and produce content from lots of different sources, and APIs allow your AI assistant to do stuff, like book an appointment or access information. Where LLMs do pick up much of the leg work for the AI administrator, we keep a human in the loop to prevent misinformation through hallucinations. Using reliable tech like NLP alongside LLM, and APIs to add the real value, your AI assistant is then secure and scalable.”
~ Matthew Doel, CEO at EBI.AI
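
To make that routing a little more concrete, here’s a minimal sketch with simple stand-ins for the NLP classifier and the LLM call (every name here is illustrative, not a specific vendor’s API):

# NLP-style intent matching first, pre-approved answers where possible,
# and an LLM draft that waits for human approval otherwise.
from queue import Queue

# Responses your team has written and signed off
APPROVED_ANSWERS = {
    "opening_hours": "We're open 9am to 5pm, Monday to Friday.",
    "swimming_lessons": "Yes, we offer swimming lessons for children aged 5-10.",
}

review_queue: Queue[str] = Queue()  # human-in-the-loop approval queue

def match_intent(message: str) -> str | None:
    """Stand-in for a real NLP intent classifier: simple keyword matching."""
    text = message.lower()
    if "swimming" in text:
        return "swimming_lessons"
    if "open" in text or "hours" in text:
        return "opening_hours"
    return None

def draft_with_llm(message: str) -> str:
    """Stand-in for an LLM call that drafts a reply from your own content."""
    return f"[Draft for review] Suggested reply to: {message!r}"

def handle_customer_message(message: str) -> str:
    intent = match_intent(message)
    if intent is not None:
        return APPROVED_ANSWERS[intent]        # consistent, pre-approved answer
    review_queue.put(draft_with_llm(message))  # nothing generated goes out unchecked
    return "Thanks for your question! Our team will get back to you shortly."

In a real setup, the API layer sits alongside this routing, so an approved answer can also trigger an action like making the booking shown earlier.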

The do’s and don’ts of using LLMs for customer support

If nothing else, remember this:

DO use NLP

Always use NLP alongside LLM, so your AI assistant understands everything your customers want and only uses LLM to fill a gap where it can’t find an answer.

DO use guardrails

Use an AI provider that applies guardrails as standard, with enterprise-grade security throughout, to protect your data and customer information.

DON’T rely solely on LLM

Never rely solely on general LLMs because they don’t provide everything you need to make your AI assistant successful and aren’t tailored to your business.

DON’T leave LLM content unchecked

Never plug your AI assistant directly into an LLM; make sure you have full control over any content it produces and sends out to your customers.

You can use LLMs to transform your customer service, but use them safely, and if you don’t know exactly what you’re doing, find someone who does to guide you or do it for you.

Finding the right platform

A few years ago, it would have been a long-winded, expensive and highly technical process to launch an AI assistant, but all that pain is now taken away.

If you’re looking to improve customer service, reduce call volumes, increase customer satisfaction and enhance the customer experience, you don’t need to attempt to build an LLM or even look for the right one. You just need the right platform with LLM built in that’s both easy to use and affordable, so when you’re shopping around, be sure to:

  • Ask for a demo to see exactly how it works
  • Ask to see evidence that LLM responses can be edited and controlled
  • Look at all the other features available to help you create the most reliable, helpful AI assistant
  • Take advantage of a free trial, so you can try before you buy
  • Check you can leave the platform if it’s not right for you

Launch an AI assistant with LLM + NLP + HITL + API ― in minutes

When you create an AI assistant on AI Studio, it just works. You don’t need to configure different NLP and LLM systems; HITL and API are baked in too. Just sign up for an account, add your website URL, and ten minutes later your AI assistant is ready to start learning, preparing to handle more and more customer queries from start to finish.

  • Launch in minutes
  • Train your AI assistant quickly
  • Reduce contact queries and up customer satisfaction

FAQs

What is an LLM?

An LLM, or Large Language Model, is a type of AI that learns from vast amounts of data to understand and generate human-like text, imagery, video, code, and more. This technology, such as OpenAI’s GPT-4, possesses deep language understanding, allowing for complex tasks ranging from general knowledge queries to generating creative content.

How does an LLM help with customer service?

LLMs enhance customer service by instantly providing automated, personalised responses to customer enquiries. They reduce response times, relieve manual workload, and offer scalable support while maintaining a consistent quality of service, leading to improved customer satisfaction and enhanced operational efficiency.

What’s a hallucination?

In the context of language models, a hallucination refers to the generation of information or data that wasn’t in the LLM’s training set or isn’t verifiable. Essentially, the AI ‘makes up’ an answer when it doesn’t know the correct response, which can lead to incorrect or misleading information. A good guess isn’t always helpful.

Can you create your own LLM?

Creating your own LLM is feasible, but it requires considerable resources — vast amounts of data, extensive AI expertise, and significant investment. While technology companies with a long history in AI may undertake this task, for most businesses, integrating existing LLMs into their systems is the more practical route.