Sales and marketing professionals face myriad, and unfortunately evergreen, customer engagement challenges, from providing personalized experiences and responding to customer inquiries in a timely way to maintaining consistency across touchpoints. But generative AI has emerged as an effective way to mitigate these challenges, allowing companies to build connections that satisfy and delight customers, says Shailesh Nalawadi, head of product at Sendbird.
“Generative AI lets you conversationally engage with customers, and offer intelligent, personalized and helpful responses anywhere and anytime a customer needs answers or support,” Nalawadi says.
The conversational AI of the past primarily relied on rule-based systems and predefined responses, which limited the flexibility and usefulness of customer-facing solutions. Customers were forced to figure out the right way to word a question, because bots only responded to the queries they were programmed to expect. And too often, customers would get irritated, give up, and request a human instead.
Generative AI, powered by large language models (LLMs) like the ones behind ChatGPT, is a significant step forward. These models can grasp the semantic meaning of a question rather than just looking for keywords, generate human-sounding responses, and dynamically adapt to conversational context, making conversational AI substantially more effective. The technology isn’t a silver bullet, Nalawadi warns, but it’s evolving rapidly.
One of the most effective features of an LLM is its ability to digest and accurately summarize large amounts of text. For example, Sendbird’s customer support feature, which summarizes all the conversations in a customer’s ticket, helps make agent handoff seamless. Instead of reading through weeks of troubleshooting, or skipping the backstory and frustrating the customer by asking them to repeat it, the agent has the information at hand, in plain English.
“That’s a very simple example, but it’s a huge productivity savings for the agent who receives a new ticket,” Nalawadi explains.
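As a rough sketch of how such a summarization feature could be wired up (not Sendbird’s actual implementation), the snippet below concatenates a ticket’s message history and asks an LLM for a handoff brief. The OpenAI Python client, the gpt-4o-mini model and the summarize_ticket helper are illustrative choices, not details from the article.

```python
# Illustrative sketch only, not Sendbird's implementation. Assumes the
# OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_ticket(messages: list[dict]) -> str:
    """Condense a ticket's message history into a short handoff brief."""
    transcript = "\n".join(f"{m['sender']}: {m['text']}" for m in messages)
    prompt = (
        "Summarize this support conversation for an agent taking over the ticket. "
        "List the customer's issue, what has been tried, and the current status.\n\n"
        + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example handoff: the new agent gets a plain-English brief instead of raw history.
history = [
    {"sender": "customer", "text": "My card keeps getting declined at checkout."},
    {"sender": "agent", "text": "I've reset your payment profile; please retry."},
    {"sender": "customer", "text": "Still failing, now with error code 402."},
]
print(summarize_ticket(history))
```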
Scheduling is another example. For a busy doctor’s office, appointment scheduling can be a tremendous time suck for the administrative assistants on the front line. An LLM can power a chat-based self-service experience: in a very human kind of conversation, the patient explains their needs and availability, and the AI surfaces a time, day and doctor that meets the patient’s requirements.
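A minimal sketch of that scheduling flow, under the same caveat: the model reads the patient’s free-text request alongside the calendar’s open slots and proposes a match for staff (or downstream code) to confirm. The complete wrapper, the model name and the slot format are all assumptions for illustration.

```python
# Illustrative sketch only. Assumes the OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment; swap in your own LLM client as needed.
import json
from openai import OpenAI

client = OpenAI()

def complete(prompt: str) -> str:
    """Thin wrapper around a chat-completions call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def propose_appointment(patient_request: str, open_slots: list[dict]) -> str:
    """Match a patient's free-text request against the calendar's open slots."""
    prompt = (
        f"A patient wrote: {patient_request!r}\n"
        f"These appointment slots are open:\n{json.dumps(open_slots, indent=2)}\n"
        "Reply with the single slot that best fits the patient's needs and "
        "availability, or say none fit and suggest asking for other times."
    )
    return complete(prompt)

slots = [
    {"doctor": "Dr. Lee", "day": "Tuesday", "time": "09:30"},
    {"doctor": "Dr. Patel", "day": "Thursday", "time": "16:00"},
]
print(propose_appointment(
    "I need a checkup, and I can only do late afternoons toward the end of the week.",
    slots,
))
```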
In fintech, instead of the customer having to filter and search through a long transaction history, an LLM solution can summarize that history and surface the answer they’re looking for – or even explain the state of their finances.
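The fintech case follows the same pattern, hedged the same way: hand the model a structured slice of recent transactions plus the customer’s question, and ask it to answer only from that data. Again, the client library, model and answer_about_transactions helper are illustrative.

```python
# Illustrative sketch only. Assumes the OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

def answer_about_transactions(transactions: list[dict], question: str) -> str:
    """Answer a customer's question using only their recent transaction history."""
    prompt = (
        "Here are a customer's recent transactions as JSON:\n"
        f"{json.dumps(transactions, indent=2)}\n\n"
        f"Customer question: {question}\n"
        "Answer concisely, using only the data above; say so if the data is insufficient."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

transactions = [
    {"date": "2024-05-02", "merchant": "Grocer", "amount": -82.14},
    {"date": "2024-05-03", "merchant": "Streaming service", "amount": -15.99},
    {"date": "2024-05-06", "merchant": "Payroll deposit", "amount": 2500.00},
]
print(answer_about_transactions(transactions, "What did I spend on subscriptions this month?"))
```

Instructing the model to answer only from the supplied data is one simple guard against the hallucination risk Nalawadi raises below.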
There are broader societal issues around LLMs, Nalawadi says, and every company should be aware of the ethical considerations around the technology, including data privacy, the potential for inherent bias in AI-generated content, and hallucinations, in which the AI confidently generates false or fabricated results.
“It’s important that these models are trained on diverse and representative data sets to avoid biased outputs,” he explains. “And it’s not implement-and-done — you need to monitor and fine-tune these models regularly and on an ongoing basis, to maintain accuracy and relevance.”
That includes ensuring your LLM is trained on data that’s as recent as possible; because of the costs involved in training, even the best LLMs are currently working with data that can be as much as 18 months old.
It’s also crucial to be transparent with customers when AI is part of their experience in contacting a company, he adds, and to offer an escape option for customers who aren’t comfortable conversing with an AI assistant.
“There are segments of the population, such as seniors, who may not type, or don’t have the comfort level to deal with an automated system,” he explains. “If you’re a brand that wants to be inclusive, you have to respect that some customers don’t want that option. On the flip side, there are plenty of consumers who are perfectly happy taking an asynchronous chat-based approach to getting what they need from their favorite brands. It won’t be a one-size-fits-all. It’s going to be a blend and most brands will continue to have to cater to both.”
Another essential element is human moderation. A human will always need to monitor customer-AI interactions regularly, to make sure these conversations are still meeting expectations, and be available to provide backup whenever a customer wants to escalate.
“Human communication is very nuanced, and every generation of AI will continue to get more sophisticated in its understanding of what people are saying and what people expect from them,” Nalawadi says. “It will be a continual evolution, and as that continues to happen, other capabilities will come.”
That includes major advances in multi-turn dialogues, a sophisticated conversational capability that lets a bot hold longer and more complex conversations with multiple exchanges between the participants. It requires understanding the context of each response throughout the conversation, as well as remembering what information has already been gathered. It’s fundamental to human conversations, but has been a challenge for natural language AI.
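In code, the core of multi-turn handling is simply keeping the running message history and sending it with every request, so the model can draw on earlier turns instead of re-asking. The sketch below assumes a chat-completions-style API (here the OpenAI Python client) and a placeholder model name; production systems typically add summarization or truncation once the history grows long.

```python
# Illustrative sketch only. Assumes the OpenAI Python client (openai>=1.0)
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The running history is what gives the bot its "memory" across turns.
history = [
    {
        "role": "system",
        "content": "You are a scheduling assistant. Ask for any details you still "
                   "need, and never re-ask for information the user already gave.",
    }
]

def chat_turn(user_message: str) -> str:
    """Send the full conversation so far, then store the model's reply in the history."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=history,     # the entire conversation provides the context
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("I need a dermatology appointment."))
print(chat_turn("Only Thursday afternoons work for me."))  # turn 1 context is retained
```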
“As these capabilities evolve, it will mean improved customer experiences for brands that are interested in customer engagement, increased automation of routine tasks, and maybe further integration across more and more industries,” he explains.
But that will continue to raise ethical questions, and conversations about responsible deployment will be necessary, particularly around what kind of data is considered public domain and where the line between copyright and fair use sits when machines start to ingest and recontextualize information.
“LLMs raise a bunch of questions, and it’s for the broader community of not just technologists and builders, but also government and policy folks to weigh in,” he says. “But one of the heartening things I see right now is very proactive engagement between the community developing LLMs and the regulatory authorities and the wider society.”
To learn more about the growing number of use cases for generative AI, how companies can implement solutions safely and effectively to realize productivity gains and more, don’t miss this VB Spotlight event!