Organizations across industries want a clearer picture of how AI can be adopted and implemented for real operational efficiency gains. Early successes with machine learning and other forms of AI have accelerated the availability of Large Language Models (LLMs) and Generative AI (GenAI). With that, AI-driven digital workers – referred to as autonomous worker agents (AWAs) by Forrester and as AI Digital Workers by WorkFusion – make possible use cases that, until recently, could only be imagined. The potential impact on operational efficiency is tremendous.
Recently, Peter Cousins, CTO at WorkFusion, joined Craig Le Clair, Vice President and Principal Analyst at Forrester, to discuss how the combination of people, LLMs, and GenAI enables higher degrees of automation of internal processes and improves operational efficiency and effectiveness. They also discussed the risks of using GenAI improperly and how to avoid them.
Here are some highlights from the conversation.
How GenAI is being applied to automation
Until recently, automations have run in a deterministic manner. That is, software drives an automated process in a step-by-step, linear fashion in which the code dictates exactly what will happen. A good example is an RPA (robotic process automation) bot used in customer service. It imitates a typically human function by identifying keywords and delivering a ‘canned response’ based on them. Used this way, RPA bots perform low-value operations that don’t require any thinking.
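To make the distinction concrete, a deterministic keyword bot can be reduced to little more than a lookup table. The sketch below is illustrative only; the keywords, canned replies, and Python framing are assumptions for the sake of the example, not a description of any particular RPA product.

```python
# Minimal sketch of a deterministic, keyword-driven response bot.
# The keywords and canned replies below are illustrative only.

CANNED_RESPONSES = {
    "refund": "We have received your refund request and will process it within 5 business days.",
    "password": "You can reset your password from the account settings page.",
    "shipping": "Orders typically ship within 2 business days.",
}

DEFAULT_RESPONSE = "Thank you for contacting us. An agent will follow up shortly."

def respond(message: str) -> str:
    """Return the canned reply for the first keyword found, or a fallback."""
    text = message.lower()
    for keyword, reply in CANNED_RESPONSES.items():
        if keyword in text:
            return reply
    return DEFAULT_RESPONSE

print(respond("I still have not received my refund!"))
```

If the incoming message doesn’t match a keyword, the bot has nothing to fall back on except the default reply, which is exactly the “stuck” behavior Craig contrasts with GenAI below.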
Advancing up the automation value chain, GenAI moves beyond deterministic to non-deterministic automation, characterized by a GenAI-based automation that carries a broad context of the environment in which it is operating. “Suddenly, it is absorbing data from the running process and making decisions on it. Unlike an RPA bot, GenAI knows how to not get stuck if everything does not go as planned,” said Craig. “So, it really starts to look like something different, like your robot friend that you’re at work with.”
How GenAI is changing the customer experience
Peter and Craig used the example of customer service agent support to demonstrate the impact of GenAI. They pointed out that, in most regulated environments, organizations have to capture and archive conversations with customers. “So why not take that conversation and have natural language processing make it machine readable?” asked Peter rhetorically. Answering his own question, he explained that you can go a step further: put that conversation into an LLM and obtain a concise summary of what was discussed. Then, using an RPA bot, that summary is entered into a system the next agent can access. “So, all of a sudden, you advance from traditionally having a few cryptic notes to an LLM-summarized conversation that’s giving much better context.” In this way, the customer experience is vastly improved and the agents’ workload is reduced.
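As a rough illustration of that hand-off, the sketch below assumes a generic LLM client exposing a `complete(prompt)` method; the `StubLLMClient`, the `crm` object, and the prompt wording are hypothetical stand-ins, not a specific vendor API.

```python
# Minimal sketch of the transcript-to-summary hand-off described above.
# StubLLMClient and crm.update_case_notes are hypothetical placeholders.

class StubLLMClient:
    """Placeholder; a real deployment would call an actual LLM service here."""
    def complete(self, prompt: str) -> str:
        return "Customer reported a duplicate charge; agent promised a refund within 5 days."

def summarize_conversation(llm_client, transcript: str) -> str:
    """Ask an LLM to condense an archived customer conversation."""
    prompt = (
        "Summarize this customer service conversation in a few sentences, "
        "noting the customer's issue, their sentiment, and any promised "
        "follow-up actions:\n\n" + transcript
    )
    return llm_client.complete(prompt)

def hand_off_to_next_agent(llm_client, crm, case_id: str, transcript: str) -> None:
    """Replace 'a few cryptic notes' with an LLM-generated summary on the case."""
    summary = summarize_conversation(llm_client, transcript)
    # An RPA-style step would then push the summary into whatever system the
    # next agent works in; modeled here as a single CRM call.
    crm.update_case_notes(case_id, summary)
```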
Another benefit of GenAI for the customer experience that Peter and Craig highlighted relates to the tone an organization uses when communicating with customers. They discussed the use case of receiving and responding to incoming customer emails.
Unlike deterministic automation, in which response bots often fail (or don’t even attempt) to understand sentiment, GenAI enables both understanding the sentiment and responding to it appropriately. No customer enjoys an ultra-happy robotic reply to an angry email, yet that is what they often receive from menu-driven customer service bots. Using GenAI, an automated email system can understand the customer’s tone and sentiment, then reply with a custom response that accounts for the customer’s likely mood and the tone they will find acceptable.
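A minimal sketch of that two-step pattern (classify the sentiment, then shape the reply) might look like the following, again assuming a generic `complete(prompt)`-style LLM client; the prompts and tone guidance are illustrative assumptions.

```python
# Minimal sketch of a sentiment-aware email reply, assuming a generic LLM
# client with a complete(prompt) method; the prompts are illustrative.

def draft_reply(llm_client, customer_email: str) -> str:
    """Classify the email's sentiment, then draft a reply in a matching tone."""
    sentiment = llm_client.complete(
        "Classify the sentiment of this customer email as one word: "
        "angry, neutral, or happy.\n\n" + customer_email
    ).strip().lower()

    tone_guidance = {
        "angry": "Open with an apology, acknowledge the frustration, and avoid upbeat language.",
        "happy": "Match the customer's positive tone and thank them.",
    }.get(sentiment, "Use a courteous, neutral tone.")

    return llm_client.complete(
        "Draft a reply to the customer email below. " + tone_guidance + "\n\n" + customer_email
    )
```

The design point is that each reply is generated for the specific message rather than pulled from a fixed menu of canned responses.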
They discussed how this type of GenAI changes customer service responses in ways not imagined before. Because the LLMs behind GenAI are trained on so much data, they can help an organization devise outbound messages with the proper tone and sentiment, appropriate to a wide range of communication topics. For example, some insurance companies have traditionally, and inadvertently, used the same tone in their letters to vendors as in their letters regarding death claims. That makes little sense and makes the insurer appear uncaring. Using GenAI, a team can receive a range of appropriate communication options suited to the context of any situation. “There’s a lot of opportunity here to get out of canned responses and get into a real human and more insightful type of correspondence,” said Craig.
AWAs tie it all together and keep it secure
Craig noted that most organizations have captured a lot of internal data over the years and can now use LLMs and GenAI to shape that enterprise data for multiple uses. The good news is that, unlike with standard AI prompting, GenAI allows all enterprise data to be kept within your organization’s firewall. “It’s not taking your enterprise data and training the world with it,” said Craig. “It’s staying on your side.”
As for applying that enterprise data to specific uses, he recommends three things, sketched in the example that follows the list:
- Perform output shaping
- Leverage a narrow model, like an LLM specific to the intended use and industry
- Use human-in-the-loop fact checking to review results for accuracy
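Here is a minimal sketch of how the last two recommendations could be wired together; the JSON schema, the review queue, and the `complete(prompt)` client are illustrative placeholders under the assumption of a narrow, domain-specific model, not a description of any specific product.

```python
# Minimal sketch combining output shaping with human-in-the-loop review,
# assuming a narrow, domain-specific model behind a generic complete(prompt)
# client; the schema and the review queue are illustrative placeholders.

import json
import queue

REQUIRED_FIELDS = {"summary", "risk_level", "citations"}

def shaped_output(llm_client, document: str) -> dict:
    """Constrain ('shape') the model output to a known JSON schema."""
    raw = llm_client.complete(
        "Return a JSON object with exactly these keys: summary, risk_level, "
        "citations. Analyze the document below.\n\n" + document
    )
    result = json.loads(raw)
    if set(result) != REQUIRED_FIELDS:
        raise ValueError(f"Model output did not match the expected schema: {sorted(result)}")
    return result

def process_with_review(llm_client, review_queue: queue.Queue, document: str) -> dict:
    """Route every shaped result to a human fact-checker before it is used."""
    result = shaped_output(llm_client, document)
    review_queue.put(result)  # a human reviews for accuracy before anything runs downstream
    return result
```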
“The autonomous worker automation is really pulling all of this together to make it all work,” concluded Craig.
Applying it all to the financial crime space
Peter provided an explanation for a popular GenAI use case already at work within multiple financial institutions – fending off financial crime.
Banks conduct ‘KYC’ (know your customer) reviews as part of customer onboarding and on an ongoing basis to understand who they’re dealing with. These reviews help the bank understand whether there’s a reputational or legal risk in dealing with a particular customer entity.
To reach a determination, the bank’s KYC process involves curating information from news sources, open sources on the web, and other databases, looking for signs that someone has been convicted of financial crimes or is involved in situations that make them a risk for the bank to take on or keep as a customer. Key to a proper customer review is the ability to gather data comprehensively and make risk determinations based on it.
This is a perfect use case for GenAI.
An AI Digital Worker can gather that information, read the articles, and determine whether they actually refer to the person or entity being researched. It can also determine whether the individual named in an article is merely a third party, such as a witness or a prosecutor, rather than the target. Its intelligence also lets it gauge the severity of a discovered issue, for example distinguishing a conviction for fraud from a conviction for speeding.
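As a rough sketch of those three checks (entity match, role, severity), the example below assumes a generic `complete(prompt)`-style LLM client; the prompts, labels, and dataclass are illustrative placeholders, not WorkFusion’s actual implementation.

```python
# Minimal sketch of the adverse-media checks described above, assuming a
# generic LLM client with a complete(prompt) method; prompts, labels, and
# the client are illustrative placeholders.

import json
from dataclasses import dataclass

@dataclass
class ArticleAssessment:
    is_same_entity: bool  # does the article really refer to the researched customer?
    role: str             # "target", "witness", "prosecutor", or "other"
    severity: str         # e.g. "high" for a fraud conviction, "low" for speeding

def assess_article(llm_client, customer_name: str, article_text: str) -> ArticleAssessment:
    """Ask the model the three questions an analyst would ask of a news hit."""
    raw = llm_client.complete(
        "You are screening adverse media for a KYC review of '" + customer_name + "'. "
        "Read the article below and return JSON with keys is_same_entity (true/false), "
        "role (target/witness/prosecutor/other), and severity (high/medium/low).\n\n"
        + article_text
    )
    data = json.loads(raw)
    return ArticleAssessment(
        is_same_entity=bool(data["is_same_entity"]),
        role=str(data["role"]),
        severity=str(data["severity"]),
    )
```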
Peter pointed out that AI Digital Workers not only reduce the amount of human time needed for an investigation by 60-80%, but, supplemented by LLMs, they can also reach a straight-through processing (STP) rate of up to 95%. That’s a groundbreaking automation rate when one considers the highly regulated nature of banking and how many resources such STP rates can save a bank. “This is one of the products that people really love in this automated worker market,” added Peter.
In addition to the above topics, Peter and Craig discussed more granular details of how to properly label items with AI, how today’s no-code tools make it easy to customize AI Digital Workers and autonomous worker agents, and how your organization can embrace the massive operational efficiency benefits presented by LLMs and GenAI.
Click here to view the webinar.