The ACAMS Boston chapter held its January 2024 meeting last week, and it delivered a wealth of insights into fighting financial crime in the age of AI. Fittingly, the event was titled Financial Crimes Modeling in the AI Era: Explainability, Optimization, and Governance.
The most significant takeaways arose from four areas:
- How to get started with AI
- Preparing to explain your AI to regulators
- Caring for and feeding your AI model
- Only AI innovation will reduce exposure to fraud
The panel-led, highly interactive discussion included thought leaders from management consulting, technology companies, and banks of all sizes.
How to get started with AI
This was by far the most burning question on attendees’ minds at the event, and it took up nearly half the session.
One panelist told the audience that, like them, many clients ask about GenAI and how to start. He stressed that the use case is the starting point. While scenario-based rules and machine learning (ML) have reduced false-positive counts, GenAI with large language models (LLMs) serves a different use case: it interprets the everyday language people actually use, runs it against a bank’s screening lists, and “saves so much time when combined into a package.”
A second panelist added that “the AI conversation normally starts with a simple process like adverse media monitoring and nightly batch name screening.” For such simple processes, GenAI can easily distinguish hits from non-hits and true positives from false positives, giving banks a simple, automated step that delivers strong results. He noted that banks with a domestic customer population and domestic transactions should already be using AI in this way. “You can free up analysts because 97-99 percent of your stuff is false positives. You’re wasting L1 analysts’ time by making them click ‘false-false-false’ like a drinking bird desk toy.”
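To make that concrete, here is a minimal sketch of the kind of first-pass triage the panelists described. Everything in it is illustrative: call_llm is a hypothetical placeholder for whatever LLM endpoint a bank has approved, and the disposition labels are arbitrary; anything the model is unsure about stays with a human analyst.

```python
# Illustrative sketch only: LLM-assisted triage of name-screening hits.
# `call_llm` is a hypothetical placeholder, not a real library call.

def call_llm(prompt: str) -> str:
    """Stand-in for the bank's approved LLM endpoint."""
    raise NotImplementedError("wire this to your LLM provider")

def triage_hit(customer_record: str, list_entry: str) -> str:
    """First-pass disposition of a single screening hit.

    Only a confident FALSE_POSITIVE is routed for auto-disposition;
    matches and uncertain answers always go to an analyst.
    """
    prompt = (
        "You are assisting a sanctions/name-screening analyst.\n"
        f"Customer record: {customer_record}\n"
        f"Watch-list entry: {list_entry}\n"
        "Answer with exactly one word: MATCH, FALSE_POSITIVE, or UNSURE."
    )
    answer = call_llm(prompt).strip().upper()
    if answer == "FALSE_POSITIVE":
        return "LIKELY_FALSE_POSITIVE"  # candidate for auto-close with audit trail
    return "ESCALATE_TO_ANALYST"        # MATCH or UNSURE goes to a human
```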
One bank’s head of AML compliance recommended that people “think about GenAI to do sentiment analysis. Is the adverse media really adverse or not?” This person noted that machine intelligence alone cannot translate a news article into a decision for an investigator. Instead, banks should combine GenAI sentiment analysis with natural language processing (NLP) to determine which parts of an article can actually inform a decision. “It’s not one or the other, but a combination of different aspects to make your ecosystem work.”
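One way to read that advice is as a two-stage pipeline: a conventional NLP step first confirms the article is actually about the customer, and only then does a GenAI step judge whether the coverage is genuinely adverse. The sketch below is an assumption-laden illustration, not a production design; it reuses the hypothetical call_llm placeholder from the screening sketch and a crude fuzzy name match from the standard library.

```python
import difflib

def call_llm(prompt: str) -> str:  # same hypothetical placeholder as above
    raise NotImplementedError("wire this to your LLM provider")

def mentions_customer(article: str, name: str, threshold: float = 0.85) -> bool:
    """Crude NLP step: does any span of the article resemble the name?"""
    tokens = article.split()
    width = len(name.split())
    for i in range(len(tokens) - width + 1):
        span = " ".join(tokens[i:i + width])
        if difflib.SequenceMatcher(None, span.lower(), name.lower()).ratio() >= threshold:
            return True
    return False

def adverse_media_decision(article: str, customer_name: str) -> str:
    if not mentions_customer(article, customer_name):
        return "NO_HIT"  # the article is about someone else
    verdict = call_llm(
        "Is this article genuinely adverse (crime, fraud, sanctions) for "
        f"{customer_name}? Answer ADVERSE or NOT_ADVERSE.\n\n{article}"
    ).strip().upper()
    return "REVIEW" if verdict == "ADVERSE" else "DISMISS_CANDIDATE"
```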
A panelist closed out the ‘getting started’ commentary: “Identify where you can get the most efficiency, and then ask, ‘do we have the right data to do what we want?’”
Preparing to explain your AI to regulators
A panelist cautioned compliance leaders to keep their explanation simple for both senior bank management and regulators. “Tell senior management what problems you’re trying to solve,” the panelist advised. “For example, I’m solving for daily batch screening – screening for A, B and C and putting answers back into the system.”
According to the panel, specific steps to explain your AI to senior management and regulators include the following:
- Obtain regulator support by telling them your plans and how they are different from what your bank has always done
- Follow the model risk management (MRM) framework and document everything
- Tell regulators about your risk management and risk mitigation
- Lay out a plan to back-test your model (a minimal sketch of what that can look like follows below)
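The back-testing step lends itself to a concrete illustration. The sketch below is hypothetical throughout: the model object, the alert structure, and the labels are all assumptions. The idea is simply to replay historically dispositioned alerts through a candidate model and report the numbers a regulator will ask about, especially missed cases.

```python
# Hypothetical back-testing sketch: replay analyst-dispositioned alerts
# through a candidate model and compare outcomes. `new_model.predict`,
# the alert structure, and the labels are illustrative assumptions.

def back_test(new_model, historical_alerts):
    """historical_alerts: iterable of (features, was_suspicious) pairs,
    where was_suspicious is True if analysts confirmed the activity."""
    tp = fp = fn = tn = 0
    for features, was_suspicious in historical_alerts:
        flagged = bool(new_model.predict(features))
        if flagged and was_suspicious:
            tp += 1
        elif flagged:
            fp += 1
        elif was_suspicious:
            fn += 1  # missed suspicious activity: the number regulators care about most
        else:
            tn += 1
    return {
        "detection_rate": tp / max(tp + fn, 1),
        "false_positive_rate": fp / max(fp + tn, 1),
        "missed_cases": fn,
    }
```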
One panelist provided the bottom line on explainability: “Let regulators know it’s different from what you’ve always done, and here’s how it changes the output.”
Caring for and feeding your AI model
The topic of caring for and feeding an AI model arose from a question that asked, “From an IT and business perspective, how often do I validate the model?”
Panelists advised practitioners to weigh three key drivers when deciding how often to validate their model (a drift-check sketch follows this list):
- How their data changes over time
- How the macro environment is changing
- How regulations are changing over time
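The first driver, how the data changes over time, is often quantified with the population stability index (PSI), a standard model-monitoring statistic. The sketch below, using NumPy, is one conventional formulation; the 0.10/0.25 reading thresholds are common rules of thumb, not regulatory requirements.

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between the data a model was last
    validated on and the data it sees today."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_pct = np.histogram(current, bins=edges)[0] / len(current)
    b_pct = np.clip(b_pct, 1e-6, None)  # avoid division by, or log of, zero
    c_pct = np.clip(c_pct, 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

# Common rule of thumb: < 0.10 stable, 0.10-0.25 worth watching,
# > 0.25 a strong signal that the model is due for revalidation.
```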
As an example, one panelist described a legal entity that took on a new type of business relationship just before the sanctions that followed the Russian invasion of Ukraine. Suddenly, there were 125 sanctions updates in the first year. They were initially specific to individuals, then evolved to cover ownership and other factors. Yet the model’s expectations had stayed the same. “We never thought there’d be such levels of rules such that we even had the data to figure it out. How have regulations, the macro environment, and data changed? Ask that and then look at your model to see how it must change,” the panelist stated.
Two panelists noted how such changes inevitably force changes to the care and feeding of the model. Things that will change include:
- Thresholds
- Data feeds
- Reference data
- Where IT contributes
- KPIs
- Risk indicators
- Targeted controls
One panelist advised the audience that their models cannot remain stagnant: “Your model degrades over time, so ask yourself how to keep changing it.”
Only AI innovation will reduce exposure to fraud
All panelists gave the audience a strong dose of reality around fraud. One commented, “You can’t go head-to-head against people [fraudsters]. They can hack into any system and have no ethical boundaries.”
Another stated that the only way any company, large or small, can win against dark forces is via innovation. “We automate, replicate, and eliminate. We figure out bottlenecks and get efficient. But that has to change. Instead, we must innovate,” he stated.
As an example, a panelist described a bank that tried in 2022 to find sanctioned Russian oligarchs in its systems but found none. “Automate-replicate-eliminate uncovered nothing. But looking at every org chart change since Feb 2022 and searching out new leaders in their organizations revealed who they were and where.”
Panelists noted that money launderers have learned banks’ typologies and how banks monitor for them, driving a shift from structuring to micro-structuring. “So, we need to look at more data and color points,” they advised. “It used to be countries and where money was going. But now we need KYC and CDD to identify money launderers and bad actors.”
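Micro-structuring only becomes visible in aggregate, which is why the panelists want more data points in view. The pandas sketch below is illustrative: the column names, the $3,000 per-transaction cap, and the 7-day window are all assumptions. Instead of flagging single deposits near a reporting threshold, it rolls up many small credits per customer over a sliding window.

```python
import pandas as pd

# Illustrative micro-structuring check. Column names, the per-transaction
# cap, and the rolling window are assumptions, not prescribed values.

def flag_micro_structuring(tx: pd.DataFrame,
                           window: str = "7D",
                           per_tx_max: float = 3_000.0,
                           agg_threshold: float = 10_000.0) -> pd.Series:
    """tx needs columns: customer_id, timestamp (datetime), amount (credits).

    Flags customers whose many small credits sum past agg_threshold
    within the rolling window, even though no single credit stands out.
    """
    small = tx[tx["amount"] <= per_tx_max].sort_values("timestamp")
    rolled = (small.set_index("timestamp")
                   .groupby("customer_id")["amount"]
                   .rolling(window)
                   .sum())
    hits = rolled[rolled >= agg_threshold].reset_index()
    return hits["customer_id"].drop_duplicates()
```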
All panelists extolled AI’s ability to provide a 30,000-foot view for spotting patterns across suspicious activity reports (SARs). “Think about reading SARs with a human in the loop, but with AI you can develop those patterns and links much more easily. Those big patterns are important because we are only capturing a fraction of the fraud.”
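One way to surface those cross-SAR links is simple graph analysis. The sketch below uses networkx and assumes a hypothetical upstream step (for example, an NLP or LLM pass over SAR narratives) that has already extracted entity-to-entity edges; connected components then reveal networks that no single report shows.

```python
import networkx as nx

# Illustrative link analysis over entities mentioned in SAR narratives.
# `sar_edges` is a hypothetical input: (entity_a, entity_b, sar_id)
# tuples produced by an upstream extraction step.

def find_candidate_networks(sar_edges, min_size: int = 3):
    g = nx.Graph()
    for a, b, sar_id in sar_edges:
        g.add_edge(a, b, sar=sar_id)
    # Connected components larger than a lone pair are candidate rings
    # that no single SAR, read in isolation, would reveal.
    return [c for c in nx.connected_components(g) if len(c) >= min_size]
```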
In the end, all the panelists agreed that AI will be common within bank compliance teams within five years. Just as sanctions screening tools are not a mystery to anyone, AI will soon be just another common, albeit far more effective, tool to optimize BSA/AML compliance.