Why is AI deployment not more prevalent in AML?

Artificial Intelligence is seen by some as the holy grail for improving anti-money laundering operations, but to date it has made only limited inroads. Onesimo Hernandez and Gilles Hilary, co-founders of BankCollab, explain how to go about making AI work.

The deployment of artificial intelligence (AI) in anti-money laundering (AML) departments has created a paradox. Common wisdom stresses that humans make mistakes and cost money, and hence that they should be taken out of the equation as much as possible. Further, models are viewed as clunky in many ways and are believed to generate headaches rather than solutions. In this context, AI represents a new holy grail that relies on data and algorithms to replace both. Yet, AI has made limited inroads in AML operations.

We believe that there are several reasons for this. There is no doubt, in our minds, that AI offers powerful capabilities of which financial institutions should take advantage. For example, AML departments routinely have to deal with false positives. Static rules are ill-equipped to minimize this cost, but AI can learn from human decisions and refine the portfolio of alerts. Unsupervised techniques can identify networks of customers and evaluate their riskiness as a group. In contrast, humans have a hard time solving high-dimensionality problems such as this one.
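To make the first idea concrete, here is a minimal sketch of a triage model that learns from past analyst dispositions to score new alerts. Everything here is hypothetical: the features, the synthetic data, and the decision rule are illustrative stand-ins, not a production AML design.

```python
# Sketch: learn from analyst dispositions to re-rank rule-generated alerts.
# All feature names and data below are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical alert features (e.g., amount, velocity, country risk, tenure).
n = 1000
X = rng.normal(size=(n, 4))
# Hypothetical analyst dispositions: 1 = escalated, 0 = closed as false positive.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a batch of new alerts; low scores are candidates for de-prioritization,
# subject to human review and regulatory sign-off.
new_alerts = rng.normal(size=(5, 4))
scores = model.predict_proba(new_alerts)[:, 1]
ranked = np.argsort(-scores)  # highest-risk alerts first
```

The point is not the particular classifier but the feedback loop: every disposition an analyst records becomes labeled training data, so the model's ranking of alerts improves as humans keep working.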

However, like any new technology, AI faces some challenges and suffers from limitations. Some are well-known, but they take on particular significance in an AML context. Others are more specific to this domain.

  • AI has a long way to go before being fully explainable. This remark is usually made in the context of models making a particular decision, but the concepts themselves remain alien to many decision-makers. Explaining the difference between isolation forests and neural networks to a board of directors or an auditor remains a daunting task.
  • Data is rarer in AML than in other contexts. For example, solution developers can find free and public datasets to train their models when they tackle fraud detection problems, and even more so when they consider general-interest domains such as natural language processing. This is not the case for AML applications, where it is often illegal to share documents and information with outside parties. Thus, training may be less effective but, perhaps more importantly, it is necessarily biased by this lack of generalizable data. If an algorithm only sees a tropical island, it will conclude that Quebec is also tropical.
  • AI can create fear among AML analysts that their expertise is no longer valued or even welcome. Naturally, they push back when, in fact, their expertise is extremely valuable for anticipating structural shifts before they are visible in the data.
  • Financial institutions have invested many resources in developing analytical frameworks. Regulators have invested an equal amount in validating them. Both sides are hesitant to forgo these tools for relatively new approaches.
  • AI deployment entails costs, and induces complexity. Vendors of legacy systems have limited incentives to facilitate the transition, given their entrenched position.


So, what are the solutions?

  • Fail early, fail often, fail cheaply. AI projects can be hard to deploy. By some metrics, 85% of AI projects fail, but naturally, waiting until AI has become a plug-and-play commodity will put any institution behind its peers. The solution is not to avoid early deployment but to set up the conditions that mitigate the consequences of failure and maximize the learning from it. Work incrementally and integrate your existing models into your new framework. Work with startups that are cheaper and more flexible than legacy vendors.
  • Make sure that humans and machines can collaborate. Indeed, the combination is often more effective than the exclusive use of either approach. This integration between humans and machines will also remove the need for costly process re-engineering and increase acceptance by frontline analysts. This approach will make better use of humans as kill-switches when necessary. In 1983, a Soviet colonel prevented World War III when he understood that the American nuclear missile seemingly flying toward Moscow was the by-product of a Soviet computer failure. Humans can put information in context; algorithms, not so much.
  • One day, AI may be mature enough to be powerful, autonomous, and transparent. This is not today or even tomorrow. In the meantime, the solution to AI problems may not be more AI. Streamline, use “light AI” (i.e., AI that needs less data), take advantage of robust models for the bulk of the processing, and use AI to deal with the problematic exceptions that are difficult for humans to navigate efficiently.
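The "robust models for the bulk, AI for the exceptions" pattern above can be sketched as a simple triage pipeline. The thresholds, field names, and toy scoring function are hypothetical illustrations, not real AML parameters; in practice the placeholder scorer would be a trained model.

```python
# Sketch of a hybrid pipeline: transparent, auditable rules handle the bulk
# of transactions, and a model is consulted only on ambiguous exceptions.
# All thresholds and fields are hypothetical.

def rule_screen(txn):
    """Return 'clear', 'alert', or 'exception' using simple, auditable rules."""
    if txn["amount"] < 1_000 and txn["country_risk"] < 0.3:
        return "clear"
    if txn["amount"] > 50_000 or txn["country_risk"] > 0.9:
        return "alert"
    return "exception"  # ambiguous cases go to the model / analyst

def score_exception(txn):
    """Placeholder for an AI scorer; here, a toy weighted heuristic."""
    return 0.6 * min(txn["amount"] / 50_000, 1.0) + 0.4 * txn["country_risk"]

def triage(txn, threshold=0.5):
    """Rules first; the model only sees the exceptions."""
    outcome = rule_screen(txn)
    if outcome != "exception":
        return outcome
    return "alert" if score_exception(txn) >= threshold else "clear"
```

Because the rules decide most cases, the model's footprint (and the data it needs) stays small, and the rule layer remains fully explainable to auditors.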



  • Offer transparency. This is not only true in terms of data governance but also in terms of outcomes that laypeople can understand. As we were told at school, show your steps. For example, allow outcomes to be audited repeatedly in a robust way. Make sure that decisions are explained and documented.
  • Consider full costing. Unwinding a deployed system can be hard for organizations. The cost of a product is not just what you pay on the first contract. Make sure that your vendor is not locking you into a long-term relationship with a technology you cannot easily exit. For example, while a fully integrated solution may offer some additional efficiency in the short run, it leaves you completely in the hands of your solution provider in the long run. A modular approach gives you more flexibility, now and in the future. Make sure you consider the cost over the life cycle, the cost of process re-engineering, and the need for support associated with complex systems. Simpler approaches that can be turned on and off easily will offer a lower break-even point.


The long-term future may belong to fully automatic AML systems, but hoping for a full-blown implementation of this approach right away will slow down its adoption. For now, humans and algorithms need to co-exist.


By Onesimo Hernandez and Gilles Hilary, October 28, 2020, Published on FinCrime Report
