AI: The Double-Edged Sword in AML/CTF Compliance

Artificial intelligence (AI) has enhanced financial services and improved regulatory compliance. However, AI poses an equally serious threat to ethical governance, data protection and cybersecurity, as well as to fundamental human rights.

There are over 70 proposed definitions of AI, one of which describes it as a suite of autonomous, self-learning and adaptively predictive technologies that enhance the ability to perform tasks.

Several financial institutions (FIs) in South Africa have already experienced the transformative potential of some form of AI in their organisations, while others have adopted a ‘wait-and-see’ strategy. As the value proposition of this technology gradually emerges, the risks, as well as the legal and governance challenges AI may cause (e.g. a lack of accountability or transparency, and potentially arbitrary and/or discriminatory results), should be considered.


The Need for a Different Approach

Year on year, a significant amount of market research is conducted and white papers are published on the increasingly high cost of compliance, specifically anti-money laundering (AML) and counter-terrorist financing (CTF) compliance. One of the main drivers of this high cost is the volume of manual, repetitive and data-intensive tasks. These tasks often lead to low morale in the workforce, while the sophistication and volume of financial crime threats continue to increase.

The traditional approach to tackling AML/CTF relied on highly manual processes and reactive risk-detection solutions, and offered no real-time view, proving ineffective against modern financial crime threats. Compliance functions should therefore innovate and adopt the latest technology, not only to drive efficiency and reduce cost but, more importantly, to identify new and creative ways to tackle financial crime effectively. The COVID-19 pandemic has been a catalyst for many FIs’ automation of compliance processes through robotic process automation (RPA) and machine learning.


The Potential of AI in AML/CTF Compliance

In recent years, FIs have been testing AI to assist analysts with highly repetitive AML/CTF compliance tasks and to improve the performance of the AML/CTF controls and processes used to fight crime. Key areas where AI and cognitive solutions are having an impact on current AML/CTF processes include automating data collection; enhancing client risk scoring, alert prioritisation and customer due diligence (CDD); leveraging linkage analysis; improving segmentation; and strengthening anomaly detection, either by identifying known suspicious patterns or by discovering new ones.
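
To make the anomaly-detection idea concrete, the sketch below shows how an unsupervised model might rank customer activity for analyst review. It is a minimal Python illustration using scikit-learn’s IsolationForest on synthetic data; the feature set (monthly transaction count, total value, cross-border share, cash share), the contamination rate and the alert volume are all illustrative assumptions, not any FI’s actual model.

# A minimal sketch of unsupervised anomaly detection on transaction
# features. All data below is synthetic; the features, contamination
# rate and alert volume are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per customer-month: transaction count, total value,
# share of cross-border transfers, share of cash deposits.
normal = rng.normal(loc=[30, 5_000, 0.05, 0.10],
                    scale=[10, 2_000, 0.03, 0.05], size=(1_000, 4))
suspicious = rng.normal(loc=[120, 95_000, 0.60, 0.55],
                        scale=[20, 10_000, 0.10, 0.10], size=(10, 4))
X = np.vstack([normal, suspicious])

# 'contamination' encodes the assumed share of anomalous records and
# directly drives how many alerts the model raises.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

# Lower scores are more anomalous; rank records for human review.
scores = model.score_samples(X)
alerts = np.argsort(scores)[:10]
print("Records flagged for analyst review:", sorted(alerts.tolist()))

In practice, the flagged records would feed an alert-prioritisation queue rather than drive a final decision, consistent with the human-oversight concerns discussed later in this article.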

The number of digital transactions is growing at an estimated 12.7% annually, and by 2022 an estimated 60% of global gross domestic product (GDP) will be digitised. In a digital context, the traditional verification tools used within CDD do not apply. AI is making a significant contribution to digital identity, including biometric technology, digital device identifiers, high-definition scanners and high-resolution video transmission (‘live’ remote identification and verification). There are 1.7 billion unbanked adults worldwide, and 26% of them cite a lack of identification as the primary barrier to accessing financial services. Digital identification therefore offers a further benefit: it gives individuals without traditional identification a robust means of accessing financial services, improving financial inclusion.


AI Challenges in AML/CTF Compliance

FIs must assess how AI may increase efficiencies in AML/CTF compliance processes while considering how it may create new threats or amplify existing ones. AI models must be designed and implemented responsibly and with transparency in mind, so that their capabilities and limitations are clearly understood. The most commonly cited challenges include a perceived threat of ‘redundancy’ of humans in favour of machines; bias and discrimination in decision-making and the profiling of customers; cybersecurity; data privacy; and transparency. These challenges often carry the risk of regulatory non-compliance with the General Data Protection Regulation (GDPR) in Europe and the Protection of Personal Information Act (POPIA) in South Africa, especially the provisions on automated decision-making, such as section 71(1) of POPIA.
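
A common mitigant for the automated decision-making concern, given that section 71(1) of POPIA broadly restricts decisions based solely on automated processing, is to keep a human in the loop so that the model prioritises work rather than makes the final call. The routing sketch below is a minimal illustration of that pattern; the thresholds, queue names and Alert structure are hypothetical, not drawn from any regulation or actual system.

# A minimal sketch of a human-in-the-loop gate: the model only
# prioritises alerts, while a human analyst makes every final decision.
# Thresholds, queue names and the Alert structure are illustrative.
from dataclasses import dataclass

@dataclass
class Alert:
    customer_id: str
    model_score: float  # assumed risk score in [0, 1] from an upstream model

def route_alert(alert: Alert) -> str:
    # Every outcome is a human review queue; nothing is auto-declined,
    # so no decision rests solely on automated processing.
    if alert.model_score >= 0.8:
        return "urgent_analyst_review"
    if alert.model_score >= 0.5:
        return "standard_analyst_review"
    return "periodic_sample_review"  # low scores are still sampled for blind spots

print(route_alert(Alert("C-1042", 0.91)))  # urgent_analyst_review

Routing even the lowest-scoring alerts into a periodic sample queue helps analysts detect blind spots in the model, rather than letting automated scores silently close off whole segments of activity from review.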


Current Regulations Governing ‘Ethical’ AI

Governments worldwide are investigating and implementing policies and strategies to support innovation in AI technologies. In May 2019, the Organisation for Economic Co-operation and Development (OECD) adopted its Principles on AI, the first international standards agreed by governments for the responsible stewardship of trustworthy AI. The OECD Principles on AI include concrete recommendations for public policy and strategy that, inter alia, advocate the use of AI that is innovative, trustworthy and respectful of human rights and democratic values. They also provide mechanisms for governments and policymakers to create a ‘human-centric’ approach to AI, which entails protecting the rule of law, human rights and democratic values throughout the AI lifecycle. The UK Information Commissioner’s Office (ICO) has recently produced guidance on AI and data protection, touching on, among other things, the importance of ensuring human oversight of automated decision-making where employees make important decisions using AI tools. In addition, South Africa has a number of initiatives for assessing the opportunities and risks posed by AI, such as the Intergovernmental Fintech Working Group (IFWG) and the Centre for Artificial Intelligence Research.


Conclusion

It is hard to argue against technology that enhances financial crime detection and prevention. However, where such technology is applied in a manner that lacks transparency, reduces human accountability or infringes on fundamental human rights, it should be reassessed within an FI’s risk appetite framework. Regulators expect FIs to explain how the technology they use helps them make risk-based decisions and detect and prevent financial crime.

Successful implementation of AI in AML/CTF compliance requires commitment and collaboration across multiple stakeholders: FIs, vendors, regulators and government. Collaborative efforts can reinforce wider adoption and the identification of further benefits, while also setting standards for appropriate governance and controls to manage the safe development and deployment of AI-enabled solutions.

The nature, level and applicability of AI regulation in AML/CTF compliance will need to balance various interests and considerations. An unregulated approach could have catastrophic consequences, while over-regulation may stifle innovation and hinder the detection and prevention of sophisticated financial crime schemes. With the right governance and guidelines in place, there is potential to use this technology not only to improve lives across the continent but also to tackle financial crime in the modern age.


By Ilze Calitz, January 27, 2021, published on ACAMS Today
