Cybersecurity expert says AI risks transposing human bias into automated systems for AML – warns of ‘real mess’ if automation not done properly
USING ARTIFICIAL INTELLIGENCE to help fight financial crime carries the risk of incorporating human prejudice if not done properly, one expert has warned.
Mika Linna, Head of Financial Crime and Cybersecurity at Finance Finland, said that as humans continue to develop AI systems for AML, they could unintentionally embed subtle human biases in the programming, which would then surface once those systems go live.
This bias could affect both financial decision-making and risk management, producing systems that automatically prejudge people on the basis of race, ethnic origin, sex or religion.
“If you use technology to discriminate, it’s not going to alleviate anything,” Linna said. “If some of the problems in manual processes are automated, you may end up creating a real mess.”
He spoke at the virtual Nordics Anti-Financial Crime Symposium Thursday, at a seminar examining AI as the ‘silver bullet’ in the fight against dirty money.
AI is a rapidly emerging solution for AML, notable for its ability to turn around huge volumes of work in short timeframes, and its uses in know-your-customer protocols.
While AI means different things to different people – some verging into science fiction territory – Linna spoke of it in the context of systems that merely imitate human behaviour in screening and risk assessment.
Its effectiveness has seen it rolled out as a standard tool in organisations as large as Nasdaq, which announced its own AI-based software earlier this year to analyse human decisions and screen for fraud.
“It’s a sort of inbuilt intelligence that can help us spot things we may not otherwise spot,” said Kieran Holland, Head of Technical Solutions for AML Compliance at Innovative Systems.
“It also enables us to be a lot more efficient,” he told the symposium.
Holland – alongside his fellow speakers – acknowledged the many benefits that AI can bring to the fight against dirty money. But he also warned of challenges surrounding the information it processes.
“An AI system is only as good as what you feed it, so data is extremely important,” he said. “The whole concept of ‘junk in – junk out’ is as true for AI as it is for any other system.”
Holland joined Linna in urging organisations using AI to aim for transparent, explainable and clean data so that their systems would produce quality results.
By Dan Byrne, November 12, 2020, published on AMLi