As new technologies offer a world of opportunities and benefits in many sectors, so too do they offer new avenues for organized crime. It was true at the advent of the internet, and it’s true for the growing field of artificial intelligence and machine learning, according to a new joint report by Europol and the United Nations Interregional Crime and Justice Research Institute.
At its simplest, artificial intelligence refers to human-designed systems that, within a defined set of rules, can absorb data, recognize patterns, and duplicate or alter them. In effect, they are “learning” so that they can automate increasingly complex tasks that in the past required human input.
However, “the promise of more efficient automation and autonomy is inseparable from the different schemes that malicious actors are capable of,” the document warned. “Criminals and organized crime groups (OCGs) have been swiftly integrating new technologies into their modi operandi.”
AI is particularly useful in the increasingly digitized world of organized crime that has unfolded amid the novel coronavirus pandemic.
“AI-supported or AI-enhanced cyberattack techniques that have been studied are proof that criminals are already taking steps to broaden the use of AI,” the report said.
One such example is procedurally generated phishing emails designed to bypass spam filters.
Despite the proliferation of new and powerful technologies, a cybercriminal’s greatest asset remains the mark’s propensity for human error, and the most common types of cyber scams are still built around so-called social engineering, i.e., taking advantage of empathy, trust, or naivete.
While social engineering scams once had to be tailored by hand to specific targets or audiences, artificial intelligence allows them to be deployed en masse, using machine learning to adapt to new audiences.
“Unfortunately, criminals already have enough experience and sample texts to build their operations on,” the report said. An innovative scammer can introduce AI systems to automatically and quickly detect which victims are falling for a scam and which are not, allowing them to focus only on those potential victims who are easiest to deceive. Whatever false pretense a scammer chooses to draw the target in, an ML algorithm would be able to anticipate the target’s most common replies to it, the report explained.
Most terrifying of all, however, is the concept of so-called “deepfakes.” With little source material, machine learning can be used to generate incredibly realistic human faces or voices and superimpose them onto any video.
“The technology has been lauded as a powerful weapon in today’s disinformation wars, whereby one can no longer rely on what one sees or hears,” the report said. “One side effect of the use of deepfakes for disinformation is the diminished trust of citizens in authority and information media.”
Flooded with increasingly AI-generated spam and fake news that build on bigoted text, fake videos, and a plethora of conspiracy theories, people might feel that a considerable amount of information, including videos, simply cannot be trusted. The result is a phenomenon termed ‘information apocalypse’ or ‘reality apathy.’
One of the most infamous uses of deepfake technology has been to superimpose the faces of unsuspecting women onto pornographic videos.
By David Klein, November 27, 2020, published on OCCRP