Deepfakes: The most serious AI crime threat?
Thu 20 Aug 2020 | Joe Bloemendaal

A University College London (UCL) study recently ranked deepfakes as the most worrying application of artificial intelligence for crime or terrorism. We asked Joe Bloemendaal, head of strategy at digital verification company Mitek, to break down the report's findings.
Why does UCL deem fake audio and video content so pernicious? And what is the significance of its report?
UCL’s report provides evidence that deepfake technology is no longer limited to the dark corners of the internet. There are plenty of apps that allow anyone to convincingly replace the faces of pop stars and A-list celebrities with their own, even in videos, turning the results into viral social media content and putting smiles on people’s faces. However, fake audio and video content has been ranked as the most serious threat because of the growing number of ways it could be used in crime, from discrediting a public figure to impersonating someone to access their bank account. The technology has the potential to create widespread distrust in society.
Despite the rise in sophistication and accessibility of deepfakes, is their criminal application still a distant threat?
We know that worldwide, synthetic identity fraud – based on deepfaked or synthetic images and videos submitted as part of an application – is already growing. In the US, it is the fastest-growing tactic for fraudsters, who are now turning to synthetic identities to open new accounts.
These identities can be completely fake, or a unique amalgamation of false information and stolen or modified personal identifying information (PII), whether hacked from a database, phished from an unsuspecting person or bought on the dark web. Because the impact on those whose PII has been used is limited, this kind of fraud often goes unnoticed for longer than traditional identity fraud.
The rise of deepfakes and synthetic, AI-enabled technology means it is becoming easier for fraudsters to generate realistic live images or videos of people, bringing these synthetic identities to life and enabling serious levels of fraud.
Are enough people aware of the dangers?
Awareness of the deepfake threat is spreading. Last year, the BBC called out a hyper-realistic deepfake video showing Boris Johnson and Jeremy Corbyn endorsing each other in the UK general election. Earlier this year, a deepfake installation was set up at the World Economic Forum in Davos, demonstrating how advanced the technology is becoming.
What types of crime do deepfakes empower?
Synthetic identities are used by extremely patient and careful fraudsters. They are carefully crafted and becoming harder to spot. Often they start out with a small application (e.g. a furniture loan), which creates a ‘thin-file’ profile with the credit reporting agency. From this, fraudsters then have records of a ‘real’ history, which can be used to open an account, for example.
While this is not currently happening at scale, this type of fraud can expose real people to scams, leading to considerable financial losses and extensive debt obligations. If a completely new synthetic identity has been given a loan, that translates into a straightforward loss for the financial institution in question.
What can be done to thwart these crimes?
Looking at standard behaviour models will no longer cut it; this, coupled with the unreliability of PII, means companies are turning to biometric technologies to fight fraud. Accurately verifying the identities of new applicants at the onboarding stage will be critical.
Customers submit a selfie and a picture of their ID document in an app or online form, and within seconds, AI algorithms analyse and compare the submissions. They assess them for potential forgery or alterations; confirm that the selfie shows a ‘live’ person; and verify the two against one another. This digital identity verification is effective against synthetic identities for several reasons, but one stands out: most fraudsters don’t want to use a picture of their own face to commit fraud, and they don’t have the tools to fake one.
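As a rough illustration of how such a pipeline fits together, here is a minimal Python sketch. It is not Mitek's product or API: the function names, thresholds and scores are hypothetical placeholders standing in for trained computer-vision models or a vendor's verification service.

```python
# Illustrative sketch only: the function names, thresholds and scores below
# are hypothetical placeholders, not Mitek's actual product or API.
from dataclasses import dataclass


@dataclass
class VerificationResult:
    document_authentic: bool  # ID shows no signs of forgery or alteration
    selfie_is_live: bool      # selfie passes the liveness check
    faces_match: bool         # selfie matches the photo on the ID

    @property
    def approved(self) -> bool:
        # All three checks must pass for the applicant to be onboarded.
        return self.document_authentic and self.selfie_is_live and self.faces_match


def verify_identity(id_image: bytes, selfie_image: bytes) -> VerificationResult:
    """Run the three checks described above: document forensics,
    liveness detection and face matching."""
    return VerificationResult(
        document_authentic=score_document_forensics(id_image) > 0.9,
        selfie_is_live=score_liveness(selfie_image) > 0.9,
        faces_match=score_face_match(id_image, selfie_image) > 0.8,
    )


# Stubbed model calls: a real system would delegate these to trained
# computer-vision models or a third-party verification service.
def score_document_forensics(id_image: bytes) -> float:
    return 0.95  # placeholder confidence that the document is genuine

def score_liveness(selfie_image: bytes) -> float:
    return 0.97  # placeholder confidence that the selfie is a live capture

def score_face_match(id_image: bytes, selfie_image: bytes) -> float:
    return 0.91  # placeholder similarity between selfie and ID photo
```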
Even if AI can be developed to spot the deepest of fakes, is it not just a matter of time before higher-fidelity efforts bypass detection?
As deepfake technology becomes more sophisticated, we’d recommend setting up multiple layers of security to fight this type of fraud. This involves:
- Using physical biometrics at appropriate stages – this means having users prove their identity with a selfie and a snapshot of a government-issued ID, and having AI analyse whether the selfie is of a live person and whether it matches the ID. This is becoming more important as two-factor and knowledge-based authentication are no longer sufficient for proving someone’s identity.
- Deploying link analysis – since fraudsters often reuse PII, deploy software that checks for these overlaps and helps spot suspicious identities (see the sketch after this list).
- Having a back-up line of defence – we still need human experts to assist in digital onboarding. The human eye is still better than computers at spotting fraud in some forms of paper ID, and there will be exceptions that the technology has not yet been trained to recognise. This is where agent-assisted solutions come into play.
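To make the link-analysis idea concrete, here is a minimal sketch assuming applications are plain records; the PII field names (ssn, phone, address) are illustrative only, and a production system would work against far larger datasets and fuzzier matches.

```python
# Minimal sketch of the link-analysis idea. Applications are plain dicts
# and the PII field names (ssn, phone, address) are illustrative only.
from collections import defaultdict

def find_pii_overlaps(applications: list[dict]) -> dict[tuple, list[str]]:
    """Group application IDs by shared PII values so that reused
    identifiers (e.g. one SSN under several names) stand out."""
    seen = defaultdict(list)
    for app in applications:
        for field in ("ssn", "phone", "address"):
            if app.get(field):
                seen[(field, app[field])].append(app["id"])
    # Any PII value attached to more than one application is worth flagging.
    return {key: ids for key, ids in seen.items() if len(ids) > 1}

applications = [
    {"id": "A1", "ssn": "123-45-6789", "phone": "555-0100", "address": "1 Main St"},
    {"id": "A2", "ssn": "123-45-6789", "phone": "555-0199", "address": "9 Oak Ave"},
    {"id": "A3", "ssn": "987-65-4321", "phone": "555-0100", "address": "2 Elm Rd"},
]
print(find_pii_overlaps(applications))
# {('ssn', '123-45-6789'): ['A1', 'A2'], ('phone', '555-0100'): ['A1', 'A3']}
```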
As deepfakes are created with the help of increasingly sophisticated AI technologies, it’s important that businesses have the technological prowess to match the threat. A thorough digital check of an ID document, using the right identity verification solution, can help in the fight against the growing threat synthetic identities pose. Combining this with a mobile phone selfie, a liveness test and an expert eye that cross-checks all elements of a person’s identity provides a robust system.
How do we balance the individual’s right to privacy while developing digital ‘fingerprints’ that detail a user’s habits and every digital move?
Businesses must ensure that they work with digital identity verification providers of the highest calibre, who have the right processes and procedures in place to ensure personal information is processed, stored and protected in strict compliance with regulation, while actively implementing global best practice.