Taking a deeper look at deepfake technology

It’s only a matter of time before cybercriminals attempt to use deepfakes as a common way to defraud businesses

In an age when Instagram filters and photoshopping have become standard, it has never been harder for organisations to verify a person’s true identity online. Cybercriminals are deliberately using advanced technology to pull the wool over the eyes of organisations and defraud them.

Deepfakes have recently emerged as a legitimate and scary fraud vector. A deepfake uses artificial intelligence (AI) to superimpose or synthesise imagery, replacing one person’s likeness with another’s and closely replicating both face and voice. Essentially, a deepfake can impersonate a real person, making them appear to say words they have never spoken.

Worryingly, the number of deepfake videos online doubled in less than a year, from 7,964 in December 2018 to more than 14,000 just nine months later. Deepfake technology is something that organisations need to be aware of as it’s likely that fraudsters will weaponise this technology to commit cybercrime, adding yet another string to their bow.

A 20th-century method of deception in use today

Disinformation has long been used by those in power to deceive and manipulate the masses. Looking back at the 20th-century Soviet Union under Joseph Stalin, we can see early precursors of the deepfake: doctored imagery and staged narratives were widely used to misinform the public. How? Described as a great actor, Stalin not only deceived comrades and enemies alike, he also used deception to purge the Soviet Union of millions of influential figures, such as writers and scientists. On top of that, he fabricated a “society” that worked to protect his façade, going so far as to stage fake trials, elections and trade unions in support of his own narrative to a heavily media-censored nation.

With the help of the internet, realistic deepfakes have become more widespread, from use in pornography to the infiltration of popular culture and other nefarious practices.

Many will remember the widely shared deepfake videos of Barack Obama, which brought mass awareness of how the technology could be used to spread misinformation on a substantial scale. Going beyond a mere hoax, such uses of the technology came under intense scrutiny as a result. Looking at the proliferation of fake political news on social media sites today, we can see parallels between Stalin’s methods and these more modern forms of deception.

The consequences of deepfakes in today’s society

Deepfakes are likely to continue making a mockery of, and posing a threat to, politicians in the coming decade, but it is equally likely that modern enterprises will find themselves under threat. Last autumn, the UK chief of an energy company was tricked over the phone into transferring £200,000 to a Hungarian bank account by someone using deepfake audio technology. He believed he was speaking to his boss, but the voice had in fact been impersonated by a fraudster, who succeeded in walking away with the money.

Wherever substantial amounts of capital sit, one must expect repeated efforts to swindle the gatekeeper out of that money. Consequently, organisations should be on alert for these deceptive fraudsters.

Take, for example, the financial services sector. Banks store vast amounts of customer data, and a breach of this information, or of customer assets, can have detrimental effects on all involved. When data is breached, consumers can lose assets if cybercriminals gain access to their accounts and drain their funds. Naturally, the consumer loses faith in the institution and is unlikely to recommend the bank to a friend or colleague. But arguably far worse is the impact on the organisation itself: it runs the risk of having to replace customer funds, incurring penalties and losing public trust in its service, any of which has the potential to lead to the demise of a company.

It’s no wonder that banks and other organisations alike are on the lookout for robust cybersecurity solutions, both to minimise the probability of a breach by keeping fraudsters at bay and, when one inevitably occurs, to reduce the likelihood of account takeover by fortifying the authentication process.

Fighting technology with technology

Many banks now require a government-issued ID and a selfie to establish a person’s digital identity when creating new accounts online. However, if a criminal wanted to impersonate an individual, they could utilise deepfake technology to create a spoof video to bypass the selfie requirement.

That’s why more sophisticated identity verification solutions have embedded certified liveness detection to sniff out advanced spoofing attacks, including deepfakes, and ensure that the remote user is physically present.

It would be tempting to conclude that liveness detection settles the question of whether a face is live or a spoof video. However, that is only partly true. Most liveness detection solutions on the market today ask the user to perform eye movements, nod their head or repeat words or numbers, and these “active” checks can be circumvented with deepfakes. Unless the identity verification provider’s liveness detection is certified, tested by a lab accredited by the National Institute of Standards and Technology (NIST), imposters could still trick the system.

Level 2 certification through iBeta’s quality assurance testing means that an authentication solution can distinguish pre-recorded or synthesised videos from real selfies, and so withstand a sophisticated deepfake attack. Having identity verification technology certified to this degree could be the difference between a secure service and a successful act of cyber-espionage.
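To make the gating logic concrete, the checks described above can be sketched as follows. This is a minimal illustration only, not any vendor’s actual API: the signal names (`document_valid`, `selfie_matches_document`, `liveness_passed`, `liveness_certified`) are assumptions for the sake of the example.

```python
from dataclasses import dataclass


@dataclass
class VerificationResult:
    """Hypothetical signals returned by an identity verification check."""
    document_valid: bool           # government-issued ID passed its checks
    selfie_matches_document: bool  # face in the selfie matches the face on the ID
    liveness_passed: bool          # liveness detection judged the user physically present
    liveness_certified: bool       # the liveness check itself is independently certified


def approve_onboarding(r: VerificationResult) -> bool:
    """Reject unless every signal passes, including *certified* liveness.

    A selfie match alone can be defeated by a deepfake video, which is why
    the liveness requirement is not optional in this sketch.
    """
    return (r.document_valid
            and r.selfie_matches_document
            and r.liveness_certified
            and r.liveness_passed)


# A deepfake may fool the face match, but without a passing liveness
# check the application is still rejected.
spoof = VerificationResult(True, True, False, True)
print(approve_onboarding(spoof))  # False
```

The point of the sketch is simply that the face match and the liveness check are independent gates: passing one without the other is not enough.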

As institutions digitise more of their processes to accommodate a growing online market, they will need to ensure they have the most robust security measures in place to protect valuable assets and their reputations.

While deepfakes may so far be used largely for political satire or revenge porn, it’s only a matter of time before cybercriminals attempt to use them as a common way to defraud businesses. It is critical that organisations utilise the latest AI-driven, face-based biometric technologies with certified liveness detection to mount the strongest defence possible. Only then will companies reclaim control, stand a chance of preventing unauthorised access to their systems, and mitigate the harmful impact of identity theft and account takeover fraud.


Sponsored insights by Jumio
