Cyber criminals are continuously working to find new and more successful ways of duping unsuspecting individuals into handing over their money. Recent years have seen a huge increase in the use of ‘deepfakes,’ a type of identity fraud that leverages artificial intelligence to create frighteningly convincing fake images, videos and voice recordings.
Deepfakes are not a new threat, but as technology advances, this type of fraud is becoming increasingly convincing and difficult to identify. Most deepfakes are created for entertainment purposes and the technology has been used in several feature films to ‘resurrect’ deceased cast members on-screen. A quick Google will give you a taste of what can be achieved with deepfake technology, such as a widely circulated video apparently showing Mark Zuckerberg giving a speech about data “stolen” by Facebook. It was created by artists Bill Posters and Daniel Howe to demonstrate the potential power of fake news.
How is a deepfake created?
To create a convincing deepfake video, AI (artificial intelligence) software is used to analyse the facial expressions of the chosen subject. This information can then be used to generate video footage, superimposing the subject’s face onto someone else. Fairly convincing results can even be created live during a video call. The poor connection and grainy image we often experience during a video conference mean the fake doesn’t even have to be perfect to work.
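The analyse-then-superimpose process described above is often built on a shared encoder with one decoder per person. The sketch below is a deliberately toy illustration of that architecture only: the dimensions are tiny, the weights are random rather than trained, and the names (`swap_face`, `decoder_b`) are invented for the example. A real system works on high-resolution face crops and learns these weights from thousands of video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: real systems use face crops (e.g. 256x256 pixels), not 8 numbers.
FACE_DIM, LATENT_DIM = 8, 3

# One shared encoder compresses any face into a compact code capturing
# expression and pose...
encoder = rng.normal(size=(LATENT_DIM, FACE_DIM))

# ...while each person gets their own decoder, trained to reconstruct that
# person's appearance from the shared code.
decoder_a = rng.normal(size=(FACE_DIM, LATENT_DIM))  # renders person A
decoder_b = rng.normal(size=(FACE_DIM, LATENT_DIM))  # renders person B

def swap_face(frame_of_a: np.ndarray) -> np.ndarray:
    """Encode a frame of person A, then decode it with person B's decoder.

    The output keeps A's expression/pose (held in the latent code) but
    renders it with B's appearance -- the core of the face-swap trick.
    """
    latent = encoder @ frame_of_a   # analyse: expression/pose -> code
    return decoder_b @ latent       # synthesise: code -> B's face

frame = rng.normal(size=FACE_DIM)   # stand-in for one video frame of A
fake = swap_face(frame)
print(fake.shape)                   # one full output frame per input frame
```

Because the encoder is shared, whatever it learns about expressions transfers between the two decoders; that is why one minute of source footage per person can be enough for a passable live swap.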
But what about the voice, you say? Freely available software such as Lyrebird AI allows you to impersonate anyone’s voice. “Record 1 minute from someone’s voice and Lyrebird can compress her/his voice’s DNA into a unique key. Use this key to generate anything with its corresponding voice,” says the Lyrebird website. None of this software is illegal or requires a special licence to use, meaning anyone can access it. And if you don’t have the technical know-how to create a deepfake yourself, you can pay someone else a couple of hundred dollars to do it for you.
Deepfakes in business
But what are the threats to financial institutions and other businesses from this type of technology? Admittedly, the chances of someone using a deepfake to impersonate your CEO to extract funds are slim. But, despite the odds, this has already happened – and using a far less sophisticated approach than some of the high-profile examples you will see posted online for entertainment purposes. In October 2019, it was reported that a top executive in a UK-based energy company had been duped into transferring £200,000 to cyber fraudsters. The perpetrators used AI voice technology to mimic the executive’s boss, who was based at the German headquarters. The executive was instructed to move the funds immediately to a Hungarian bank account and was told they would be returned later. They never were.
This example demonstrates several factors fraudsters rely upon to ensure a deepfake fraud is successful:
- Authority: In the above example, the senior executive was the head of the UK arm of the company. Despite this, he still felt unable to question the instructions he was given from his boss. Fraudsters rely on professional hierarchies and social norms to predict people’s behaviour patterns. Authority is a useful tool for criminals because people are often reluctant to question it.
- Urgency: If we are told something is urgent (especially by a superior) it immediately changes the way we react and inhibits our ability to think clearly. Instead of following the normal steps required, we rush – focussing our attention on getting the task done, rather than how or why we are doing it in the first place.
- Doubt: Nine times out of 10, the phone call is genuine. Fraudsters rely massively upon the ‘benefit of the doubt.’
- Distance: Contacting someone by email or phone creates a barrier, allowing inconsistencies or discrepancies to be overlooked or explained away. The slight difference in the voice of the energy company’s German executive might have been put down to the phone line, background noise, or illness. A difference in tone in an email could be because the person was in a rush.
Deepfakes and COVID-19
Even if you had never used a video conferencing app before the COVID-19 pandemic, you will no doubt be more than familiar with the software today. This influx of inexperienced, regular users of apps such as Zoom, Skype and Microsoft Teams has provided an unending supply of data for cyber criminals to exploit. Zoom came under fire recently when it was revealed that thousands of private recordings of Zoom calls could be easily accessed online by doing a simple search of cloud data storage. The recordings weren’t those held by the platform itself, but were files that had been stored locally by the individual users – an option that was given to Zoom users once a recording had been created. Nonetheless, this still means hours of footage is available for fraudsters to access online and potentially use to create deepfake videos.
In times of crisis, financial institutions will always be prime targets for cyber criminals looking to cash in. For this reason, cyber security is something all firms should be investing in right now. But do they fully understand the risks posed by AI and deepfakes? “The answer has to be, no, absolutely not,” says Graeme McGowan, Director, Cyber & Security Risk at the Optimal Risk Group. “There is a complete lack of education, training and awareness about cyber risk in general, even in some of the UK’s largest financial institutions. AI now allows cybercriminals to take their phishing to the next level, meaning that we can’t rely on a phone call or even a video call to verify authenticity. The solution is to get a proper HR training regime installed in every institution. Every company in every sector should have one for every worker, from the cleaner to the CEO.”
Cyber criminals use a combination of factors to break down security barriers in firms and improve the success rates of deepfakes, including social engineering. This involves manipulating individuals to divulge sensitive information which can then be used to access internal systems, or convince people to transfer funds or data. Keeping employees happy is a huge part of mitigating against this risk. But ensuring sensitive information is always treated as such should be the first step. “I was delivering a talk at an event recently and two women who worked at one of the UK’s largest insurance firms approached me afterwards,” says McGowan. “They said they were a bit concerned about cyber security in their office because they had a notebook where everyone’s usernames and passwords were kept for convenience. This kind of mistake is frighteningly stupid – but also terrifyingly common. Once criminals have access to a senior executive’s email account, they can easily impersonate that individual and gather enough data to extract funds or cause untold damage. And that’s before we’ve even entered deepfake territory. All they need is a disgruntled member of staff with an axe to grind and they could easily get hold of that notebook.”
Combatting deepfake fraud
AI technology is advancing all the time, so deepfakes are only going to become more convincing. For businesses, this means upping the ante when it comes to verification methods – even if it seems ‘silly’ or unnecessary. “Encouraging employees to feel comfortable in getting verification from a senior member of staff is essential,” says McGowan. “Measures such as calling someone back if they have phoned you to ask for a funds transfer, or having a series of security questions which only that particular person would be able to answer, are vital. Of course, many of the examples we have seen of deepfake voice calls have involved something that doesn’t usually happen, i.e. a call out of the blue from a senior executive asking for an urgent payment. Firms need to create strict protocols that are always adhered to, so that a call like this would immediately trigger alarm bells. If going against protocol is a common occurrence, it increases the likelihood of a deepfake’s success. Being organised and always following the same process and verification measures – without fail – will mean any deviation from normal practice is easily picked up. If you’re disorganised and regularly make exceptions to the rules, how can you expect your staff to know the difference?”
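The protocol McGowan describes – never act on an unsolicited call alone, always call back and ask the agreed security questions – can be made concrete. The sketch below is purely illustrative: the class, field names and rules are invented for this example, not taken from any real firm’s policy, but they show how a "no exceptions" payment check might be encoded.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str           # who asked for the transfer
    channel: str             # "inbound_call", "email", "callback", ...
    verified_callback: bool  # did we ring them back on a number from the directory?
    passed_questions: bool   # did they answer the agreed security questions?

def approve(req: PaymentRequest) -> bool:
    # Rule 1: an unsolicited inbound call is never sufficient on its own --
    # this is exactly the scenario the energy-company fraud exploited.
    if req.channel == "inbound_call":
        return False
    # Rule 2: both verification steps are mandatory, with no exceptions.
    # Consistent process is what makes any deviation stand out.
    return req.verified_callback and req.passed_questions

# A rushed "urgent" call from the boss fails the check by design.
urgent_call = PaymentRequest("CEO", "inbound_call", False, False)
print(approve(urgent_call))  # False
```

The point of encoding the rules this rigidly is the one McGowan makes: if exceptions are never granted, an out-of-protocol request is itself the alarm bell.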
Always one step ahead
Active learning technology allows cybercriminals to boost the success rates of phishing emails and other such scams by gathering data on what works and what doesn’t, then using this information to adapt their approach. “They are always one step ahead,” says McGowan. “This is why it is so important to keep educating staff about the technological capabilities cybercriminals now have. Share examples of incidents that have occurred, remind staff of the protocols they must follow and don’t just limit cyber risk training to new recruits; it should be ingrained in everyday, business-as-usual activities so that being aware of these types of risk is second nature.”
The sheer speed with which technology is progressing is what makes deepfakes so concerning. All of the examples you will find online are now old – and better, more convincing examples are being created all the time. In a documentary called The Weekly for The New York Times, investigative journalist David Barstow followed a group of AI engineers and machine-learning specialists in their quest to create the perfect deepfake. Their abilities and the capabilities of the technology they were using – which could easily fall into the hands of criminals – were as impressive as they were alarming. “It’s astonishing the progress a handful of smart engineers were able to make in a matter of months,” he said. “Teams of computer scientists around the world are racing to invent new techniques to quickly identify manipulated audio and video. The bad news [is] some deepfake creators are incorporating the machine-learning algorithms behind those countermeasures to make future deepfakes even harder to detect.” Barstow believes that even global web platforms like WhatsApp and Facebook are “woefully unprepared” to help users spot deepfakes. If businesses are to avoid falling foul of this growing threat, they need to start taking it seriously now – before it’s too late.
Common security mistakes that could result in deepfake fraud:
- Sharing too much information on social media platforms
- Not questioning authority – assuming that because the boss calls, you should bypass all normal security protocols
- Leaving cyber security to the IT people – it should be part of every employee’s induction and ongoing training
- Not looking after staff welfare – disgruntled employees who have access to sensitive information, internal systems, usernames or passwords
- Not securing personal devices properly – especially in light of increased home working during the COVID-19 pandemic
- Trusting someone you have only met remotely
High profile examples of deepfakes:
American actor, writer and producer, Jordan Peele, created a deepfake video of Barack Obama making outrageous statements and openly criticising US president Donald Trump to demonstrate the potential power of fake news on politics.
Artists Bill Posters and Daniel Howe made a convincing deepfake video of Facebook CEO Mark Zuckerberg, where he appears to tell CBSN news that he owns “billions of people’s stolen data…all their secrets, their lives, their futures.”
Speaker of the US House of Representatives, Nancy Pelosi, has been targeted several times by individuals looking to damage her reputation through fake videos. These videos are not technically deepfakes, but the speed and pitch of her voice have been altered to make it sound like she is drunk. It is still not known who is responsible for creating them.
Catalan artist Salvador Dalí was brought back to life in 2019 as an exhibition “host” by the Dalí Museum in Florida. The interactive installation included 45 minutes of footage over 125 videos, which allowed more than 190,000 different combinations depending on visitor responses.
Last year, a suspicious video of the President of Gabon after a long absence sparked rumours of a deepfake, resulting in an attempted military coup. This example demonstrates how even just the knowledge that deepfake technology exists can make us question whether what we are seeing is real.
In July 2020, it was discovered that published British journalist Oliver Taylor, who claimed to have studied at the University of Birmingham in the UK, was in fact a fabricated persona fronted by a deepfake image. Alarms were raised when an article by Taylor was published in US Jewish newspaper The Algemeiner, criticising activist couple Mazen Masri and Ryvka Barnard and accusing them of being “known terrorist sympathisers.” A fabricated photograph and an account on question-and-answer site Quora are the only record of his existence. Despite this, “Taylor” had several articles published in newspapers, including the Jerusalem Post.