While the world is still working out how to spot and combat fake news, another form of misinformation has emerged: the deepfake, which in its most sophisticated form is created by artificial intelligence systems.
As CSO Online describes, deepfakes are created by “generative adversarial networks” – in other words, two machine learning systems pitted against each other. One creates a fake video; the other tries to spot that it’s a fake. The person making the fake keeps the two systems training against each other until the detector can no longer spot the forgery.
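The adversarial loop CSO Online describes can be sketched in a few lines of Python. This is a toy illustration, not any real deepfake system: instead of video, the “generator” learns to produce 1-D numbers that mimic a “real” distribution, while a simple “discriminator” learns to tell real from fake. All names, the toy data, and the learning rate are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Real" data: a 1-D Gaussian standing in for genuine footage.
    return rng.normal(loc=4.0, scale=0.5, size=n)

# Generator: maps random noise z to a sample via learned parameters.
# (sigma is kept fixed here to keep the sketch short.)
g = {"mu": 0.0, "sigma": 1.0}

def generate(n):
    z = rng.normal(size=n)
    return g["mu"] + g["sigma"] * z

# Discriminator: logistic regression on a single feature.
d = {"w": 0.1, "b": 0.0}

def discriminate(x):
    # Probability the discriminator assigns to "this sample is real".
    return 1.0 / (1.0 + np.exp(-(d["w"] * x + d["b"])))

lr = 0.05
for step in range(2000):
    # 1) Train the discriminator to tell real from fake
    #    (gradient ascent on log D(real) + log(1 - D(fake))).
    xr, xf = real_samples(64), generate(64)
    pr, pf = discriminate(xr), discriminate(xf)
    d["w"] += lr * (np.mean((1 - pr) * xr) - np.mean(pf * xf))
    d["b"] += lr * (np.mean(1 - pr) - np.mean(pf))
    # 2) Train the generator to fool the discriminator
    #    (gradient ascent on log D(fake) with respect to mu).
    xf = generate(64)
    pf = discriminate(xf)
    g["mu"] += lr * np.mean((1 - pf) * d["w"])

# After the loop, the generator's samples should have drifted toward
# the real distribution, making the two hard to tell apart.
```

The key design point is step 2: the generator never sees the real data directly – its only training signal is how well its output fools the discriminator, which is exactly why the forgeries end up tuned to beat detection.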
Fake news and doctored footage aren’t new phenomena. Propaganda has been around for thousands of years – the Roman Empire used it, for example. But with the rise of machine learning and artificial intelligence, and the dominance of video and social media as communication tools, deepfakes represent a different level of threat to individuals, brands and governments.
And it doesn’t take long to make a deepfake. As VFX artist Benjamin Van Den Broeck told The Verge, algorithms could perform a “scarily accurate faceswap over a single gaming computer, possibly in as short as 24 hours. No team, no render farm, no money.”
The threat posed by deepfakes
Deepfakes, such as the one of Mark Zuckerberg, are dangerous because people tend to believe what they see and hear.
In August, The Wall Street Journal reported that the UK CEO of an energy company paid €200,000 to a supplier because the CEO of the firm’s Germany-based parent company told him to do so. Only, it wasn’t his boss making the request – it was a fraudster using AI-powered voice-cloning software to replicate the executive’s voice and pressure him into transferring the funds within an hour.
Despite all of the processes and procedures businesses have in place, a lot of work revolves around trust. “My experience of working with you says X, therefore I trust you to do Y.” It’s no different when it comes to brands and their customers.
All it takes is one ill-advised tweet from a CEO to hurt the company’s share price. Imagine what damage deepfake videos could do. Deepfakes undermine genuine content: they look so convincing that they are easy to believe. We may reach a point where we start to question every piece of video and audio content we come across. Can we trust that this is from the person, or brand, it claims to be?
How can brands combat deepfakes?
Security experts, such as those at Symantec, recommend that brands partner with organisations with expertise in detecting deepfakes as soon as they are released, alerting the brand quickly and allowing them to set the record straight.
Google is working to help researchers tackle deepfakes by releasing a large dataset of deepfake videos that detection systems can be trained and tested against.
However, some tech experts doubt that any “deepfake detector” will be effective in the long term. Hao Li, an associate professor at the University of Southern California and CEO of Pinscreen, told The Verge: “At some point, it’s likely that it’s not going to be possible to detect [AI fakes] at all. So a different type of approach is going to need to be put in place to resolve this.”
Another method under investigation is building cameras that stamp content with tamper-evident technology at the moment of capture – embedding proof of authenticity in the image data itself.
But until a solution is found, what can brands do to protect themselves from deepfakes?
- Practise transparency. Brands need to be as open and honest in their communications as possible. People need to know that the brand is telling the truth, so that, should it come under attack by a deepfake, they can trust its denial.
- Don’t play games with deepfakes. It might be tempting to experiment with the technology, but brands shouldn’t put themselves in the position of trying to fool their own customers. People need to be able to take brands at their word – and they can’t if brands have a history of tricking them.
- Build trust through consistent language and action. If a CEO is known for hitting the headlines with their outrageous statements, many people won’t know what the CEO (and by extension, the brand) will say or do next. If there’s no consistency and little follow-through on promises, there will be minimal trust for the brand. In these situations, it’s easier for a deepfake to be believable. People don’t stand a chance of spotting a deepfake if they can never be sure what the brand, or its representatives, stand for.
Few brands have had to deal with deepfakes yet, but as it’s a relatively accessible form of fraud, it’s likely that more brands will be targeted in the future. While technology companies search for ways to tackle deepfakes at their source, brands should focus on open communication, community and trust-building as ways to mitigate the damage deepfakes can cause.