Mike Bloomberg’s 2020 presidential campaign posted a video clip of a recent Democratic candidate debate. It showed him saying that he was the only candidate on the stage to have started a business, and then the clip cut to each of his opponents seemingly lost for words as crickets chirped in the background.
— Mike Bloomberg (@MikeBloomberg) February 20, 2020
Of course, the video was edited together from sections of the debate where the camera was catching the candidates’ reactions while another candidate was talking. You could call the video satire, but according to The Verge, it would violate Twitter’s policy on deepfakes (though not Facebook’s).
Twitter will refer to this video, and others like it, as manipulated media under rules that come into force on March 5th. The content will stay online, but it will be labelled as manipulated. Facebook, by contrast, told The Verge that the Bloomberg video wouldn’t be labelled as misleading.
Meanwhile, the Bloomberg campaign’s response was to say that the video was obviously “tongue in cheek” as there were no crickets on stage that night.
So is it an example of a malicious deepfake, or just typical political campaign satire?
The video was intended to make Bloomberg’s opponents look like fools, or at the very least to look like they couldn’t answer the question. Sure, the sound effects give the video a satirical edge, but it’s still manipulated content. It takes real footage and places it out of context. Those who edited the video knew exactly the impression they wanted to create.
It’s not a good idea to get involved in deepfakes. Ever.
As Blaise has already said in a blog post published last year, it’s not a great idea for brands (or any organization) to create deepfakes. Why? Because it can create doubt in the authenticity and sincerity of the brand’s communication. You want people to trust what you say, not to second-guess it.
When you create manipulated content to make your competitor look weak or foolish, you achieve three things:
- Those who believe the fake have been manipulated into thinking a certain way. When they realise they’ve been duped, they not only lose faith in the person or organization that fooled them, but they become more cynical about legitimate communication from other organizations. For example, in 2019, Gallup found that Americans’ trust in the media was at an all-time low. Logically, we know that while there are issues with clickbait and fake news, a lot of mainstream media can be trusted. However, the negative publicity around fake news and clickbait has dented people’s trust in the media as a whole. Deepfakes would have the same effect on brand communication if they became normalised.
- People who don’t believe the fake lose respect for the brand, organization or person.
- The brand/organization/individual looks weak. They’ve shown that they have little faith in their own products or ideas. They’ve resorted to tactics to try to undermine the competition instead. If they don’t believe in themselves, why should anyone else?
Whether you’re a brand, an organization or a high-profile individual, you should strive to gain the trust of those you want to work with. That trust is earned through clear, regular communication, not by mocking competitors or concocting deepfakes to make them (and, ultimately, those who view the deepfake) look foolish. We should be aiming for a higher level of communication and debate, no matter whether we’re communicating for a brand or a political campaign.