# Dealing with Deepfakes – how can brands fight back?

*Published 4 November 2019 · https://thesocialelement.agency/us/dealing-with-deepfakes-how-can-brands-fight-back*

As CSO Online describes, deepfakes are created by “generative adversarial networks”: two machine learning systems working against each other. One generates a fake video; the other tries to spot that it is a fake. The person making the fake keeps the two systems going until the detecting system can no longer spot the forgery.

Fake news and doctored footage are not new phenomena. Propaganda has been around for thousands of years; the Roman Empire used it, for example. But with the rise of machine learning and artificial intelligence, and the dominance of video and social media as communication tools, deepfakes represent a different level of threat to individuals, brands and governments.

And it doesn't take long to make a deepfake. As VFX artist Benjamin Van Den Broeck told The Verge, algorithms could perform a “scarily accurate faceswap over a *single* gaming computer, possibly in as short as 24 hours. No team, no render farm, no money.”

### The threat posed by deepfakes

Deepfakes, such as the one of Mark Zuckerberg, are dangerous because people tend to believe what they see and hear.

In August, The Wall Street Journal reported that the UK CEO of an energy company paid €200,000 to a supplier because the CEO of the firm's Germany-based parent company told him to do so. Only it wasn't his boss making the request: it was a fraudster using AI-powered voice-cloning software to mimic the boss's voice and pressure him into transferring the funds within an hour.

Despite all of the processes and procedures businesses have in place, a lot of work revolves around trust.
“My experience of working with you says *X*, therefore I trust you to do *Y*.” It is no different when it comes to brands and their customers.

All it takes is one ill-advised tweet from a CEO to hurt the company's share price. Imagine the damage a deepfake video could do. Deepfakes undermine genuine content: they look so convincing that they are easy to believe. We may reach a point where we question every piece of video and audio content we come across. Can we trust that it is from the person, or brand, it claims to be?

### How can brands combat deepfakes?

Security experts, such as those at Symantec, recommend that brands partner with organisations that specialise in detecting deepfakes, so that fakes are spotted as soon as they are released, the brand is alerted quickly, and the record can be set straight.

Google is working to help researchers tackle deepfakes by giving them access to considerable research material.

However, some tech experts doubt that any “deepfake detector” will be effective in the long term. Hao Li, an associate professor at the University of Southern California and CEO of Pinscreen, told The Verge: “At some point, it's likely that it's not going to be possible to detect [AI fakes] at all. So a different type of approach is going to need to be put in place to resolve this.”
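
As an aside, the “generative adversarial” loop described at the top of this article can be sketched in miniature. The toy example below is our own illustration, not code from any of the articles cited: it pits a two-parameter generator against a logistic-regression discriminator on one-dimensional numbers rather than video, but the training loop has the same adversarial shape. All variable names (`a`, `b`, `w`, `c`) are invented for the sketch; real deepfake systems use deep neural networks for both players.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the forger wants to imitate: samples from N(4, 1).
REAL_MEAN = 4.0

# Generator: g(z) = a*z + b, starting far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), a probability that x is real.
w, c = 0.1, 0.0

lr = 0.01
for step in range(5000):
    z = rng.standard_normal(32)
    real = REAL_MEAN + rng.standard_normal(32)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) + np.mean(-d_fake * fake))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator
    # (gradient ascent on log D(fake), the "non-saturating" objective).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After the contest, the generator's output drifts toward the real data.
fake_final = a * rng.standard_normal(1000) + b
print(f"fake mean after training: {np.mean(fake_final):.2f} (real mean: {REAL_MEAN})")
```

The loop mirrors the article's description: the forger keeps training until the detector can no longer separate fake from real, which is exactly why experts such as Hao Li doubt that detection alone can win in the long run.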