
We need to regulate deep fakes before they break our democracies

SAQIB QURESHI

The world has already learned the hard way that digital media – and especially social media – can weaken societies and undermine our systems of government. We are repeating the same mistakes with deepfake technology as it becomes more readily available, cheaper, and better.

We must act now to create effective laws and policies against deepfakes before they are used against us – just as social media was in the Cambridge Analytica scandal, in which the now-defunct firm mined Facebook users’ data to assist both Trump’s 2016 presidential campaign and the Leave campaign in Britain. The threat posed by this technology – which allows users to create highly realistic, doctored video of individuals by, for example, superimposing their face onto another person’s body – might well be impossible to quantify. But it is significant.

Deepfake technology is currently used mostly for harmless entertainment and artistic purposes. However, it has a dark side: it could be used by foreign or domestic actors to create the ultimate in ‘fake news’ and influence behaviours and elections.


Researchers at an MIT conference last year were able to create a real-time interview with ‘Vladimir Putin’. Outside the ‘safe space’ of an academic conference, the technology’s applications are endless – and terrifying. It has the potential to accelerate our trajectory towards a post-truth era that neither our populations nor our institutions are prepared for.


Our governments are already struggling to adapt to the shift from ‘few-to-many’ to ‘many-to-many’ content distribution. Deepfake video raises the stakes because it is far more psychologically flammable than static images: we respond to video and audio content in a way that we no longer do to photos. This makes deepfake video more dangerous than any other type of misinformation.

The consequences of not getting a hold on this are terrifying. The deepfake universe could negate almost everything we see on a screen. We could glance at a video of a chemical terrorist incident just down the road and ignore it completely, believing it to be just another fake.

Or fake videos could cast the same doubt over everything else in our reality – perhaps an effective weapon for any dictatorship’s propaganda machine as it tries to silence dissent. ‘Leaked’ videos could be used to justify foreign wars.

And all of this is before the bots get involved. Facebook currently estimates that approximately 6 million bots are infesting its platform; it was these bots that were responsible for posting a large portion of political content in 2016. Once bots can create their own deepfakes, the problem becomes exponential.

In a September 2019 study, Deeptrace, an Amsterdam-based company, found 14,678 deepfake videos on popular streaming websites – double the number from December 2018. Just as worrying, there are no specific national laws against deepfakes in the US or UK. With this technology, we could have the British Prime Minister announce a blockade of Ireland tomorrow at 5am, or the Federal Reserve raise interest rates.


Misinformation is hard for governments to police, despite the increased resources devoted to it during the pandemic. I nonetheless urge that deepfakes be treated as criminal activity because of the huge harm they can cause. Texas passed a law in September criminalising the publishing and distribution of deepfake videos intended to harm a candidate or influence results within 30 days of an election. Other states, such as California, have implemented similar laws aimed at election interference.

We need national (ideally transnational) laws, backed by serious enforcement. Since many of the content creators are likely to be based overseas, possibly in countries without extradition treaties, law enforcement alone will never be enough.

Criminal charges against anyone sharing content would risk criminalising millions of individuals who (perhaps innocently) shared what they thought was real. This is where the analogy with child pornography breaks down: whereas it is difficult to imagine someone genuinely not knowing that they were sharing child pornography, the very nature of deepfakes means that most sharers could be genuinely duped. 

Big Tech must therefore help identify deepfakes and swiftly eliminate them. These firms already have the capacity to block overtly illegal content, such as child pornography.

Governments and the private sector need to invest in technologies such as AI and blockchain that can help certify genuine content. There will be inevitable objections – that this infringes freedom of speech, or that satire depends on aping reality. But these liberties have always come with constraints, and we should not be shy about saying so. That is the case today, and it always has been.


Failure to act against deepfake technology could plunge our society into a state of ‘reality apathy’, where we believe nothing, or into chaos, where we are even more likely to believe what is simply false. Let’s learn from our mistakes with social media.


This time, let’s act before the horse has bolted.

Saqib Qureshi is a Visiting Fellow at the London School of Economics and Political Science and the author of The Broken Contract: Making Our Democracies Accountable, Representative, and Less Wasteful and Reconstructing Strategy: Dancing with the God of Objectivity.

