Deepfakes are an extended and hyper-specialised form of fake news that has the potential to threaten government functions and social stability across the world. Photo: Shutterstock
Opinion
by Mohammed Sinan Siyech

From US elections to violence in India, the threat of deepfakes is only growing

  • With the number of deepfake videos doubling every six months, they pose a serious challenge to political systems and social stability
  • Given that technology companies are motivated by profit, governments must step in to regulate the creation of this content
In September, Facebook removed several “deepfake” videos pushed out by Russian troll farms to meddle in the US elections, once again prompting discussion on the impact of deepfakes as an evolved version of fake news.
The last decade has been characterised by the spread of fake news across the world, sparking violence in India, France and Indonesia, and contributing to social division in many countries. At best, tech companies and governments have been struggling to combat fake news; at worst, they have profited off the ensuing hatred and polarisation.

In recent years, deepfakes – fake audio and/or video content created using artificial intelligence technology known as deep learning – have gained prominence online. Deepfakes have a fairly recent history: the underlying techniques emerged from academic research in the 1990s, and the democratisation of technology and improvements in digital media later brought them to ordinary online users.

Apps such as Zao and Reface, which allow users to superimpose their faces onto those of celebrities, essentially use deepfake technology.

The number of deepfake videos is doubling every six months. According to Deeptrace Labs, a Dutch company that studies the phenomenon, there were over 14,500 deepfake videos online in the first seven months of 2019, and close to 50,000 by July this year.
While there have been a few positive applications of the technology, the vast majority of deepfakes are used for malicious purposes. Deeptrace Labs noted that more than 96 per cent of deepfakes in 2019 were pornographic, with actresses from Britain, South Korea and America being targeted. The top four deepfake pornographic videos garnered over 134 million views.

Deepfakes have also made an impact in the political sphere. For instance, an Indian political communication firm used deepfake technology to make videos of a politician speaking in languages that he was not proficient in.

Last year, manipulated videos of US House Speaker Nancy Pelosi showing her slurring her words were viewed by at least three million users. In 2018, a deepfake video of former US president Barack Obama speaking out about deepfakes was also circulated widely to raise awareness on the topic.

The existence of deepfake videos means that sometimes real videos are taken as fakes. In January 2019, in the African nation of Gabon, a video of President Ali Bongo, who had not made a public appearance for several months, helped trigger an attempted coup. The military believed the video was a fake, although the president later confirmed it was real.

Deepfake technology is aided by the increase in the number of videos being uploaded – as a pool for manipulators to draw from when making their own videos – and more importantly by people actually using the applications themselves, thereby providing large tranches of data that developers can use to identify glitches and improve the technology.

For example, the Reface app now has more than 20 million downloads across 100 countries. All the data produced by such videos can theoretically be used to improve the technology, which can be sold to interested parties later. More importantly, third-party technologies can use data from such apps to help polish their own deepfake technology, thus enlisting users of such apps as unknowing developers.

Deepfakes are an extended and hyper-specialised form of fake news that has the potential to threaten government functions and social stability across the world. Many people believe fake news without checking sources or seeking clarification. Even once news reports are outed as fake, the rapid initial circulation makes it impossible to reverse the damage done.

Deepfakes exacerbate the problem of fake news in various ways. First, people are more likely to believe videos than news reports attributed to individuals, making it much harder to combat damaged perceptions arising from such clips.

Second, because the technology is still new to many people, they are more likely to be taken in by these videos.

Third, problems also arise when enough people are aware of the issue, leading to real videos being dismissed as fake, as in the case of Gabon. This can have problematic implications for legal cases, where genuine video evidence can be dismissed as fake due to the ubiquity of deepfake technology.

Fourth, and more importantly, deepfakes can also contribute to violence given that street mobs often become judge, jury and executioner if the problem is acute enough.

All these problems are exacerbated by the fact that deepfake creators vastly outnumber deepfake detectors. It is entirely possible that deepfakes will be used to foment violence and extremist sentiment across the world.

For instance, terrorist groups may produce deepfake clips of foreign soldiers torturing or killing civilians.

Similarly, right-wing groups could produce deepfakes to further their narrative of minorities or their opponents being intolerant, anti-national or dangers to society.

Given that Silicon Valley companies will only invest in technologies if there is a profit associated with them – as documented by the #StopHateForProfit campaign, in which civil society organisations urged advertisers to boycott Facebook – governments across the world must take responsibility for strictly regulating the creation of deepfakes, and lawmakers should be properly educated on the risks they pose.

If fake news is a weapon, then deepfakes are a nuclear weapon.

Mohammed Sinan Siyech is a senior analyst with the International Centre for Political Violence and Terrorism Research, a constituent unit of the S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore
