From US elections to violence in India, the threat of deepfakes is only growing
- With the number of deepfake videos doubling every six months, they pose a serious challenge to political systems and social stability
- Given that technology companies are motivated by profit, government must step in to regulate the creation of this content
In recent years, deepfakes – fake audio and/or video content created using the artificial intelligence technology known as deep learning – have gained prominence online. Deepfakes have a fairly recent history: the underlying techniques emerged from academic research in the 1990s, and the democratisation of technology and improvements in digital media later brought them within reach of ordinary online users.
Deepfakes have also made an impact in the political sphere. For instance, an Indian political communication firm used deepfake technology to make videos of a politician speaking in languages that he was not proficient in.
Last year, manipulated videos of US House Speaker Nancy Pelosi showing her slurring her words were viewed by at least three million users. In 2018, a deepfake video of former US president Barack Obama speaking out about deepfakes was also circulated widely to raise awareness on the topic.
Deepfake technology is aided by the growing number of videos being uploaded – a pool for manipulators to draw from when making their own videos – and, more importantly, by people actually using the applications themselves, thereby providing large troves of data that developers can use to identify glitches and improve the technology.
For example, the Reface app now has more than 20 million downloads across 100 countries. All the data produced by such videos can theoretically be used to improve the technology, which can later be sold to interested parties. More importantly, third parties can use data from such apps to polish their own deepfake technology, thus enlisting users of such apps as unknowing developers.
Deepfakes are an extended and hyper-specialised form of fake news with the potential to threaten government functions and social stability across the world. Many people believe fake news without checking sources or seeking clarification. Even after news reports are exposed as fake, the rapid initial circulation makes it nearly impossible to reverse the damage done.
Deepfakes exacerbate the problem of fake news in various ways. First, people are more likely to believe videos than news reports attributed to individuals, making it much harder to combat damaged perceptions arising from such clips.
Second, because the technology is still new to many people, they are more likely to be taken in by these videos.
Third, problems also arise when enough people are aware of the issue, leading to genuine videos being dismissed as fake, as happened in Gabon. This can have problematic implications for legal cases, where authentic video evidence can be dismissed as fake because of the ubiquity of deepfake technology.
Fourth, and most worryingly, deepfakes can incite violence: when tensions run high enough, street mobs often become judge, jury and executioner.
All these problems are exacerbated by the fact that deepfake creators vastly outnumber deepfake detectors. It is entirely possible that deepfakes will be used to foment violence and extremist sentiment across the world.
For instance, terrorist groups may produce deepfake clips of foreign soldiers torturing or killing civilians.
Similarly, right-wing groups could produce deepfakes to further their narrative of minorities or their opponents being intolerant, anti-national or dangers to society.
Given that Silicon Valley companies will only invest in technologies if there is a profit associated with them – as documented by the #StopHateForProfit campaign, in which civil society organisations urged advertisers to boycott Facebook – governments across the world must take responsibility for strictly regulating the creation of deepfakes, and lawmakers should be properly educated on the risks they pose.
If fake news is being weaponised, then deepfakes are a nuclear weapon.
Mohammed Sinan Siyech is a senior analyst with the International Centre for Political Violence and Terrorism Research, a constituent unit of the S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore