A view of Twitter’s office in New York on November 18. Amid mass firings and resignations since Elon Musk took control, the social media company’s future is in doubt. Photo: EPA-EFE
Opinion
Mohammed Sinan Siyech

Why Twitter’s firing of content moderators is a sign of worse to come

  • With essential workers such as content moderators cut, hate speech has quickly increased on the social media platform
  • This does not bode well for a platform that already had content moderation problems, and which might descend into chaos
News cycles dedicated to Elon Musk’s troubled takeover of Twitter and his subsequent mission to trim the workforce have dominated discussions in the tech space. Since Musk took control of the platform late last month, more than half of Twitter’s 7,500 employees worldwide have lost their jobs or resigned in response to new policies introduced early this month.
In addition, an estimated 4,400 out of Twitter’s 5,500 contractors were fired. While most of these contract workers focused on aspects such as engineering, real estate, and marketing, some of them were involved in the crucial work of content moderation.

Content moderators are among the most essential workers at any major social media platform, not just Twitter but also companies such as Facebook and TikTok. They are responsible for removing inappropriate or graphic content, such as videos of suicide, hate speech, violence, pornography and disinformation, and for reviewing content reported for violations of company policy.

In this sense, moderators are an invisible army essential for protecting Twitterati and other social media users from the depths of human depravity. Given the real-world impact of social media, content moderators also play a vital role in maintaining peace and reducing hate speech and hate crimes.

It is their job to pull disinformation during election cycles and to remove videos posted by terrorist groups such as Islamic State, for example. But the violent nature of the material they review can leave moderators with symptoms of post-traumatic stress disorder or feelings of isolation, contributing to a high turnover rate (some employees quit after about a year).

In early signs of the content moderation staff cull, hate speech directed at various racial, religious, and sexual minorities has increased. For example, the number of tweets using the racial slur directed at African-Americans rose by 500 per cent in the first 12 hours following Musk’s takeover. There has also been an uptick in anti-Semitic hate speech on the platform.

Most of these posts come from a small minority of around 300 troll accounts associated with right-wing America. This is not a surprising trend, given that Musk is understood to have been critical of the platform’s previous left-wing bias. It is also telling that shortly after sealing the deal to buy Twitter, Musk fired top executives who had been in favour of tighter content moderation.
One of them, Vijaya Gadde, as head of legal policy, trust, and safety, had played a crucial role in banning Donald Trump from Twitter in 2021, after the January 6 insurrection at the Capitol.

Apart from right-wing extremism, other forms of extremism have also surged on Twitter. For instance, according to the Institute for Strategic Dialogue, a London-based counterterrorism think tank, the number of new Islamic State Twitter accounts went up by 69 per cent soon after Musk’s takeover.


Furthermore, amid Musk’s herky-jerky policies on verification, there have been known instances of Islamic State Twitter accounts impersonating a lingerie model, as well as an OnlyFans model with up to 10 million followers on different platforms.

With fewer content moderators around to take down such content, it is unclear how much more will pop up.

All of this does not bode well for a platform that already had content moderation problems, and where right-wing propaganda had been amplified over the years.

In the wake of the job cuts, two scenarios could unfold. In the first, Musk streamlines and automates the content moderation process to such an extent that the laid-off human moderators are never replaced. This is difficult to imagine from both a management and a technical standpoint.

So far, Musk’s management strategy for Twitter seems to have been haphazard in execution, with instances of fired staff being asked to return. If the billionaire owner’s understanding of content moderation is no sounder than his management strategy, the platform might be in for a tumultuous ride.

From a technical perspective, it is difficult to automate the monitoring of tweets for several reasons. For starters, artificial intelligence is not advanced enough to reliably spot cues such as sarcasm and satire, which can make the difference between a tweet that is genuinely extreme and one that only appears so.

And while this problem is bad enough with English-language posts, a great deal of content is posted in other languages, compounding the limitations of artificial intelligence.


Besides, the parameters of hate speech change over time as social media users find ways to beat content monitors, which means the knowledge base has to be constantly updated, a task surely more suited to humans than machines, at least for now.
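To illustrate the point, consider a minimal sketch of keyword-based filtering (hypothetical Python; the blocklist and examples are invented, and real systems pair far larger, curated lexicons with machine-learning classifiers). A static list catches only exact matches, so every new obfuscation users invent has to be added by hand:

    import re

    # Hypothetical blocklist standing in for a curated lexicon of slurs.
    BLOCKLIST = {"hateword"}

    def is_flagged(tweet: str) -> bool:
        """Flag a tweet if any token exactly matches the blocklist."""
        tokens = re.findall(r"[a-z0-9]+", tweet.lower())
        return any(token in BLOCKLIST for token in tokens)

    print(is_flagged("this is hateword content"))   # True: exact match caught
    print(is_flagged("this is h@teword content"))   # False: trivial obfuscation slips through
    print(is_flagged("this is hate word content"))  # False: a single space defeats the filter

Each miss requires a human to notice the new variant and update the list, and a classifier trained on last year’s slurs faces the same treadmill.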

Against this backdrop, the second scenario is looking more likely: the increasing prevalence of extremist content could turn the platform into a cesspool. While observers hope this will not come to pass, given the general usefulness of the microblogging site, current trends have yet to demonstrate that order will prevail in the medium to long term.

Mohammed Sinan Siyech is a doctoral candidate in the Islamic and Middle East Studies department at the University of Edinburgh and a non-resident associate fellow at the Observer Research Foundation, New Delhi
