
How two of the world’s biggest chat apps tackle fake news

WhatsApp limits message forwarding, while WeChat bans some content entirely

This article originally appeared on ABACUS

Eating breakfast frequently is dangerous. Orange peel can unlock smartphones. Australia announced the end of cancer.

These were all real headlines circulated on China’s biggest chat app last year. None of them sounds particularly convincing, but they became some of the most widely spread hoaxes on WeChat in 2018.


Around the world, social media firms are grappling with how to tackle the spread of misinformation. If you’re a WhatsApp user, for instance, you may have noticed that starting this week you can forward a message to no more than five chats at a time, a new measure adopted by app owner Facebook to curb fake news, which has plagued users in India and Brazil, among many other places.

Placing limits on message sharing isn’t new, but the approach differs from platform to platform. On WhatsApp, where messages are encrypted end to end, it’s difficult for moderators to read what’s actually being shared, let alone deal with it directly.

WeChat operates differently, and it may not surprise you to learn that it takes a more direct approach. Every month, it posts a list of the top ten rumors on its official account. It also runs a mini program that flags any fake news articles you may have read; Tencent says some 38 million people have used it.

But sometimes WeChat also bans content outright. It regularly blocks certain links and articles from being shared, whether they’re rumors, scams or spam.

The bans aren’t limited to fighting fake news. They also target a long list of other content deemed inappropriate by WeChat, including obscene material and solicitations for religious donations.
