Who Is Responsible When Fake News Has Fatal Consequences?


While politicians and the news media in North America and Europe have spent the past two years locked in debate over the effects of social media disinformation, rumours spread by WhatsApp and other instant messaging platforms have become a matter of life and death in other parts of the world.

In India, about two dozen people have been killed since this spring in mob violence inflamed by rumours spread over WhatsApp. In Brazil, public health officials lamented that hoaxes about the dangers of vaccination spread through WhatsApp were leaving people unprotected during an outbreak of yellow fever. In Sri Lanka, calls for violence against Muslim targets preceded days of deadly riots in March. One message shared on WhatsApp featured a photo of axes and machetes and a list of targets: “Thannekumbura mosque and the mosque in Muruthalawa tonight. Tomorrow supposedly Pilimathalawa and Kandy.”

WhatsApp was founded in 2009 and acquired by Facebook in 2014 for US$19 billion. While the messaging app isn’t used as widely in North America as other social media platforms — such as Facebook, Instagram or Twitter — WhatsApp has an estimated 1.5 billion users worldwide. Many of those users are in less developed countries, where the internet is accessed largely by mobile phones, connections may be slow and data costs are prohibitive for much of the population. In these settings, WhatsApp has become a preferred alternative to traditional news websites that use more data. And, by allowing users to share audio and video messages, it embraces those who lack reading and writing skills.

As with other popular social platforms, WhatsApp lets users post and share text messages, photos, videos and audio messages. It can be used for one-on-one conversations but also for forwarding memes to groups of up to 256 people at a time. Members of one group can easily re-post messages they receive in other groups they belong to, allowing messages to spread widely and quickly. However, unlike posts to social media platforms, which are posted online and remain at least somewhat publicly accessible, messages sent through WhatsApp are encrypted, which makes it impossible for outsiders or even WhatsApp employees to see the content of messages or to track the spread of viral posts.

The dangers associated with WhatsApp have become tragically evident in India, where more than 200 million people use the app. Fears about child-kidnapping gangs have prompted misguided outbursts of vigilante justice, leading to mob violence, assaults and even death for those targeted by rumours in a country where many people have little faith in the police and justice system. One victim of mob violence was a 65-year-old woman who stopped with her family to ask for directions on their way to a temple in Tamil Nadu state. She was beaten to death, as was a 33-year-old musician in Tripura state who was, ironically, paid to broadcast warnings about fake news from loudspeakers mounted on a van. Following the rash of killings, the Indian government lashed out at WhatsApp, demanding the company take “immediate action to end this menace.”

It’s impossible to place the blame for these attacks entirely on WhatsApp, in the same way that Facebook can’t be held fully responsible for polarizing an American electorate that was already polarized.

“The truth is that, under governments of all stripes, we Indians were lynching each other long before WhatsApp came along to make it easier to convene murderous mobs,” columnist Mihir Sharma wrote for Bloomberg Opinion. “Then and now, ‘outsiders’ and marginalized groups like migrant labourers, nomadic tribesmen and especially Muslims have been the targets.”

But just as Facebook’s engagement-seeking algorithms encouraged outrageous and divisive content to spread among a public that was already prone to division, WhatsApp has been criticized for functioning in a way that allows fear-mongering to spread through societies with weak institutions and low digital-media literacy. WhatsApp messages are often shared by family and friends, which can make them seem more credible.

“In India, for a lot of people (WhatsApp) is their first entry point into the internet itself,” Govindraj Ethiraj of the Indian fact-checking project Boom Live told a conference this summer. “I think there’s a certain degree of belief among common people about information and messages that come on these platforms…people fall for these things all the time.”

All of this underscores the unintended consequences that can arise when technology developed in one location is exported around the world: developers rooted in Silicon Valley do not always consider how their products may be used in societies that are different from their own. Tech giants have been trying to compensate for this by hiring more staff who understand local languages and contexts, but their numbers remain dwarfed by the amount of data that travels across their platforms every day.

Meanwhile, national governments and international governance bodies can try to enact regulations or policies that limit the resulting damage, but that raises other concerns. First, there are well-founded fears that strict restrictions will lead companies to over-censor questionable-but-legal content in order to avoid harsh penalties, which has implications for freedom of expression. Second, because tech companies are private businesses, there are limits to the public’s oversight of their policies and how those policies are enforced. On top of that, good-faith regulations enacted in democratic countries can be exploited by repressive regimes, some of which have demanded similar restrictions in their own countries but used them to limit human rights: for example, by claiming that public order would be threatened unless posts from dissidents are banned, or by periodically shutting down internet access entirely.

In the past few months, WhatsApp has made several changes in response to the accusations that it has contributed to mob violence, while sidestepping the messy issues of message encryption and content censorship. It is actively collaborating with researchers and fact-checking organizations in certain countries to counter disinformation and learn more about how it spreads. It bought full-page ads in major Indian newspapers warning readers against believing rumours they see on the platform. WhatsApp also limited the number of chats a message can be forwarded to: five in India and 20 everywhere else. And it introduced a label for forwarded content to distinguish it from original material, the idea being to encourage users to apply extra scrutiny to forwarded posts.

Fact-checking organizations and other experts in online disinformation are encouraged by these changes but express scepticism that they will do enough to stamp out the problem. In Brazil, which has struggled with political disinformation as well as anti-vaccination hoaxes, users have been getting around the WhatsApp “forwarded” label by downloading media files and sending them from their own phones, Cristina Tardáguila, the director of a fact-checking project, told the Poynter Institute. “Brazilians know a lot about how to dodge new things and new rules,” she said. Some experts have suggested that all posts should include a time-stamp or the name of the person who created them, but WhatsApp, known for its ethos that prioritizes user privacy, has avoided any measures that could widely identify users.

Combining transparency measures on the platform with initiatives to increase digital media literacy might be a more effective option. This approach focuses on helping users of digital technology develop the knowledge to evaluate and interpret media and to recognize media’s social and political influence. These skills are valuable because they can be applied to any number of technologies, regardless of what counter-hoax measures digital platforms do or do not adopt. Focusing on users, rather than content, also allows policymakers to take action sooner, while they continue to grapple with the more complicated governance issues related to content management.

In the past few years, non-governmental organizations and schools in developing and conflict-prone areas around the world have launched new digital literacy efforts as part of an attempt to stop online rumours from catalyzing real-world violence. In the Indian city of Kannur, which is in the southern state of Kerala, 150 state-run schools have started teaching classes about disinformation after viral hoaxes hindered a vaccination drive and contributed to mob beatings. “We decided to go teach the children because many of their parents appeared to believe everything they received on the phone was sacrosanct and the truth,” district official Mir Mohammed Ali told the BBC. “I believe that if we can infuse a spirit of enquiry in our children, we can win this battle against fake news.”