Dehradun: With elections in Uttarakhand, Uttar Pradesh, Punjab, Goa, and Manipur scheduled to begin this February, we are providing an update on how Meta is preparing to protect people and our platform during this time.
For these elections, we have a comprehensive strategy in place that includes detecting and removing hate speech and content that incites violence, reducing the spread of misinformation, making political advertising more transparent, collaborating with election authorities to remove content that violates local law, and assisting people in making their voices heard through voting.
Our Elections Operations Center is now operational.
We have activated our Elections Operations Center so that we can monitor and respond to potential election-related abuses in real time.
We’ve been using this model for major elections all over the world since 2018. It brings together subject matter experts from across the company, including threat intelligence, data science, engineering, research, operations, policy, and legal teams, to provide us with greater visibility into emerging threats. As a result, we can respond quickly before they grow larger.
Combating Hate Speech and Other Negative Content
We are acutely aware of how hate speech on our platforms can result in offline harm. With elections approaching, it is even more critical that we detect potential hate speech and prevent it from spreading. This is an area that we have prioritized and will continue to work on comprehensively in order to keep people safe during these elections.
We’ve spent over $13 billion on teams and technology. This has enabled us to more than triple the size of the global team working on safety and security to over 40,000 people, including 15,000+ dedicated content reviewers working across 70 languages. In India, this includes reviewers covering 20 Indian languages.
When a piece of content violates our hate speech policies, we remove it, whether through proactive detection technology or manual review.
Furthermore, we remove certain slurs that we determine to be hate speech under our existing Community Standards, and we regularly update our policies to cover additional risk areas. To supplement that effort, we may use technology to identify new words and phrases associated with hate speech and either remove posts containing that language or limit their distribution. We also disable the accounts of repeat offenders, or temporarily limit the distribution of content from accounts that have repeatedly violated our policies.
We’ve made a lot of progress: the prevalence of hate speech on our platform has now dropped to 0.03 percent. But we are well aware that there is always more work to be done.
Improving Political and Social Advertising Transparency
Every voter, we believe, deserves transparency as they engage in political discussion and debate. As a result, we’ve developed a set of tools to provide more information about political ads on Facebook and Instagram.
Last December, we announced expanded ad enforcement requiring “Paid for by” disclaimers on ads about elections or politics, as well as social issues. Ads that discuss, debate, or advocate for or against important issues are subject to this enforcement. We also require that anyone running ads on Facebook or Instagram about social issues, elections, or politics complete an authorization process. This allows people to see the name of the person or organization running these advertisements. The ads are also added to our Ad Library.
How to Stay Safe on WhatsApp
WhatsApp remains an industry leader in end-to-end encrypted private messaging, and user safety is at the heart of everything we do. Through both product innovation and education, we provide users with resources that help them verify information.
WhatsApp actively limits virality on its platform. The restrictions we placed on forwards reduced the spread of highly forwarded messages on WhatsApp by more than 70%. If users encounter problematic messages, they can block the sender and report the account to WhatsApp.