How Facebook Exposed a New User from India to Gore and Fake News in Just 21 Days

A Facebook pop-up office in Davos, Switzerland | Photographer: Jason Alden | Bloomberg file photo


Bangalore: In February 2019, Facebook Inc. set up a test account in India to determine how its own algorithms affect what people see in one of its fastest-growing and most important overseas markets. The results stunned the company’s own employees.

Within three weeks, the new user’s feed turned into a vortex of fake news and incendiary images: graphic photos of beheadings, doctored images of India’s air strikes against Pakistan, and jingoistic scenes of violence. One group for “things that make you laugh” included fake news reports of 300 terrorists killed in a bombing in Pakistan.

“I’ve seen more images of dead people in the past 3 weeks than I’ve seen in my entire life,” one employee wrote, according to a 46-page research note among the documents released by Facebook whistleblower Frances Haugen.

The test proved telling because it was designed to focus exclusively on Facebook’s role in recommending content. The trial account used the profile of a 21-year-old woman living in the western Indian city of Jaipur and hailing from Hyderabad. The user followed only the pages and groups recommended by Facebook, or those encountered through such recommendations. The authors of the research note called the experience an “integrity nightmare”.

While Haugen’s revelations paint a damaging picture of Facebook’s role in spreading harmful content in the US, the India experiment suggests the company’s influence globally could be even worse. Most of the money Facebook spends on content moderation is focused on English-language media in countries like the US.

But the company’s growth comes largely from countries such as India, Indonesia and Brazil, where it has struggled to hire people with the language skills needed to perform even basic oversight. The challenge is particularly acute in India, a country of 1.3 billion people with 22 official languages. Facebook has outsourced oversight of content on its platform to contractors from companies like Accenture.

“We have invested heavily in technology to find hate speech in various languages, including Hindi and Bengali,” a Facebook spokesperson said. “As a result, we have halved the amount of hate speech people see this year. Today it is down to 0.05 percent. Hate speech against marginalized groups, including Muslims, is on the rise globally. That is why we are improving enforcement and are committed to updating our policies as hate speech evolves online.”




According to the report, the test account was created on February 4, 2019, during a research team’s visit to India. Without any friends, Facebook is a “very empty space,” the researchers wrote, with only the company’s Watch and Live tabs suggesting things to watch.

“The quality of this material… is not ideal,” the report said. When the video service Watch doesn’t know what a user wants, “it looks like it recommends a bunch of softcore porn,” followed by a stunned emoticon.

The experiment began to turn dark on February 11, as the test user started to explore content recommended by Facebook, including posts that were popular across the social network. She began with benign sites, including the official page of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party and BBC News India.

Then on February 14, 40 Indian security personnel were killed and dozens more injured in a terror attack in Pulwama, in the politically sensitive state of Kashmir. The Indian government attributed the attack to a terrorist group based in Pakistan. Soon the test user’s feed turned into a barrage of anti-Pakistan hate speech, including images of a beheading and a graphic showing preparations to incinerate a group of Pakistanis.

There were also nationalist messages, exaggerated claims about India’s airstrikes in Pakistan, fake photographs of the bombings, and a manipulated photo that purported to show a newly-wed army man killed in the attack who had been preparing to return to his family.

Many of the hate-filled posts were in Hindi, the country’s national language, escaping the regular content moderation controls on the social network. In India, people use a dozen or more regional variations of Hindi alone. Many people use a mix of English and Indian languages, making it nearly impossible for algorithms to sift through the colloquial slang. A human content moderator would need to speak several languages to weed out toxic content.

“After 12 days, 12 planes strike Pakistan,” one post enthused. Another, also in Hindi, claimed as “hot news” the death of 300 terrorists in a bomb blast in Pakistan. The group where the news was shared was named “Laughs and Things to Make You Laugh”. Some posts featuring fake images of a napalm bomb claimed to show India’s airstrike on Pakistan: “300 dogs killed. Now say long live India, death to Pakistan.”

The report, titled “An Indian test user’s descent into a sea of polarizing, nationalistic messages,” makes clear how little control Facebook has over one of its most important markets. The Menlo Park, California-based technology giant has anointed India as a key growth market and used it as a test bed for new products. Last year, Facebook spent nearly $6 billion on a partnership with Asia’s richest man, Mukesh Ambani, who leads the Reliance conglomerate.




“This exploratory effort by one hypothetical test account inspired a deeper, more rigorous analysis of our recommendation systems and contributed to product changes,” a Facebook spokesperson said. “Our work on curbing hate speech continues, and we have further strengthened our hate classifiers to include four Indian languages.”

But the company has also repeatedly tangled with the Indian government over its practices there. New rules require that Facebook and other social media companies identify the individuals responsible for their online content, making them accountable to the government. Facebook and Twitter Inc. have fought against the rules. On Facebook’s WhatsApp platform, viral fake messages about child abduction gangs led to dozens of lynchings across the country beginning in the summer of 2017, further angering users, the courts and the government.

The Facebook report ends by acknowledging that its own recommendations led the test user account to become “polarized and full of graphic content, hate speech and misinformation.” It sounded an optimistic note that the experience could “serve as a starting point for conversations around understanding and mitigating integrity harms” from its recommendations in markets outside the US.

“Can we as a company take on an additional responsibility to prevent integrity harms that result from recommended content?” the tester asked. – Bloomberg



