Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, according to leaked documents obtained by The Associated Press, even as the internet giant’s own employees questioned its motives and interests.

Spanning research as recent as March of this year and company memos dating back to 2019, the internal documents on India highlight Facebook’s constant struggle to quash abusive content on its platforms in the world’s largest democracy and the company’s biggest growth market.

The files show that Facebook has been aware of the problems for years, raising questions about whether it has done enough to address these issues.

Around the world, Facebook is becoming increasingly important in politics, and India is no different.

The leaked documents include a range of internal company reports on hate speech and disinformation in India that appear, in some cases, to have been intensified by the platform’s own “recommended” feature and algorithms. They also include company employees’ concerns over the mishandling of these issues and their discontent with the viral “malcontent” on the platform.

According to the documents, Facebook considered India one of the “most vulnerable countries” in the world and identified Hindi and Bengali as priorities for “automation on violating hostile speech.” Yet Facebook didn’t have enough local-language moderators or content flagging in place to stop misinformation that has at times led to real-world violence.

In a statement to The Associated Press, Facebook said it had “invested significantly in technology to find hate speech in different languages, including Hindi and Bengali,” which has “reduced the amount of hate speech that people see by half” in 2021.

“Hate speech against marginalized groups, including Muslims, is on the rise globally. We are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said. This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by the legal counsel of former Facebook employee-turned-whistleblower Frances Haugen. The redacted versions were obtained by a consortium of news organizations, including The Associated Press.

In February 2019, a Facebook employee wanted to understand what a new user in India would see in their news feed if all they did was follow Pages and Groups recommended by the platform itself.

The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India: a militant attack in Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with rival Pakistan.

In the memo, titled “An Indian User’s Descent into a Sea of Polarizing, Nationalist Messages,” the employee, whose name is redacted, said they were “shocked” by the content flooding the news feed. The benign groups recommended by Facebook quickly morphed into something else entirely, where hate speech, unverified rumors and viral content ran rampant.

The report raised deep concerns about what such divisive content could lead to in the real world. “Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.

The memo, circulated among other employees, did not answer that question. But it did reveal how the platform’s own algorithms and default settings played a part in producing such objectionable content. The employee noted obvious “blind spots,” particularly in “local language content,” and said they hoped the findings would start conversations on how to avoid such “safety harms,” especially for those who “differ significantly” from the average American user.

Although the research was conducted over three weeks that were not an average representation, the researcher acknowledged that it showed how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”

The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, our work on curbing hate speech continues and we have further strengthened our hate speech classifiers to include four Indian languages,” the spokesperson said.
