The Ministry of Electronics and Information Technology is preparing a report on the principal findings relating to India in the internal documents disclosed by Facebook whistleblower Frances Haugen, including alleged algorithmic flaws that steer new users in India towards “misinformation and hate speech.”

“If needed, we will call their executives to explain how their algorithms work and the action they have taken so far to counter misinformation and hate speech. For now, we will have to study (the revelations made by Haugen),” sources said.

The report is expected to be finalised this week and will detail how Facebook failed to check the spread of misinformation and hate speech on its platform in India, owing to its lack of adequate tools to flag or monitor content in Hindi and Bengali.

The report is also likely to include the findings of a Facebook researcher who, through a self-created user account in Kerala, encountered several instances of hate speech and misinformation surfaced by the platform’s algorithmic recommendations, the sources said.

In her complaint to the US Securities and Exchange Commission (SEC), Haugen had said that despite being aware that “RSS users, groups, and pages promote fear-mongering, anti-Muslim narratives”, Facebook could not take action or flag this content, given its “lack of Hindi and Bengali classifiers”.

Citing an undated internal Facebook document titled “Adversarial Harmful Networks-India Case study”, the complaint sent to US SEC by non-profit legal organisation Whistleblower Aid on behalf of Haugen noted: “There were a number of dehumanizing posts (on) Muslims… Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned, and we have yet to put forth a nomination for designation of this group (RSS) given political sensitivities.”

Apart from Haugen’s revelations about Facebook’s alleged inaction on hate speech and misinformation in India, The New York Times reported that the company’s own employees were grappling with the effects the platform had on users in India, especially in the run-up to the 2019 general elections.

Responding to queries from The Indian Express at the time, Facebook said that the algorithmic recommendations made to the test account it had created prompted the company to undertake “deeper, more rigorous analysis” of its recommendation systems in India.

“This exploratory effort of one hypothetical test account inspired deeper, more rigorous analysis of our recommendation systems, and contributed to product changes to improve them. Product changes from subsequent, more rigorous research included things like the removal of borderline content and civic and political Groups from our recommendation systems,” a Facebook spokesperson had said.

The New York Times reported that the Facebook researcher’s report “was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India”.

“They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential effects on local culture and politics, and fails to deploy the resources to act on issues once they occur,” the report said.