While 2021 has seen rises in antisemitic and anti-Muslim violence, Facebook has failed to enforce its hate speech policies.
Facebook failed to remove 90% of reported antisemitic content over a six-week period, according to a new study from the research and advocacy nonprofit organization Center for Countering Digital Hate, whose findings spotlight the extent of Facebook's religious hate speech problem.
Over a six-week period in May and June 2021, researchers reported 714 blatantly antisemitic posts to Facebook, Instagram, Twitter, YouTube, and TikTok to see how the platforms would respond. They found that the platforms removed fewer than one out of six reported posts, with Facebook ranking dead last in enforcement.
"As a result of their failure to enforce their own rules, social media platforms like Facebook have become safe places to spread racism and propaganda against Jews," the center wrote in its study.
Between post views, followers of the posters' accounts, and groups that shared the content, the reported antisemitic content could have been viewed by more than 7.3 million users.
Examples of content include a Facebook video, watched over 75,000 times, that blamed the 9/11 attacks on the Rothschild dynasty, a wealthy Jewish family whose power is often exaggerated by conspiracy theorists, as well as both public and private Facebook groups with tens of thousands of members making posts about how Jews control the U.S. government or are responsible for creating the coronavirus.
Facebook has long been criticized for insufficiently policing hate and misinformation on its platform, used by almost 3 billion monthly active users. Earlier this month, President Joe Biden lambasted the platform for allowing misinformation around vaccines and the coronavirus, accusing Facebook of "killing people."
Stop Hate for Profit, a coalition that includes civil rights groups Color of Change and the NAACP, succeeded in mobilizing an advertising boycott over Facebook's allowance of hate speech for all of July 2020. More than 1,000 companies halted ads on the platform; some merely reduced their spending, some participated for just a month, and others kept boycotting for months after the initial campaign. Although 98% of Facebook's nearly $86 billion in revenue comes from advertising, the boycott had only a minimal financial impact on the company, whose revenue actually rose during the boycott period.
The platform's policies have evolved over time. In what Facebook CEO Mark Zuckerberg called "standing for free expression," Facebook allowed content that distorted or denied the Holocaust until last October, when Zuckerberg reversed course.
And Facebook has pushed back on criticism, claiming to take a "zero tolerance approach" to hate speech. In an article titled "Facebook Does Not Benefit from Hate" posted in July 2020, Facebook official Nick Clegg wrote:
More than 100 billion messages are sent on our services every day. … In all of those billions of interactions a tiny fraction are hateful. When we find hateful posts on Facebook and Instagram, we take a zero tolerance approach and remove them.
Unfortunately, zero tolerance doesn't mean zero incidences. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack. We invest billions of dollars each year in people and technology to keep our platform safe. We have tripled — to more than 35,000 — the people working on safety and security. We're a pioneer in artificial intelligence technology to remove hateful content at scale.
But critics have continued to charge Facebook with profiting from hate speech that circulates on its platform, arguing that its business model makes it less likely to take steps to remove content that keeps its users engaged with the site.
The Anti-Defamation League, a leading Jewish advocacy organization, has spent years exhorting Facebook to remove antisemitic content from its site. In June, the organization sent a letter to Facebook's oversight board, an independent panel of about 20 former political leaders, activists, and journalists, taking it to task for a number of reported antisemitic posts the board permitted.
"Despite years of requests to address antisemitism, Facebook has failed to take sufficient action against posts, groups and users that promote antisemitism in clear contravention of the company's community standards," the letter says. "Facebook's inaction has helped spread hatred of Jews and has contributed to historical high levels of antisemitism in America and antisemitism online and offline across the globe."
Daniel Kelley, the associate director of the ADL's Center for Technology and Society, said oftentimes the only way to get Facebook to take down antisemitic posts is through a public campaign, because the platform is more concerned with its public image than with a meaningful approach to tackling hate speech.
"Facebook has this content moderation-as-a-PR-concern mode of operating that's deeply disingenuous," Kelly told The American Independent Foundation.
Facebook declined to comment for this story.
A civil rights group filed suit against Facebook in April over the company's failure to enforce its stated policies on hateful content on its platform.
Muslim Advocates alleges that Facebook has deceived users time and time again by promising to remove hateful content while refusing to actually enforce its own policies.
The lawsuit cites a study from Elon University computer science professor Megan Squire, who conducted a social network analysis and found more than 1,800 anti-Muslim Facebook groups with over 700,000 members.
"The level of vitriol in anti-Muslim groups is just so high, it's really just alarming," Squire said in a phone call.
Advocates argue that Facebook's refusal to remove groups and police hateful content has dangerous real-world implications. Reported incidents of antisemitic and Islamophobic violence continue to increase both domestically and worldwide.
"We don't believe that this is healthy for our society — that you can so easily violate public community standards on a platform like Facebook by fomenting hate against others, organizing rallies with calls to arms against people in order to threaten and intimidate them with your weapons and violence," Muslim Advocates staff attorney Sanaa Ansari told The American Independent Foundation. "And despite having these platform rules, this kind of organizing of threats and violent assault — that's dangerous, prohibited action — is still taking place on the platform."
Published with permission of The American Independent Foundation.