On Tuesday afternoon, Facebook announced that it had deleted more than 200 accounts linked to the violent and extremist anti-government “boogaloo” movement. This move comes after weeks of criticism of the company’s handling of hate speech on its platform. Still, banning boogaloo accounts doesn’t solve Facebook’s biggest hate speech problem.
More than 100 major brands, from Unilever to Verizon, have withdrawn advertising from the platform after civil rights groups called for a boycott last week. Facebook’s efforts to address the controversy included the announcement of further efforts to prevent racial and ethnic-based voter suppression and a possible audit of its moderation practices.
The controversy over social media companies and hate speech has escalated in recent weeks as protesters across the United States have been fighting for greater racial justice. About a month ago, when the protests first erupted, Facebook sparked outrage when it decided to do nothing after President Trump wrote “when the looting starts, the shooting starts” in a post about the protests. This angered civil rights leaders, as well as some of Facebook’s own employees. It also triggered the “Stop Hate for Profit” ad boycott, led by organizations such as the Anti-Defamation League and the NAACP. Three US senators joined the chorus on Tuesday, sending a letter to Facebook asking the company to more strictly enforce its rules on extremist content.
Facebook has long had a policy that explicitly prohibits hate speech. The company says it removes 89 percent of hate speech on the platform before it is reported, and has argued that while there will always be mistakes at its scale, overall it is doing a good job. A recent European Commission report found that Facebook was quicker than some of its competitors to act on reported hate speech.
“We do not benefit from hatred. We have no incentive for hate on our platform,” Facebook Vice President Nick Clegg said Tuesday in an appearance on Bloomberg, a point he reiterated in an appearance on CNN.
Regardless of the company’s claims about policing hate speech, recent events are fueling the perception that Facebook is simply not doing enough. Specifically, critics have argued that the company makes exceptions for politicians like Trump, and that violent fringe groups like the boogaloo movement have been allowed to keep gaining traction on the platform. They say some of the company’s smaller competitors, like Reddit and Twitter, enforce hate speech rules more aggressively, banning or moderating accounts linked to President Trump and popular accounts that support him.
Facebook’s approach follows an established playbook for the company. The reported audit, for example, would be the third the company has commissioned. The company has removed harmful conspiracy networks from its platform, only to see them reappear or new ones emerge. Meanwhile, the announced boycotts are unlikely to have a serious financial impact, because most of Facebook’s revenue comes from smaller brands (the top 100 advertisers accounted for just over 6 percent of its revenue last year). But Facebook and other social media companies at least appear to be responding to this new source of pressure.
“What you are seeing right now is that people are taking advantage of various mechanisms, whether economic or public relations, to push back on policies they don’t like,” Kate Klonick, an assistant professor at St. John’s University School of Law who studies social media and free expression, told Recode. “And you’re seeing the platforms give way.”
Twitter started a wave of social media companies taking on Trump
Recent moves by Reddit, Snapchat, Twitch, and YouTube mark a turn by social media companies toward stricter enforcement of hate speech rules, weeks after protests over the police killing of George Floyd sparked a national reckoning over systemic racism in the United States.
In many ways, Twitter kicked off this wave of action when, in late May, it added a warning label for glorifying violence to President Trump’s “looting … shooting” post. This was a precedent-setting move for social media companies, which had long been reluctant to moderate Trump, no matter how incendiary his rhetoric. (Twitter has spent the past two years refining its policies for moderating politicians’ speech.) Facebook, notably, responded very differently to the same Trump post on its own platform. The company decided not to moderate the post, arguing that it was not an incitement to violence but a warning about the state use of force.
Now other platforms are following Twitter’s more assertive lead.
On Monday, Reddit banned r/The_Donald, a popular message board where Trump fans shared memes, videos, and messages, for repeatedly breaking its rules on bullying and hate speech. The same day, Twitch, an Amazon-owned livestreaming company, temporarily suspended Trump’s account after finding that some of his streams included “hateful conduct,” such as a rebroadcast of Trump’s campaign kickoff rally in which he said Mexico was sending rapists to the United States. Those moves follow Snapchat’s decision in early June to stop promoting Trump in its “Discover” section because, according to the company, his account had incited racial violence. And YouTube banned several high-profile far-right accounts, including those of white supremacist Richard Spencer and former KKK leader David Duke.
While there will be many more examples of hate speech on these platforms that are unlikely to be addressed, the series of takedowns and bans could have serious political consequences. They run counter to the free-speech ethos of early internet forums like Reddit, which have historically tried to be as laissez-faire as possible in their approach to moderating content.
“I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency,” Reddit CEO Steve Huffman told journalists on Monday while announcing the company’s decision to ban r/The_Donald, according to The Verge.
Even Facebook has drawn some lines with Trump, removing a Trump campaign ad featuring Nazi insignia and at least two other pieces of Trump-sponsored Facebook content in recent months, including an ad that tried to trick people into completing a fake census form and a post taken down for copyright infringement. But the company has not reversed course on the president’s “looting … shooting” post, and while it says it is open to labeling misleading political information, it has not yet done so with Trump.
Facebook faces political pressure from both directions
Historically, Facebook and other social media companies have been wary of over-moderating content, in part to appear protective of free expression online. At the same time, President Trump and other Republican politicians have accused social media companies of having an “anti-conservative” bias.
Trump issued an executive order that seeks to roll back Section 230, a landmark internet law that protects social media companies like Facebook from being sued over what people post on their platforms. Trump’s stated reason for targeting Section 230 is that Facebook is allegedly putting its finger on the scale against Republican content — a claim that, many argue, is largely unproven and made in bad faith.
That pressure puts Facebook in a bind. If it over-moderates popular conservative figures, even when those users post hateful or extremist content, it prompts Trump and other Republicans to argue that they are being unfairly censored.
On the other hand, if Facebook doesn’t do a good job of moderating white supremacy and other hateful content, Democrats, civil rights leaders and top advertisers could continue to accuse the company of turning a blind eye to hate.
“I’m not going to pretend that we’re going to get rid of everything that people, you know, react negatively to,” Facebook’s Clegg said on CNN Tuesday. “Politically, there are people on the right who think we remove too much content, people on the left who think we don’t remove enough.”
While all major platforms have long had policies on hate speech, major national events, such as mass shootings, have often served to pressure companies to enforce those rules more forcefully. Now we will see whether Facebook makes a significant change to the way it moderates content or simply waits for the controversy to blow over, as it has in the past.
Ultimately, it appears that Mark Zuckerberg will be the one to decide where Facebook draws the line between restricting hate speech and protecting free expression.