Social media platforms imposed blocks on the outgoing president’s accounts as violence flared at the Capitol
By Avi Asher-Schapiro
WASHINGTON – (Thomson Reuters Foundation) – As hundreds of Donald Trump’s supporters stormed the U.S. Capitol building on Wednesday, social media platforms scrambled to clamp down on posts seen as inciting unrest – even blocking the outgoing president’s own accounts.
Four people died as rioters forced their way past security barricades, smashed windows and climbed walls to break into the Capitol as lawmakers were certifying Democratic President-elect Joe Biden’s victory over Trump in last year’s election.
As violence engulfed the Capitol, major social media platforms including Facebook, Twitter, Snapchat and YouTube all took urgent steps aimed at controlling how the unfolding events were being discussed online.
While Facebook called the assault an “emergency situation”, Twitter said “threats of and calls to violence are against the Twitter Rules, and we are enforcing our policies accordingly”.
Both temporarily locked Trump’s accounts after he released a video in which – while urging his supporters to go home – he praised those who stormed the Capitol and repeated false claims that vote-rigging denied him victory.
Facebook CEO Mark Zuckerberg said on Thursday Trump would be locked out of Facebook and Instagram for at least the rest of his term in office.
Here are some key facts about content-moderation rules in light of the assault on the Capitol:
When can social media platforms suspend your account?
From Twitter to Facebook and YouTube, all major social media platforms have terms of service, or community standards, which set out the rules for their users.
While law enforcement agencies can also ask platforms to remove posts that may break the law, in the United States, the companies exercise discretion over what legal content can be posted.
Individual posts that fall foul of the companies’ guidelines can be removed, and accounts that repeatedly violate them can be locked, restricted or even removed completely.
Facebook, for instance, has a system of “strikes” where pages and accounts that violate rules consistently can be removed as repeat offenders.
Many platforms have specific rules for public figures, and for posting about elections or civic events.
Before the Nov. 3 U.S. election, Facebook announced a new set of rules for posts concerning the ballot, including a policy of removing posts that spread misinformation about the voting process.
Last year, Twitter unveiled specific rules for the accounts of world leaders, saying it was in the public interest to allow leaders a wider latitude to post.
How did the platforms react to the Capitol violence?
After the storming of the Capitol, all major social media companies took some action against Trump’s own accounts.
Twitter and Facebook froze them, blocking him from posting further – in the case of Facebook, until Biden is sworn into office.
Twitter said the block would last 12 hours, and would be extended unless Trump deleted a number of tweets it said made baseless claims about the election.
The platforms had previously stopped short of locking Trump’s accounts even when he was accused of posting false information or stoking violence – instead opting to add labels to the president’s posts.
During last year’s Black Lives Matter protests, Twitter attached a number of labels to the president’s tweets warning that they might glorify violence, after Trump suggested protesters who engaged in looting would be shot.
As Wednesday’s violence flared in Washington, the platforms said they would scrutinize all posts related to the events from participants and the wider public.
Facebook said it would remove posts that offered “praise and support of the storming of the U.S. Capitol,” and add labels to those that incorrectly challenged the election result.
It also vowed to increase the use of artificial intelligence to flag content that might violate its policies.
Social media platforms have been criticized for not doing enough to police content, for being too quick to block or restrict controversial content, and for applying their rules unevenly.
The platforms tend to have a U.S.-centric approach, leading to sloppy and uneven rule enforcement elsewhere, said Jillian York, director for freedom of expression at the digital rights group the Electronic Frontier Foundation.
In 2018, Reuters documented a pattern of Facebook allowing posts urging hatred against the minority Rohingya Muslim population in Myanmar amid ethnic violence.
Facebook was also slow to block homophobic hate speech in Arabic, while YouTube quickly deleted videos of potential war crimes evidence in Syria, the Thomson Reuters Foundation has reported.
“There are valid arguments for removing violent content from the public view,” York said. “But there are indeed risks that blanket policies, implemented using automation, could remove key newsworthy content as well.”
The risk of erasing posts that may have long-term value, for example for law enforcement, increases if companies make rushed decisions, said Jeff Deutch, director of operations and research at Mnemonic, which focuses on archiving digital records.
“It’s important to archive the vast amount of online content created by and about American right-wing extremists,” he said.
“Already, we’re seeing countless narratives forming, some in good faith and some in bad, and archiving as much as possible will help in research and holding perpetrators accountable, and create historical records,” he added.