Long before President Trump earned widespread bans across social media, Snap enacted one of the earliest measures to curb his reach, booting him from its Discover feed of curated content in June 2020. As violent unrest swept the U.S. Capitol on Jan. 6, it indefinitely suspended him, and a week later, it permanently barred him.
On YouTube, though, the former president faced a different outcome. Six days after the riot, the site “temporarily suspended” him and has since said it would allow him to return at some point. And he never gave TikTok the chance to punish him: He never had an account on the app (and, of course, launched a highly public campaign to force TikTok’s Chinese owner to sell the app or stop operating in America).
The apps’ differing reactions to Trump reflect their disparate approaches to moderating their sites, as well as their different relationships with U.S. politics—dynamics that will go under the spotlight on Tuesday morning when the three companies appear in Congress.
Their testimony before a Senate subcommittee on consumer protection, product safety and data security comes less than a month after those lawmakers first summoned Facebook Head of Safety Antigone Davis and then, more dramatically, Facebook whistle-blower Frances Haugen, a former product manager on the team chiefly responsible for policing Facebook. Haugen’s remarks and her disclosures to the SEC have ignited the worst scandal for Facebook since Cambridge Analytica in 2018. The internal documents released by Haugen show Facebook ignoring warnings from its own employees that its apps amplify hateful speech, stoke political discontent and harm teen mental health. (Facebook has dismissed these criticisms—and Haugen generally—for lacking context and offering a limited picture of its operations.)
Now Snap, TikTok and YouTube get a turn in the Congressional hot seat. Extending the investigation beyond Facebook could indicate that politicians may finally renew efforts to draft new regulations for social media. “There’s real political momentum,” says Evelyn Douek, a Harvard Law School lecturer who studies online speech and misinformation. “And that inevitably means you have to go to the other platforms, particularly to where the teens are,” she says, making Snap, TikTok and YouTube “the natural choices.”
The three companies share some basic DNA. They’re widely used by pre-teens and teenagers and count their users by the hundreds of millions. On Tuesday, each will need to reckon with the same broad topics: content moderation and protections for young users. And none of them will probably face the frosty hostility that Facebook received. The similarities mainly end there, though, and their exact fates in Congress on Tuesday and beyond will differ based on their varying stances on content moderation and corporate histories, as well as the nuances around how each app works.
In the trio, YouTube is the biggest, oldest and most well known, having grown to over 2 billion users worldwide over 16 years. Nearly 80% of U.S. teens watch YouTube videos, according to data from Statista, a statistics research firm, and close to 70% of all Americans spend time there.
The site has a checkered past with content moderation. One famous debacle came in 2018, when the influencer Logan Paul posted a video that went viral featuring a dead body in a Japanese forest tragically popular as a suicide site. In some moments, YouTube has seemed to act inconsistently in taking down content, sometimes flip-flopping on decisions. Several years ago, for instance, it was allowing some Neo-Nazis on its site but later banned one group, Atomwaffen, amid pressure from the Anti-Defamation League. It is partly but not entirely built around a recommendation algorithm, a feature that can lead users to increasingly radicalized content and has drawn the attention of lawmakers looking to latch onto possible areas for reform.
More recently, YouTube has exercised somewhat greater self-restraint, cracking down on things like right-wing hate speech and vaccine misinformation. In August, YouTube CEO Susan Wojcicki penned a Wall Street Journal op-ed defending her company’s policies and signaling YouTube would welcome new regulations. (YouTube wouldn’t comment for this story beyond identifying the executive it is sending to Washington: Leslie Miller, vice president of government affairs and public policy.)
These efforts haven’t been for naught. YouTube has largely escaped the same scrutiny put on Facebook (and to a lesser degree, Twitter). Douek, the Harvard lecturer, has recently spent a good deal of time thinking about why YouTube has seemed to skate by. Some of it, she believes, comes from the fact that many politicians don’t use YouTube as much as Facebook or Twitter, making them less attuned to its problems. There’s another more fundamental reason YouTube has avoided the heaviest backlashes, she says: “Video by its nature is harder to study. It’s more work intensive. It’s much easier to search text or read than it is to search in video, particularly content in a long video.”
TikTok also dispenses great quantities of video—to 1 billion or so users worldwide. But its popularity revolves entirely around its recommendation algorithm, the For You feed, perhaps making it a more tempting target for lawmakers. TikTok videos are shorter than a typical one on YouTube, usually a minute or less. This lets users see more TikTok videos more quickly, and that faster pace of consumption carries a greater risk of landing on sludgy content, says Cameron Hickey, the project director for algorithmic transparency at the National Conference on Citizenship. “On YouTube, maybe you watch 24 one-hour videos, and one of those videos is made by an anti-vax crackpot,” he says. “That’s very different from watching 60 videos every hour” on TikTok, where a user would then find an even wider range of “potentially harmful or damaging concepts.”
Complicating matters further for TikTok: its Chinese owner, the billionaire Zhang Yiming, and Asia-based CEO, Shouzi Chew. The company has said its Chinese headquarters holds no sway over the videos served up to American viewers, a claim likely holding little weight with Republican senators; several within the GOP, including Texas’ Ted Cruz and Missouri’s Josh Hawley, have long voiced strident opposition to TikTok, declaring it a risk to national security and fearing China’s communist government could manipulate it. (TikTok couldn’t be reached to comment for this story.)
Like YouTube, TikTok has gone public with efforts to combat misinformation. In its latest transparency report, the company said it removed 81.5 million pieces of offending content during the first half of 2021. What’s powering those take-downs? A combined effort of man and machine: human moderators reviewing videos and some internal software to catch bad posts. Hickey has recently sought to test the effectiveness of TikTok’s detective work. In one experiment, he and his researchers reviewed thousands of TikToks about the recent German federal election to see how well TikTok had filtered out bad information. Their conclusion about TikTok’s efforts? “Largely insufficient,” Hickey says. He and his team found that TikTok’s moderating algorithm seemed to kick in only when a video hit a certain threshold: over 25,000 views. Moreover, his team concluded, the algorithm had likely only parsed videos published with hashtags related to the election, leaving many thousands more unreviewed. “They’re really good at finding and serving up bad stuff to people. But they’re not really very good at taking action on that bad stuff,” Hickey says.
And then there’s Snap. Its app has mostly avoided becoming a misinformation hotbed. Why? “The content doesn’t go viral in the same way. It’s ephemeral,” says Douek, the Harvard lecturer. Plus, those self-deleting messages mostly get traded between friends or acquaintances, making it harder for harmful posts to reach a viral scale on the same level as they might on other social media networks. “But it is a user-generated-content platform. So it’s going to have some of the same issues” as its competitors, Douek says.
“We look forward to appearing before the subcommittee to discuss our approach to protecting the safety, privacy and wellbeing of our Snapchat community,” a Snap spokeswoman says. The company will dispatch Jen Stout, vice president of global public policy, as its Congressional witness.
Snap maintains a devoted following among teens and young adults: Nearly a quarter of its 300 million-plus monthly users are 19 and under, according to Statista. But such ubiquity among teens has made it a popular place for online bullying. One study by Thorn, a nonprofit that advocates for child safety online, found that 26% of respondents reported having a potentially harmful experience on Snapchat. On Thorn’s list, Snapchat tied in first place with Facebook-owned Instagram, which has already been a focus for the Senate subcommittee in its earlier hearings.
There is a final commonality among YouTube, Snap and TikTok. They’re relative newcomers to Washington, and as such, the senators lack any great familiarity with them. YouTube has sent representatives to Congress only three times, while Tuesday marks the first appearances from Snap and TikTok. By contrast, Facebook executives have trod a well-worn path to D.C., appearing 31 times in the last four years. Twitter executives, meanwhile, logged 18 appearances. Twitter CEO Jack Dorsey has testified on five occasions; Facebook CEO Mark Zuckerberg has done so seven times. None of the CEOs of YouTube, Snap or TikTok has ever testified.
Even in those advanced discussions with Facebook and Twitter, “it’s been frustrating because the questions have been so bad,” says Sarah Roberts, a UCLA professor and author of Behind the Screen: Content Moderation in the Shadows of Social Media. She holds out little hope for a markedly better showing on Tuesday. “They get so confused about these platforms, and it really muddies the water of these proceedings and makes them have less impact.”