The Taliban control all 34 provinces in Afghanistan, Kabul, the capital city, the presidential palace—and possibly soon a slew of valuable digital assets: Twitter and Facebook accounts once operated by the nation’s democratically elected government.
The social media companies say they won’t rule out allowing the Taliban to run those channels, which number more than two dozen across the two sites. Doing so would almost certainly hand the insurgents a useful platform to spread propaganda and misinformation, and no matter the decision, it is likely to reopen long-simmering debates about what should be on the internet and who should determine what belongs there.
“If I’m a large organization,” like the Taliban, “I want as many different pathways to shape my narratives and shape perceptions of what I’m doing,” says William Braniff, director of the National Consortium for the Study of Terrorism and Responses to Terrorism in Maryland. “Having more channels is ultimately quite useful.”
Twitter, whose spokesperson couldn’t be reached to comment for this story, has allowed various Taliban accounts on its site in the past and continues to let the group’s spokesman, Zabihullah Mujahid, tweet regularly to his 293,400 followers. Back in 2011, Sen. Joe Lieberman, then head of the Homeland Security and Government Affairs Committee, sent a letter to Twitter complaining about two active Taliban accounts, which were eventually taken down. Throughout 2021, Mujahid has issued updates on the latest conflict in Afghanistan, including one missive sent Monday that said, simply: “The situation in Kabul is under control.”
As for Facebook, it has officially banned the Taliban from its platform. But even with that prohibition on its books, Facebook says it can’t decide what to do about the Afghan government accounts until more time passes and there’s greater clarity about the situation in the country.
The Afghan government accounts range from a Facebook page dedicated to the Afghan embassy in Washington to the Afghan Ministry of Defense’s Twitter feed (its motto: “Allah, Country, Duty!”). These accounts have sizeable followings, likely totaling more than 1 million subscribers in all. Such wide audiences would make them useful prizes for the Taliban, who could quickly seize these megaphones and begin broadcasting information intended to damage opponents in the West or sow confusion about conditions in the country. The accounts would bolster the group’s existing efforts on social media to dispense misinformation and draw in recruits and funding.
Most crucially, these Afghan government accounts could give Taliban social media posts an added sheen of respectability and legitimacy. A post from an account belonging to a Taliban commander is one thing. One from a nation’s Ministry of Defense is another, particularly in an era when internet users have shown little ability to critically analyze what’s shared online. “It would add one degree of separation between the information and the Taliban,” says Braniff.
Twitter and Facebook have long struggled to decide what type of content they would police and remove. Both Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg have said they’re uncomfortable with the power to make those decisions and shied away from active moderation during much of their companies’ history. That has changed in recent years after the 2016 election and President Trump’s use of social media demonstrated how damaging misinformation and conspiracy theories could become.
Twitter and Facebook already regularly take down large amounts of terrorist content. Facebook said it took action against 9 million pieces of such content in the first quarter, according to the company’s latest transparency report, while Twitter said it did the same against nearly 60,000 accounts in the second half of 2020, according to its most recent transparency report. Compared with the challenge of stopping white supremacists and other homegrown radical groups, the social media companies have had greater experience combating terrorists, whose ideology-laden and irony-free posts are easier for their AI moderation software to catch.
But here is where the situation gets a little trickier: The Taliban aren’t official terrorists. Or at least they don’t appear on the State Department’s list of foreign terrorist organizations, a roll that social media companies have in the past relied on to justify taking down accounts. Twitter, for instance, removed accounts linked to Hamas and Hezbollah in November 2019, with the company concluding, “There is no place on Twitter for illegal terrorist organizations and violent extremist groups.” The same list-based logic is why the Democratic People’s Republic of Korea still has a presence on Twitter, where it’s free to tweet out snippets from such augustly titled propaganda films as Let Us Go to Mt. Kumgang and sneer at U.S. leaders.
Perhaps the Taliban’s absence from the list could give some cover to Twitter, enough of a reason to hand over the government accounts to the Taliban. But Twitter would likely face pressure to apply the fact-check labels and warnings it has increasingly attached to misleading posts, as well as complaints from conservative U.S. lawmakers who are already outraged that President Trump can’t use Twitter, but the Taliban still can.
Since Facebook has already outlawed the Taliban, it’s much less clear what justification the site could use to grant the accounts to them. Facebook could fall back on an old idea: Some accounts are so important to hear from, Facebook can’t possibly ban them—they’re not only too big to fail, they’re too newsworthy to fail. Matt Perault, director of Duke University’s Center on Science & Technology, buys into this idea. “I think it’s important that political organizations that run a country are able to speak so that people can see what those organizations believe,” he says. “The social media companies are going to be put in a really difficult position, but ultimately, it’s important that people are able to understand what a governing organization thinks.” In years past, Facebook may have gone this route, but it has signaled a change of heart recently, one that may stop it from doing so: In June, it said it would no longer give politicians special treatment and shield them from moderation rules. It’s unclear whether the policy switch would apply to things like embassy or ministry accounts or solely to accounts run by elected officials, though it presumably could.
Even if the platforms do permit the Taliban to run the accounts, they don’t, of course, have to let them keep the accounts forever. There’s a slower, alternative path the companies could take: authorize the Taliban to access the accounts, then slowly, methodically catalog the rules they break and then levy a ban. But this would almost certainly require constant enforcement and active policing—something the companies have long been loath to do. And just as surely, any time given to the Taliban to manage the accounts would give them opportunities to misuse them.
“Spreading propaganda, recruiting people, radicalizing people, it would be that type of stuff,” says Jeremy Blackburn, a computer science professor at Binghamton University who has studied hate speech and extremists online. “Bottom line: They’d have additional influence and increased reach. With more people hearing from you, you can spread information that much easier and that much further.”