YouTube Won't Distinguish Between Misinformation and Reporting, So It Suspended My Channel

The platform punished The Hill's morning show, Rising, for showing a clip of Trump speaking.

On December 9, 2020, YouTube announced it would take down videos that alleged widespread fraudulent voting in the 2020 presidential election. A month later, after President Donald Trump's lies about his loss inflamed a mob that attacked the U.S. Capitol, YouTube strengthened those policies to prevent the spread of election-related misinformation.

What casual observers might not understand, however, is just how far the policy goes. Not only does YouTube punish channels that spread misinformation, but in many cases, it also punishes channels that report on the spread of misinformation. The platform makes no distinction between the speaker and the content creator. If a channel produces a straight-news video that merely shows Trump making an unfounded election-related claim—perhaps during a speech, in an interview, or at a rally—YouTube will punish the channel as if it had made the claim itself, even if no one affiliated with the channel endorsed Trump's lies.

I learned this firsthand on Thursday after YouTube suspended my show—Rising—for violating the election misinformation policy, even though neither my co-hosts nor I had said anything to indicate that we believe the election was rigged.

Let me explain: In addition to my role as a senior editor at Reason, I also work for The Hill as a co-host of Rising, the news website's morning show, which airs on YouTube. My co-hosts are Ryan Grim of The Intercept and Kim Iversen, an independent commentator.

Last night, we learned that YouTube had suspended The Hill's entire account for the next seven days, preventing us from publishing new videos. The reason was election misinformation, stemming from two previous videos. The first video in question, which was not aired as part of Rising, was raw footage of Trump's speech at the Conservative Political Action Conference on February 26, during which he made false claims about the election.

The second video contained a clip of Fox News host Laura Ingraham interviewing Trump, who claimed that the Russian invasion of Ukraine is only happening because of a rigged election. Following that clip, Grim and I both criticized Trump in general—my co-host even called the former president a madman—but neither of us explicitly corrected the "rigged election" claim.

No one who has watched Rising, read my work at Reason, or read Grim's work at The Intercept could possibly come away with the impression that either of us thinks the 2020 election was rigged. We have criticized that false claim, both on the show and in our respective publications.

But YouTube has taken the position that merely airing the false claim is the same thing as making it yourself, unless you correct and disavow the claim elsewhere in the video. (Alternatively, it is sufficient to post a warning in the video's description noting that a false election claim makes an appearance. YouTube is thus moving in the direction of trigger warnings.)

This policy effectively outlaws straight news reporting on YouTube. Say a news channel creates a video intended merely to provide viewers with footage of a Trump speech or interview—minus any additional commentary—in which he makes claims about the election. According to YouTube, that video violates the misinformation policy if the creator does not call out or correct the claims.

It is one thing for YouTube to ban people who are making false claims. It is quite another for YouTube to prohibit people from educating their viewers about the reality that the former president is still spreading these false claims. But the policy makes no distinction: It treats a report about misinformation as misinformation itself unless it is clearly labeled—even in a video that offers no commentary at all.

Such a policy could also imperil work by content creators who are trying to counter misinformation. For instance, a news video that introduces a Trump speech by merely noting, "former President Trump continued to call into question the legitimacy of the 2020 election in his recent speech, and said the following," could be flagged for not sufficiently rebutting his false claims. Imagine trying to report on live events.

I am quite surprised that YouTube would willingly put itself in the position of having to vet all content for election-related misinformation, including content produced by channels that are clearly not promoting such claims, even if those channels occasionally note that Trump, a pivotal national political figure, is indeed making them. This certainly does no favors for Rising's viewers, who are well aware that the show's hosts disagree with Trump's claims that he was cheated.

YouTube is a private company, of course, and it's free to design whatever policies it wants. No one is owed a video channel. But I don't think most people are aware of just how vast the misinformation policy has become. I understood that the platform would punish content creators who made false statements about the election. I had no idea that YouTube would punish news channels for reporting the news.