YouTube Algorithm Steers People Away From Radical Content
Another blow to the idea that algorithms are driving our political dysfunction.

"If you randomly follow the algorithm, you probably would consume less radical content using YouTube as you typically do!"
So says Manoel Ribeiro, co-author of a new paper on YouTube's recommendation algorithm and radicalization, in an X (formerly Twitter) thread about his research.
The study—published in February in the Proceedings of the National Academy of Sciences (PNAS)—is the latest in a growing collection of research that challenges conventional wisdom about social media algorithms and political extremism or polarization.
Introducing the Counterfactual Bots
For this study, a team of researchers spanning four universities (the University of Pennsylvania, Yale, Carnegie Mellon, and Switzerland's École Polytechnique Fédérale de Lausanne) aimed to examine whether YouTube's algorithms guide viewers toward more and more extreme content.
This supposed "radicalizing" effect has been touted extensively by people in politics, advocacy, academia, and media—often offered as justification for giving the government more control over how tech platforms are run. But the research cited to "prove" such an effect is often flawed in a number of ways, including not taking into account what a viewer would have watched in the absence of algorithmic advice.
"Attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals—what a user would have viewed in the absence of algorithmic recommendations—and hence cannot disentangle the effects of the algorithm from a user's intentions," note the researchers in the abstract to this study.
To overcome this limitation, they relied on "counterfactual bots." Basically, they had some bots watch a video and then replicate what a real user (based on actual user histories) watched from there, and other bots watch that same first video and then follow YouTube recommendations, in effect going down the algorithmic "rabbit hole" that so many have warned against.
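To make that design concrete, here is a minimal sketch of the counterfactual-bot comparison. It is purely illustrative: the toy recommender, the partisanship scores, and the simulated watch history below are placeholders for YouTube's actual recommendations, the study's content classifier, and the real user histories the researchers drew on.

```python
"""Toy sketch of the counterfactual-bot design (not the authors' code).

Two bots start from the same video. The "user" bot replays what a real
user actually watched next; the "counterfactual" bot instead follows the
top algorithmic recommendation at every step. Comparing the average
partisanship of the two trajectories is what lets the study separate the
algorithm's contribution from the user's own preferences.
"""
import random

random.seed(0)

# Hypothetical partisanship score for each video ID, in [-1, 1].
PARTISANSHIP = {vid: random.uniform(-1, 1) for vid in range(100)}


def recommend(video_id):
    """Stand-in for YouTube's sidebar: return five candidate next videos."""
    return random.sample(range(100), k=5)


def user_bot(start, user_history, steps):
    """Replays the real user's subsequent watch history."""
    return [start] + user_history[:steps]


def counterfactual_bot(start, steps):
    """Follows the top recommendation at every step (the 'rabbit hole' path)."""
    path = [start]
    for _ in range(steps):
        path.append(recommend(path[-1])[0])
    return path


def mean_partisanship(path):
    return sum(PARTISANSHIP[v] for v in path) / len(path)


# A made-up user who keeps seeking out the most partisan videos.
partisan_history = sorted(PARTISANSHIP, key=PARTISANSHIP.get, reverse=True)[:20]
start_video = partisan_history[0]

real_path = user_bot(start_video, partisan_history, steps=20)
algo_path = counterfactual_bot(start_video, steps=20)

print("real-user trajectory, mean partisanship:", round(mean_partisanship(real_path), 2))
print("algorithm-led trajectory, mean partisanship:", round(mean_partisanship(algo_path), 2))
```

The point of the comparison is the gap between the two printed numbers: whatever difference remains is attributable to what the user chose to watch rather than to what the algorithm served up.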
The counterfactual bots following an algorithm-led path wound up consuming less partisan content.
The researchers also found "that real users who consume 'bursts' of highly partisan videos subsequently consume more partisan content than identical bots who subsequently follow algorithmic viewing rules."
"This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends," notes study co-author Amir Ghasemian on X.
Pssst. Social Media Users Have Agency
"Why should you trust this paper rather than other papers or reports saying otherwise?" comments Ribeiro on X. "Because we came up with a way to disentangle the causal effect of the algorithm."
As Ghasemian explained on X: "It has been shown that exposure to partisan videos is followed by an increase in future consumption of these videos."
People often assume that this is because algorithms start pushing more of that content.
"We show this is not due to more recommendations of such content. Instead, it is due to a change in user preferences toward more partisan videos," writes Ghasemian.
Or, as the paper puts it: "a user's preferences are the primary determinant of their experience."
That's an important difference, suggesting that social media users aren't passive vessels simply consuming whatever some algorithm tells them to but, rather, people with existing and shifting preferences, interests, and habits.
Ghasemian also notes that "recommendation algorithms have been criticized for continuing to recommend problematic content to previously interested users long after they have lost interest in it themselves." So the researchers set out to see what happens when a user switches from watching more far-right to more moderate content.
They found that "YouTube's sidebar recommender 'forgets' their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content," per the paper abstract.
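For a rough sense of how a phrase like "forgets within roughly 30 videos" can be operationalized, here is another purely illustrative sketch. The moving-average recommender and the scores below are made up, not the study's method: a simulated viewer binges partisan videos, switches to moderate content, and we count how many moderate videos go by before the simulated sidebar drifts back toward neutral.

```python
"""Toy sketch of measuring recommender 'forgetting' (made up, not the study's method)."""
import random

random.seed(1)

MEMORY = 30  # pretend the sidebar only "remembers" the last 30 watched videos


def sidebar_partisanship(watch_scores):
    """Pretend sidebar: its partisanship tracks a moving average of the
    last MEMORY watched videos, plus a little noise."""
    recent = watch_scores[-MEMORY:]
    return sum(recent) / len(recent) + random.uniform(-0.05, 0.05)


# Simulated history: 50 highly partisan videos (score 0.9), then the
# viewer switches entirely to moderate content (score 0.0).
watch_scores = [0.9] * 50
videos_until_reset = None

for n in range(1, 101):
    watch_scores.append(0.0)          # another moderate video watched
    if sidebar_partisanship(watch_scores) < 0.1:
        videos_until_reset = n        # recommendations look moderate again
        break

print("moderate videos watched before the sidebar reset:", videos_until_reset)
```

With these toy parameters the counter lands in the high twenties, which is only meant to echo the shape of the finding, not to reproduce it.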
Their conclusion: "Individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role."
It's Not Just This Study
While "empirical studies using different methodological approaches have reached somewhat different conclusions regarding the relative importance" of algorithms in what a user watches, "no studies find support for the alarming claims of radicalization that characterized early, early, anecdotal accounts," note the researcher in their paper.
Theirs is part of a burgeoning body of research suggesting that the supposed radicalization effects of algorithmic recommendations aren't real—and, in fact, algorithms (on YouTube and otherwise) may steer people toward more moderate content.
(See my defense of algorithms from Reason's January 2023 print issue for a whole host of information to this effect.)
A 2021 study from some of the same researchers behind the new study found "little evidence that the YouTube recommendation algorithm is driving attention to" what the researchers call "far right" and "anti-woke" content. The growing popularity of anti-woke content could instead be attributed to "individual preferences that extend across the web as a whole."
In a 2022 working paper titled "Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos," researchers found that "exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment" who typically subscribe to channels from which they're recommended videos or get to these videos from off-site links. "Non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered."
And a 2019 paper from researchers Mark Ledwich and Anna Zaitsev found that YouTube algorithms disadvantaged "channels that fall outside mainstream media," especially "White Identitarian and Conspiracy channels." Even when someone viewed these types of videos, "their recommendations will be populated with a mixture of extreme and more mainstream content" going forward, leading Ledwich and Zaitsev to conclude that YouTube is "more likely to steer people away from extremist content rather than vice versa."
Some argue that changes to YouTube's recommendation algorithm in 2019 shifted things, and these studies don't capture the old reality. Perhaps. But whether or not that's the case, the new reality—shown in study after recent study—is that YouTube algorithms today aren't driving people to more extreme content.
And it's not just YouTube's algorithm that has been getting its reputation rehabbed by research. A series of studies on the influence of Facebook and Instagram algorithms in the lead-up to the 2020 election cut against the idea that algorithmic feeds are making people more polarized or less informed.
Researchers tweaked user feeds so that people saw either algorithmically selected content or a chronological feed, or so that they didn't see re-shares of the sort of content that algorithms prize. Getting rid of algorithmic curation or re-shares didn't reduce polarization or increase accurate political knowledge. But it did increase "the amount of political and untrustworthy content" that users saw.
Today's Image
[Image: a Siamese cat]
The counterfactual bots following an algorithm-led path wound up consuming less partisan content.
The researchers also found "that real users who consume 'bursts' of highly partisan videos subsequently consume more partisan content than identical bots who subsequently follow algorithmic viewing rules."
You know ENB.... without a definition of what extreme or partisan is, the study is worthless. Lab leak was considered partisan. Myocarditis was considered extreme and partisan.
All this is showing is the bias of the researchers who seemingly agree with the politics of YouTube.
Did you think through it at all?
When they refuse to define their terms and metrics it suggests the data is meaningless or at least suspect. Whether or not the study was biased in this manner, ENB only speaks of right-wing content being monitored. It broadly labels right-wing content as extremist. That leaves you wondering whether the "centrist" content the algorithms are driving users to is actually establishment left-wing (or even extreme left). If they didn't also evaluate and define extremist left-wing content then they had no interest in actually digging into the bias and how traffic is directed.
I would like to see what they defined as extremist and centrist content. My experience and suspicion is that it drives all political content towards left-wing extremist content. That would substantiate what those on the right have been saying even as this study half-assedly shuts down the retarded right-wing rabbit hole claim from the left.
My first reaction too. "Social science" is nonsense. Any study claiming to judge such subjective terms is double nonsense. And accepting any "steering" is indicative of a collective mindset.
Haven't read any of the article, don't intend to. What a waste. Sure not worth a dime ($25 / 250 days).
"A 2021 study from some of the same researchers behind the new study found "little evidence that the YouTube recommendation algorithm is driving attention to" what the researchers call "far right" and "anti-woke" content."
So 2 flavors of nasties are "far right" and "anti-woke". Ergo, "far left" and "woke" are perfectly mainstream, not radical, and nothing to be concerned about people being directed to these subjects by the algorithm.
The belief that a man can actually, literally, change into a woman is mainstream. Biological sex is completely a social construct - mainstream. Doctors are willy-nilly assigning biological sex to babies at birth - mainstream. Men can give birth - mainstream.
I'm still trying to figure out if that makes Orwell mainstream or radical.
I remember around the turn of the century the term "wingnut" briefly applied equally to left- and right-wing nuts, but then it quickly started to be applied only to the right wing. So this has been going on a while.
When I saw 'Today's Image' in the article, I immediately thought ENB would be posting pics of whores, trannies, and other human detritus.
Instead, a siamese cat.
Yeah when I read things like "Getting rid of algorithmic content or re-shares didn't reduce polarization or increase accurate political knowledge. But it did increase the amount of political and untrustworthy content that a user saw." I kind of need to know the definition of "accurate political knowledge" and "untrustworthy content". ENB is appealing to the authority of people who are so oblivious to their own bias that they don't even pretend to hide it. If there's a libertarian angle here I confess I don't see it.
At this point, ENB should just focus on her sandwich serving duties. Maybe she has a future at Subway.
You know who else provided services on the subway?
Daniel Penney?
Sorry, using computers to evaluate computers is a waste of time and electricity.
You can't tell what an 'Al Gore rhythm' is doing without reviewing the actual code.
Al Gore has no rhythm. People nicknamed him The Tin Woodman for a reason.
🙂
😉
release. my. shakras.
I notice that YouTube steers people away from saying words that can be used in innocuous, harmless ways like "gun" in "squirt-gun," or "kill" in "kill-switch," or "choke" as in the starter function of an old jalopy. And, of course, the word "sex."
I even seen a video short where an Ex-Mormon was saying that Joseph Smith was dragged out of jail and "un-lived." What the actual fuck is up with YouTube???
Exactly what your betters want it to do.
Did Rev. Artie take over your identity?
🙂
😉
Seriously, it's even spilled over to Top 40 Terrestrial radio stations. I heard one bleep out every reference to "gun" in the Foster The People song "Pumped Up Kicks."
And one time in a Michaels Craft Store, I heard the overhead Muzak player delete the word "Devil" from the Andy Gibb song "(Love is) Thicker Than Water."
I'm thinking: "Has the whole world turned into an infantilized Goddamn Romper Room?"
Has the whole world turned into an infantilized Goddamn Romper Room?
Yes. Yes it has.
Demolition Man was a warning, these people think its aspirational.
Radio censored a line from Van Morrison's Brown Eyed Girl "making love in the green grass behind the stadium" like 25 years ago. Post Tipper Gore.
Embrace the nanny state, little toddler.
Is that why people type "phookin" instead of ... well, you know ...
And "shite" for "shit."
They better be careful with the AutoCorrect or they might accidentally type "Shi'ite" and have a fatwa on their head and assassins from Iran on their ass.
What's a Scott to do?
A TRUE Scotsman will constantly be on guard against Donald Trump peeking up his kilt, and grabbing him "up there"! Also he will be on guard against Joe Biden sniffing his hair! And if he does NOT do these things, then he is NOT a TRUE Scotsman!
And you are no true Scotsman! I know scotsmen, scotsmen are my friend. Where’s the haggis?
If you're familiar with Eminem's "Stan", I just saw yesterday how completely obliterated the video is with *bleeps*. Don't bother bleeping cuss words, but the latter half of the song is unrecognizable. "I saw some shit on the news a couple weeks ago, it made me sick. *bleeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeep* Oh shit, it was you".
Radical? Like Steve Bannon radical or Greta Thunberg radical?
I think you know the answer to that question.
Greta is more stupid than radical.
"A lie will go round the world while truth is pulling its boots on." - C.H. Spurgeon, 1859
Radical like classical liberal and libertarianism?
It's totally cool when private companies censor those crazies.
@Reason.com
“Today’s Image” at the bottom sure is one fine, high-class pussy! I hope that whoever owns it, is vigilantly protecting it from Trump grabbing it!
This article never defines "radical" and is thus worthless.
"Radical" = Relating to or advocating fundamental or revolutionary changes in current practices, conditions, or institutions.
"radical politics; a radical political theorist."
Also I noticed that your post never defines "this article", and is thus worthless.
lol
The referent of "this article" is obvious. The definition you provide is probably a good one (and includes libertarianism). But if someone is doing a study they should really define how they are categorizing things pretty rigorously. I haven't read it, so maybe they did.
When nattering nabobs of negativity natter on and on about "radicals", they really mean "radicals that we do NOT like". They don't (out loud, at least) speak out against "radically loving your neighbor" in a neighborly-love manner far, far more than most people practice today. And... PLEASE find me ANY radical power pigs that do NOT favor EXACTLY the kinds of "radical reforms" that will bring them... POWER!!! And DIS-favor "radical reforms" that LESSEN their power!
To me at least, this is the context that "woke folk" (of all political stripes) bring to the table, when we oppose "radicals" and their "misinformation".
Government Almighty (of MY kind) LOVES us all!!! Everything else is "misinformation"!
Considering that targeted advertising is literally the entire model for how YouTube makes money, not only is this study useless, it's bullshit.
It's not bullshit. The advertising revenue model is exactly the source of the problem, but its effect is different from what these studies are structured to measure. It is to drive extremist attention-seekers to MAKE videos in order to capture eyeballs. The YouTube algorithm is obviously there to maximize the advertising revenues. Which means the combo of:
a)maximizing the time spent wasting away watching videos and
b)maximizing the number/$ of advertisers who pay for eyeball time and
c)maximizing the free content generated by users so YouTube doesn't have to pay for that.
There is no real challenge by these studies to the ad-based model. And I suspect that is because the media that is being replaced by social media has been ad-based for centuries and obviously has zero interest in that model being challenged or even alternatives being perceived.
For a brief window in Web 1.0, there was a notion that there might be an alternative to ad-based mass-based communication. I suspect that focusing on protocols was the solution - until attention-seeking spammers obliterated email and usenet. Instead of figuring out how those protocols (essentially open-source and interoperable) could avoid that problem, the Web instead simply moved to proprietary platforms.
It doesn't matter if social media sites are biased, or unbiased. They are toxic and people should spend less time on them. Parents should stop their kids from spending more than an hour a day on them.
They said that about tv, before that radio, before that novels...
Who said that about novels? Those were never ad-based except maybe Charles Dickens who wrote in serial form. And the only criticism there is that he's being paid by the word.
It's the ad-based revenue model where there is a direct conflict between increasing ad revenue and an individual prioritizing their time AWAY from viewing that tilts into the obsessive and valueless for that individual.
I don’t mind targeted advertising. Worst that can happen is it suggests something I don’t want. Best case it suggests something I do want. Could argue that it improves my life.
You likely get bombarded with ads for bottom shelf liquor.
But advertising wasn’t the point. The point was young people “wasting” time on things not approved by their elders.
Then advertising IS the point. Advertisers produce children's TV shows for the sole objective of selling stuff (get 1000 box tops from cereal for a raccoon cap) or creating a consumer brand for their product. I don't know how old you are but there is a big dividing line between Nam-era boomers and later era. Roughly, those who had crossed over to the 17+ generation v those still watching kid's TV in the 1965-1970 timeframe.
Before then, all TV is ad-based. Which means it appeals to the concerns of the 17-35 age demographic. That is the age group advertisers care about and is the age group advertisers - and their media flunkies - started DEFINING in 17-20 year generation classifications - boomers, GenX, millennial, Z, etc.
In the late 60's - all TV shows changed massively to switch from WW2/silent to boomer targeting. Kids TV basically disappeared because it was now a 'bust' generation and the older boomer target was mostly childless. In families where divorce was now happening, where 'latchkey kids' now became a thing, and cartoons don't make for good babysitters.
You may not like Mr Rogers and he is clearly the libertarian personification of evil. But his message is what created an alternative funding mechanism for kids tv (this testimony is what created PBS). At that time after-school TV was shows in syndication and Sat morning cartoons were as always shit.
ok. I was pointing out the “them youths be wastin all their time” criticism of new technology that's been going on forever. You’re off somewhere else on the advertising model. That's its own discussion.
Kids don't raise themselves, usually. When they do, they end up completely fucked up, usually.
Did you know broad support of the Free Palestine and antisemitism movement is correlated to TikTok use? Same with trans movement.
Of course not.
Sarc has really developed a synergy between stupid, ignorant, and ‘angry drunk’.
And every time they get a little bit more right.
Nah. That's just you getting old 😉
I've been wondering why my instagram and tiktok feeds are full of girls in bikinis. I guess it's preference, not algorithm.
Sounds like you’re doing it right.
What did they classify as 'radicalizing' and 'extremist' content?
I bet its right wing stuff.
I bet they didn't look for the effect on transgenderism, communism, etc., content.
Meaningless at best and even more concerning at worst. Covid, lockdowns, lab leak, Russia Russia Russia, pushback on the "insurrection" push back on anything progressive etc etc etc were all considered radical by the regime. Now they are mainstream fact.
It's easy and fun enough to just try it yourself. I don't remember whether it was last year or the year before that WFMU Station Manager Ken Freedman tried it himself, going from suggestion to suggestion, and the result was both shocking and hilarious. But listeners tried themselves with mixed results.
"Proceedings of the National Academies of Sciences (PNAS)"
https://www.youtube.com/watch?v=U7UBy1usnrg
What's radical content?
I disagree with Nolan Brown, as per usual. She is missing the point that algorithms that are guided by ideology are fundamentally flawed. It is extremely important in the digital space to allow ideas to come up that ‘explode’ into importance, as ideas always do. Just like music or art.
In a personal experience with YouTube I’ve been annoyed that, as a lover of many types of music, the videos that fit the algorithm are easy to load and repeat, but any Christian music or country music you have to search for. I can get Fleetwood Mac or Chris Stapleton on speed dial, but Alan Jackson not so much. I can only imagine how this extrapolates to independent journalism.
It is not a ‘positive’ against ‘extreme’ views, it’s a subtle nudge away from very normal views that the big ‘they’ would prefer you to avoid. And for Reason to advocate this as a good thing I strongly disagree with.
Perhaps Nolan Brown is so into the publishers’ viewpoints (see who these publishers were, for goodness sake: Yale?) that she doesn’t actually have any non-establishment views or beliefs, and doesn’t go on YouTube. Why is she discussing this only at an academic level? We’re more interested in the effects of corporate websites having algorithms that change our experience.