In Defense of Algorithms
They're good for us. They might even be good for democracy.

When Facebook launched in 2004, it was a fairly static collection of profile pages. Facebook users could put lists of favorite media on their "walls" and use the "poke" button to give each other social-media nudges. To see what other people were posting, you had to intentionally visit their pages. There were no automatic notifications, no feeds to alert you to new information.
In 2006, Facebook introduced the News Feed, an individualized homepage for each user that showed friends' posts in chronological order. The change seemed small at the time, but it turned out to be the start of a revolution. Instead of making an active choice to check in on other people's pages, users got a running list of updates.
Users still controlled what information they saw by selecting which people and groups to follow. But now user updates, from new photos to shower thoughts, were delivered automatically, as a chronologically ordered stream of real-time information.
This created a problem. Facebook was growing fast, and users were spending more and more time on it, especially once Apple's iPhone app store brought social media to smartphones. It wasn't long before there were simply too many updates for many people to reasonably follow. Sorting the interesting from the irrelevant became a big task.
But what if there were a way for the system to sort through those updates for users, determining which posts might be most interesting, most relevant, most likely to generate a response?
In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm.
Instead of a simple time-ordered log of posts from friends and pages you followed, you saw whichever of these posts Facebook's algorithms "decided" you should see, filtering content based on an array of factors designed to suss out which content users found more interesting. That algorithm not only changed Facebook; it changed the world, making Facebook specifically—and social media algorithms generally—the subject of intense cultural and political debate.
Nearly a decade later, the list of social ills blamed on algorithms is a long one. Echo chambers. Political polarization. Misinformation. Mental health problems. The election of Donald Trump. Addiction. Extremism. Teen suicides. The election of Joe Biden.
Headlines are full of warnings about algorithms. They "are controlling your life" (Vox), "amplifying misinformation and driving a wedge between people" (The Hill), fueling "massive foreign propaganda campaigns" (The Conversation), and serving as a "radicalization machine for the far-right" (The Daily Beast), to list a few.
Congress has been fretting too. Tech companies use "algorithms to drive destructive content to children," according to Sen. Richard Blumenthal (D–Conn.). Sen. Josh Hawley (R–Mo.) claims that Google algorithms dictate the outcomes of elections, while Sen. Elizabeth Warren (D–Mass.) says Amazon algorithms are "feeding misinformation loops." And Facebook algorithms "undermine our shared sense of objective reality" and "intensify fringe political beliefs," according to Reps. Anna Eshoo (D–Calif.) and Tom Malinowski (D–N.J.).
Algorithms, especially those used by search engines and social media, have become a strange new front in the culture war. And at the heart of that battle is the idea of control. Algorithms, critics warn, influence individual behavior and reshape political reality, acting as a mysterious digital spell cast by Big Tech over a populace that would otherwise be saner, smarter, less polarized, less hateful, less radical. Algorithms, in this telling, transform ordinary people into terrible citizens.
But the truth is much more complex and much less alarming. Despite the dire warnings found in headlines and congressional pronouncements, a wealth of research and data contradicts the idea that algorithms are destroying individual minds and America's social fabric. At worst, they help reveal existing divides, amplify arguments some would prefer to stay hidden, and make it harder for individuals to fully control what they see online. At best, they help make us better informed, better engaged, and actually less likely to encounter extremist content. Algorithms aren't destroying democracy. They just might be holding it together.
What Are Algorithms?
An algorithm is simply a set of step-by-step instructions for solving a problem. A recipe is a sort of algorithm for cooking. In math, an algorithm helps us with long division. Those are examples of algorithms meant for human calculations and processes, but machines use algorithms too. For computers, this means taking inputs (data) and using the explicit rules a programmer has set forth (an algorithm) to perform computations that lead to an output.
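To make that concrete, here is a minimal sketch in Python of an explicit-rules algorithm: the pre-2013 chronological News Feed reduced to a single instruction. The data shape is invented for illustration; this is not Facebook's actual code.

```python
# A toy illustration, not Facebook's code: the original News Feed was
# essentially one explicit rule applied to posts from accounts you follow.
def chronological_feed(posts):
    """Input: a list of {"author": ..., "text": ..., "timestamp": ...} dicts.
    Output: the same posts, newest first. No learning, no personalization."""
    return sorted(posts, key=lambda post: post["timestamp"], reverse=True)
```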
Machine-learning algorithms are an artificially intelligent (A.I.) subset of computer algorithms in which the programmer does not explicitly spell out all of the rules; the program discovers some of them on its own. Uwe Peters of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge has described them as programs "that can find patterns in vast amounts of data and may automatically improve their own performance through feedback."
The big difference between an A.I. algorithm and a recipe is that an algorithm can in some sense learn, adapting to new information, in particular to choices made by users. Think of a streaming video service like Netflix. As a brand new user, you'll get watch recommendations based on a certain set of generic criteria—what's popular, what's new, etc. But you'll start to get more personalized recommendations as the Netflix algorithm learns what kinds of shows you like.
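A toy sketch of that learning loop follows; the class name, data shapes, and scoring are invented for illustration, and a real streaming service's recommender is vastly more sophisticated.

```python
from collections import Counter

class GenreRecommender:
    """Toy recommender: generic popularity for new users, then a gradual
    shift toward whatever this user actually watches. Illustrative only."""

    def __init__(self, popular_titles):
        self.popular = popular_titles     # fallback for brand-new users
        self.genre_counts = Counter()     # learned per-user preferences

    def record_watch(self, genre):
        self.genre_counts[genre] += 1     # each viewing is feedback

    def recommend(self, catalog, n=3):
        # catalog: list of (title, genre) pairs. With no history, fall back
        # to popularity; otherwise rank titles by the user's genre habits.
        if not self.genre_counts:
            return self.popular[:n]
        ranked = sorted(catalog, key=lambda item: self.genre_counts[item[1]],
                        reverse=True)
        return [title for title, genre in ranked[:n]]
```

With no watch history, the sketch falls back to generic popularity, mirroring the cold-start behavior described above.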
Machine learning is why Netflix kept recommending Korean dramas to me after my K-pop–obsessed mother-in-law used my account for a month (sigh). Algorithms are why Amazon keeps recommending libertarian-leaning books to me and my Instagram ads are full of pretty dresses, day planners, and baby gear—all things I am likely to click on. Without algorithms, I might have to wade through romance novels and politicians' biographies to find things I want to read; I might be served ads for men's boots, baseballs, and adult diapers.
The modern internet offers seemingly endless examples of sophisticated algorithms. Search engine results are based on algorithms that determine what is most relevant and credible for a given query. Algorithms determine what videos surface most on TikTok and what posts show up in your Twitter feed. But their roles go far beyond social media. Map apps use algorithms to choose the best route, Spotify uses them to choose what to play next, and email services use them to discard spam while trying to ensure you see emails from your boss and your grandmother.
Algorithms also drive many things unrelated to the consumer internet. They power facial recognition programs and medical diagnoses. They help child protective services weigh risk, NASA microscopes identify life, bored office workers generate weird art, and governments sort immigration applicants. Neuroscientists have used them to chart neural connections, judges to determine prison sentences, and physicists to predict particle behavior.
Algorithms, in other words, help solve problems of information abundance. They cut through the noise, making recommendations more relevant, helping people see what they're most likely to want to see, and helping them avoid content they might find undesirable. They make our internet experience less chaotic, less random, less offensive, and more efficient.
The 2016 Election and the Beginnings of a Bipartisan Panic
Until recently, algorithms were mostly something that mathematicians and computer programmers thought about. One early reference to them in the Congressional Record, from 1967, is an aside noting that they can help medical professionals analyze brain waves. Algorithms weren't widely understood—but to the extent they were, they were believed to be benign and helpful.
But in the last decade, they have become the subject of politicized controversy. As Facebook and other social media companies started using them to sort and prioritize vast troves of user-generated content, algorithms started determining what material people were most likely to see online. Mathematical assessment replaced bespoke human judgment, leaving some people upset at what they were missing, some annoyed at what they were shown, and many feeling manipulated.
The algorithms that sort content for Facebook and other social media megasites change constantly. The precise formulas they employ at any given moment aren't publicly known. But one of the key metrics is engagement, such as how many people have commented on a post or what type of emoji reactions it's received.
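In spirit, the move away from the chronological feed was a change of sort key: a timestamp swapped for an engagement score. The weights and signals below are invented for illustration; the real formulas are proprietary and in constant flux.

```python
# Hypothetical weights, purely illustrative of the general approach.
def engagement_score(post):
    return (3.0 * post["comments"]       # comments suggest strong engagement
            + 1.5 * post["shares"]
            + 1.0 * post["reactions"])   # likes and emoji reactions

def ranked_feed(posts):
    # Same posts as a chronological feed; only the sort key has changed.
    return sorted(posts, key=engagement_score, reverse=True)
```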
As social media platforms like Facebook and Twitter became more dominant sources of news and political debate (Twitter shifted its default from chronological to algorithmic feeds in 2016), people began to fear that algorithms were taking control of America's politics.
Then came the 2016 election. In the wake of Trump's defeat of Hillary Clinton in the presidential race, reports started trickling out that Russian operatives may have been posting on U.S. social media in an attempt to influence the results. Eventually it emerged that employees of a Russian company called the Internet Research Agency had posed as American individuals and groups on Facebook, Instagram, Tumblr, Twitter, and YouTube. These accounts posted and paid for ads on inflammatory topics, criticized candidates (especially Clinton), and sometimes shared fake news. The Senate Select Committee on Intelligence opened an investigation, and Facebook, Google, and Twitter executives were called before Congress to testify.
The question, The Guardian suggested in October 2017, was whether these companies had "become Trojan horses used by foreign autocracies…to subvert democracies from the inside." Blumenthal told the paper, "Americans should be as alarmed about it as they would by an act of war."
In a November 2017 hearing before the Senate Select Committee on Intelligence, Sen. Mark Warner (D–Va.) said Russia's goal was to reach "into the news feeds of many potentially receptive Americans and to covertly and subtly push those Americans in the directions the Kremlin wants to go." Time said that by using "a trove of algorithms" a Russian researcher had brought back from America, Russia "may finally have gained the ability it long sought but never fully achieved in the Cold War: to alter the course of events in the U.S. by manipulating public opinion." The New Yorker proclaimed in a blunt headline, "Russia Helped Swing the Election for Trump." The Center for American Progress insisted that "the Trump campaign and the Kremlin worked together to install Trump in the White House."
The reach and impact of Russia's social media posts turned out to be massively overblown. Many of the Facebook ads were seen by very few people. "Roughly 25% of the ads were never shown to anyone," according to Facebook's former vice president of policy and communications, Elliot Schrage—and more than half of their views actually came after the 2016 election. All told, Internet Research Agency–linked accounts spent just $100,000 on Facebook ads, some of which backed Green Party candidate Jill Stein or Clinton primary rival Bernie Sanders in addition to Trump. Its Google ad expenditure was even smaller ($4,700), and its YouTube videos earned just 309,000 views.
Looking at some of the propaganda—like a meme of Jesus arm-wrestling Clinton—the idea that it was influential seems laughable. But the idea that Russian troll farms were trying to sow discord in the American electorate (true) and that this had somehow tipped the election to Trump (hogwash) proved an irresistible proposition for people looking to explain the improbable.
For some, it wasn't a big leap from the Russian trolling revelations to the notion that the 2016 election didn't turn on people disliking Clinton, liking Trump, or not finding it imperative to vote for either. The real reason for Trump's win, they decided, was that voters were duped by Russian memes amplified by Big Tech's algorithms.
This scenario had the bonus of lending itself to swift action. Foreign actors may be out of reach, but U.S. tech companies could be berated before Congress, excoriated by attorneys general, and targeted through legislation.
Progressives continued to embrace this explanation with each new and upsetting political development. The alt-right? Blame algorithms! Conspiracy theories about Clinton and sex trafficking? Algorithms! Nice Aunt Sue becoming a cantankerous loon online? Algorithms, of course.
Conservatives learned to loathe the algorithm a little later. Under fire about Russian trolls and other liberal bugaboos, tech companies started cracking down on a widening array of content. Conservatives became convinced that different kinds of algorithms—the ones used to find and deal with hate speech, spam, and other kinds of offensive posts—were more likely to flag and punish conservative voices. They also suspected that algorithms determining what people did see were biased against conservatives.
"The biggest names in social media are cracking down…disproportionately on conservative news," wrote Ben Shapiro in National Review in 2018, blaming a bias he believed was baked into these platforms' algorithms. "Big Tech companies…have established a pattern of arbitrarily silencing voices—overwhelming conservative voices—with whom they disagree," declared Sen. Ted Cruz (R–Texas) in December 2019, alleging that tech companies were hiding behind algorithms to get away with this bias. He also claimed that biased Google results had swung millions of votes to Clinton. Trump himself suggested, on Twitter, that Google results were biased against him.
These narratives—that algorithms manipulate us, drive political extremism, and are biased against conservatives—persist today, joined in the years since the 2016 election by a host of other complaints. Once algorithms entered the public consciousness as a potentially powerful villain, there was no internet-enabled ill or confusing social development that couldn't be pinned on them. COVID-19 vaccine hesitancy? Algorithms! The rise of TikTok? Algorithms! Trump losing to Biden? Algorithms, of course.
A common thread in all this is the idea that algorithms are powerful engines of personal and political behavior, either deliberately engineered to push us to some predetermined outcome or negligently wielded in spite of clear dangers. Inevitably, this narrative produced legislative proposals, such as the 2019 Biased Algorithm Deterrence Act, the 2021 Justice Against Malicious Algorithms Act, and the 2022 Social Media NUDGE Act. All of these bills would discourage or outright disallow digital entities from using algorithms to determine what users see.
Are Algorithms Driving Addiction in Kids?
Another common complaint is that algorithms are uniquely harmful to children. "I think we're at the same point [with Big Tech] that we were when Big Tobacco was forced to share the research on how its products were harming individuals," North Carolina State University's Sherry Fowler told Time in 2022, calling social media "just as addictive" as cigarettes.
The result has been legal pressure and attempted legislative crackdowns. At press time, Meta—Facebook's parent company—faces at least eight lawsuits alleging that its algorithms are addictive. A Minnesota measure would ban large platforms from using algorithms "to target user-created content" at minors. A California bill would have let the state, some cities, and parents sue if they believe tech companies are deliberately targeting kids with addictive products. In a press release about the Kids Online Safety Act, which would require platforms to let minors opt out of algorithm-based recommendations, Blumenthal summed up the argument: "Big Tech has brazenly failed children and betrayed its trust, putting profits above safety."
It's no secret that tech companies engineer their platforms to keep people coming back. But this isn't some uniquely nefarious feature of social media businesses. Keeping people engaged and coming back is the crux of entertainment entities from TV networks to amusement parks.
Moreover, critics have the effect of algorithms precisely backward. A world without algorithms would mean kids (and everyone else) encountering more offensive or questionable content.
Without the news feed algorithm, "the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content," Nick Clegg, Meta's then–vice president of global affairs, told George Stephanopoulos last year. That's because algorithms are used to "identify and deprecate and downgrade bad content." After all, algorithms are just sorting tools. So Facebook uses them to sort and downgrade hateful content.
"Without [algorithms], you just get an undifferentiated mass of content, and that's not very useful," noted Techdirt editor Mike Masnick last March.
The underlying premise that people can become "addicted" to social media is backed up by relatively little evidence. Most children, and most adults, don't develop unhealthy online habits. "An estimated 95.6% of adolescents do not qualify as excessive Internet users," notes a 2021 study in the Health Informatics Journal.
Some kids spend unhealthy amounts of time on social media. But that tends to reflect other underlying issues. Preexisting mental health problems can combine with non-digital life stressors to create "pathological technology use," says the Stetson University psychologist Christopher Ferguson. But "parents come along or politicians come along and all they can see is" the symptom, so they think "let's take the video games away, or Facebook or Instagram away, and everything will be resolved."
"In reality, that's not the case at all," says Ferguson, whose research focuses on the psychological effects of media and technology. He suggests that people like blaming technology because it's easier than the "hard work of figuring out why that person has fallen into this particular behavioral pattern."
"Twenty years ago, we were worried about video games and mass homicide," notes Ferguson. "Before that, it was rock music and Dungeons & Dragons. And then before that it was comic books in the 1950s and radio in the 1940s….We just have this whack-a-mole kind of effect, where these panics kind of rise and fall. And I think social media is just the latest one."
Are Algorithms Politically Biased?
When algorithms aren't allegedly addicting teens, they're allegedly undermining American politics.
In 2022, GOP senators introduced a bill—the Political Bias in Algorithm Sorting Emails Act—that would ban email services from using "a filtering algorithm to apply a label" to an email from a political campaign, "unless the owner or user of the account took action to apply such a label." It was a direct response to largely mistaken allegations that Gmail's algorithms were designed to flag conservative campaign emails.
The senators cited a study by North Carolina State University computer scientists to argue that Gmail is inherently biased against the right. In the study, researchers set up Gmail, Microsoft Outlook, and Yahoo email accounts and signed up for campaign messages with each. When accounts were new, "Gmail marked 59.3% more emails from the right candidates as spam compared to the left candidates, whereas Outlook and Yahoo marked 20.4% and 14.2% more emails from left candidates as spam."
But once researchers started actually using their accounts—reading emails, marking some as spam and some as not—"the biases in Gmail almost disappeared, but in Outlook and Yahoo they did not," lead researcher Muhammad Shahzad told The Washington Post. Google's email sorting algorithm doesn't just account for the content of the email. Over time, it also accounts for user behavior, learning to better sort content by what users want to see.
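One way to picture that feedback loop is a filter whose content-only verdict can be overridden by learned, per-user signals. The sketch below is purely illustrative, with invented class and method names; Gmail's actual system is far more elaborate.

```python
from collections import defaultdict

class ToySpamFilter:
    """Illustrative only: a crude content check plus a learned sender
    reputation, updated by the user's own spam/not-spam clicks."""

    def __init__(self, spammy_words):
        self.spammy_words = set(spammy_words)
        self.sender_votes = defaultdict(int)   # + for "not spam", - for "spam"

    def mark(self, sender, is_spam):
        # User behavior feeds back into all future sorting decisions.
        self.sender_votes[sender] += -1 if is_spam else 1

    def is_spam(self, sender, body):
        hits = sum(word in body.lower() for word in self.spammy_words)
        # A sender the user keeps rescuing from the spam folder eventually
        # outweighs the content-only verdict, and vice versa.
        return hits - self.sender_votes[sender] >= 2
```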
Meanwhile, several studies suggest social media is actually biased toward conservatives. A paper published in Research & Politics in 2022 found that a Facebook algorithm change in 2018 benefited local Republicans more than local Democrats. In 2021, Twitter looked at how its algorithms amplify political content, examining millions of tweets sent by elected officials in seven countries, as well as "hundreds of millions" of tweets in which people shared links to articles. It found that "in six out of seven countries—all but Germany—Tweets posted by accounts from the political right receive more algorithmic amplification than the political left" and that right-leaning news outlets also "see greater algorithmic amplification."
As for the Republican email algorithms bill, it would almost certainly backfire. Email services like Gmail use algorithms to sort out massive amounts of spam: If the GOP bill passed, it could mean email users would end up seeing a lot more spam in their inboxes as services strove to avoid liability.
The Hypodermic Needle Fallacy
One of the biggest mistaken assumptions driving panic over algorithms is that exposure to biased content alone produces bias. This notion that ideas, especially political ideas, can be injected directly into consumers through nothing more than media exposure is known as the "hypodermic needle" theory of media effects. It assumes people are passive vessels, mindlessly absorbing whatever information is put before them.
In this theory of media power, reading "the sun is blue" would prompt you to suddenly believe the sun is blue. But communication is always filtered through one's own knowledge, experience, and biases. If you have seen the sun, you know it's not blue. That affects how you perceive the statement.
This hypodermic needle theory "enjoyed some prominence in the 1930s and 1940s" but "has been consistently discredited ever since," note the Penn State political scientists Kevin Munger and Joseph Phillips in a 2019 working paper, "A Supply and Demand Framework for YouTube Politics," a version of which would go on to be published in The International Journal of Press/Politics. And it's still "implausible" today when applied to YouTube, where algorithms are often accused of driving viewers to radical beliefs by recommending extremist content.
Munger and Phillips suggest that people like to blame algorithms because it "implies an obvious policy solution—one which is flattering to the journalists and academics studying the phenomenon. If only Google (which owns YouTube) would accept lower profits by changing the algorithm…the alternative media would diminish in power and we would regain our place as the gatekeepers of knowledge." But this, they write, is "wishful thinking."
None of this bolsters the idea that Russian memes swayed the 2016 election. Just because polarizing content and fake news were posted doesn't mean they were widely believed or shared. Indeed, most of the news shared on social media originates from mainstream sources, and little of it is truly "fake." In the run-up to the 2016 election, "the vast majority of Facebook users in our data did not share any articles from fake news domains," according to a 2019 study published in Science Advances. A 2021 study in Proceedings of the National Academy of Sciences (PNAS) found "less than 1% of regular news consumption and less than 1/10th of 1% of overall media consumption could be considered fake." And a 2019 paper in Science reported that "for people across the political spectrum, most political news exposure still came from mainstream media outlets."
In 2020, researchers at Harvard University's Berkman Klein Center for Internet & Society looked at 5 million tweets and 75,000 Facebook page posts on mail-in voter fraud. They found that most disinformation didn't originate in "Russian trolls or Facebook clickbait artists." Instead, it was "an elite-driven, mass-media led process," propagated by Trump, the Republican National Committee, and major media outlets, including Reuters.
Are Algorithms Driving Echo Chambers and Polarization?
People really are getting their news through social media, and that means social media algorithms are likely sorting and filtering that news for them—though users of Facebook and Twitter still have the option to view their feeds chronologically. As a result, some fear that algorithmically determined social media feeds make it easy to stay in filter bubbles and echo chambers, where challenging information is always filtered out. In this scenario, conservatives only see news from conservative sources, liberals only see news from left-leaning outlets, and the electorate gets more polarized.
It may sound plausible, but there's very little evidence to back up this theory. In 2015, a study in Science examined how 10.1 million U.S. Facebook users interact with "socially shared news"; it found that "compared with algorithmic ranking, individuals' choices played a stronger role in limiting exposure to cross-cutting content." A 2017 article in the Journal of Communication found "no support for the idea that online audiences are more fragmented than offline audiences, countering fears associated with audience segmentation and filter bubbles." And a 2016 paper published in Public Opinion Quarterly found social media and search engine use associated with increased exposure to the user's "less preferred side of the political spectrum."
"Most studies have not found evidence of strong filter bubble and echo chamber effects, with some even suggesting that distributed news use increases cross-cutting news exposure," note researchers Richard Fletcher, Antonis Kalogeropoulos, and Rasmus Kleis Nielsen in a 2021 analysis published in New Media & Society. Their own research using data from the U.K. found "people who use search engines, social media and aggregators for news have more diverse news repertoires" than those who don't, though using those tools was "associated with repertoires where more partisan outlets feature more prominently."
Conversely, research shows traditional media and nondigital communities are more likely to create echo chambers. Duncan J. Watts, director of the University of Pennsylvania's Computational Social Science Lab, has found that just 4 percent of people fall into online echo chambers—while 17 percent fall into TV echo chambers. "All of the attention focused on social media algorithms looks somewhat overblown," Watts told Penn Today.
"Vulnerability to echo chambers may be greatest in offline social networks," according to a study published by the National Academy of Sciences in 2021. The researchers had some people in Bosnia and Herzegovina stay off Facebook during a week of remembrance for victims of genocide. At the end of the experiment, those who stayed on Facebook had more positive views of people of other ethnicities.
Piece together the research, and it becomes clear why people might feel like algorithms have increased polarization. Life not long ago meant rarely engaging in political discussion with people outside one's immediate community, where viewpoints tend to coalesce or are glossed over for the sake of propriety. Before Facebook, for instance, my sister and an old family friend would likely never have gotten into clashes about Trump; it just wouldn't have come up in the kinds of interactions they found themselves in. But that doesn't mean they're more politically divergent now; they simply know more about their differences. Far from limiting one's horizons, engaging with social media means greater exposure to opposing viewpoints, information that challenges one's beliefs, and sometimes surprising perspectives from the people around you.
The evidence used to support the social media/polarization hypothesis is often suspect. People point, for instance, to rising political polarization itself. But polarization seems to have started its rise decades before Facebook and Twitter came along.
And the U.S. is polarizing faster than other countries. A 2020 study by economists at Brown University and Stanford University looked at "affective polarization"—feelings of negativity toward rival political parties and people in them, combined with positive beliefs about one's own political tribe—and found that since the 1970s, affective polarization in the U.K., Canada, Australia, New Zealand, Germany, Switzerland, and Norway had either decreased or increased much more slowly than in the U.S. Since people in these countries also use a lot of social media, this suggests that something else is at play.
What's more, polarization "has increased the most among the demographic groups least likely to use the Internet and social media," according to a 2017 paper in PNAS. It's hard to blame that on social media algorithms.
How Fake News and Radical Content Really Spread
YouTube has gained an especially bad reputation for algorithms that boost conspiracy theories and fringe political content. A trio of Democratic congressional leaders wrote in a March 2021 letter to Google CEO Sundar Pichai that YouTube's algorithm-driven recommendations are "effectively driving its users to more dangerous, extreme videos and channels."
Yet little evidence supports this conjecture—and some evidence suggests that critics of YouTube's algorithmic recommendations get the causality backward.
"Despite widespread concerns that YouTube's algorithms send people down 'rabbit holes' with recommendations to extremist videos, little systematic evidence exists to support this conjecture," cross-university researchers Annie Y. Chen, Brendan Nyhan, Jason Reifler, Ronald E. Robertson, and Christo Wilson conclude in a 2022 working paper, "Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos." Instead, such videos are mainly viewed by people whose preexisting views they flatter.
Looking at YouTube activity data from 1,181 people, the authors found that "exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment." What's more, these viewers "typically subscribe to these channels (causing YouTube to recommend their videos more often)" or get there by following external links, not in-site recommendations. Meanwhile, "non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered."
A 2019 study by Australian software engineer Mark Ledwich and University of California, Berkeley, social media researcher Anna Zaitsev suggests that YouTube's recommendation algorithm might actively discourage viewers from "radicalizing or extremist content." They found the most algorithmically disadvantaged types of political content were "channels that fall outside mainstream media," with "both right-wing and left-wing YouTube channels" disadvantaged and "White Identitarian and Conspiracy channels being the least advantaged by the algorithm."
Even if someone watches extreme content, "their recommendations will be populated with a mixture of extreme and more mainstream content." YouTube, they conclude, is "more likely to steer people away from extremist content rather than vice versa."
One response to recent studies like these is to suggest that a YouTube algorithm change in early 2019 corrected for some of the "rabbit hole" effects of its earlier formulation. Since researchers can't go back and study recommendations under the old algorithm, there's no way to gauge this argument's veracity. But even if correct, it would seem to be a point against algorithm alarmists. In this narrative, YouTube was made aware of a potential flaw in its recommendation machine and took effective corrective action without any new laws or regulations requiring it.
Of course social media does sometimes contribute to people believing crazy things. "Those with a history of watching conspiratorial content can certainly still experience YouTube as filter-bubble, reinforced by personalized recommendations and channel subscriptions," point out researchers in the 2020 paper "A longitudinal analysis of YouTube's promotion of conspiracy videos."
But too many critics of YouTube's recommendations seem to subscribe to a "zombie bite" model of influence, in which mere exposure to one or a few extremist or conspiracy theory video recommendations will turn you.
The reality is that YouTube's recommendations are more likely to draw people away from extremism. When people do dive into fringe content, it's usually because they're already in agreement with it, not unwitting victims of algorithmic manipulation.
The Algorithmic Advantage
For the average person online, algorithms do a lot of good. They help us get recommendations tailored to our tastes, save time while shopping online, learn about films and music we might not otherwise be exposed to, avoid email spam, keep up with the biggest news from friends and family, and be exposed to opinions we might not otherwise hear.
So why are people also willing to believe the worst about them?
Perhaps because the past decade and a half have been so disorienting. Far-right movements around the world gained ground. Both major U.S. political parties seem to be getting more extreme. News is filled with mass shootings, police shootings, and rising murder and suicide rates, along with renewed culture wars. The Barack Obama years and their veneer of increasing tolerance gave way not to America electing its first female president but to Trump and the rise of the alt-right. Then a pandemic hit, and large numbers of people rejected a potentially life-saving vaccine.
All of this happened in the years after algorithms became far more prominent in our daily lives. That makes algorithms a convenient scapegoat for our societal ills.
To some extent, the arguments about algorithms are just a new front in the war over free speech. It's not surprising that algorithms, and the platforms they help curate, upset a lot of people. Free speech upsets people, censorship upsets people, and political arguments upset people.
But the war on algorithms is also a way of avoiding looking in the mirror. If algorithms are driving political chaos, we don't have to look at the deeper rot in our democratic systems. If algorithms are driving hate and paranoia, we don't have to grapple with the fact that racism, misogyny, antisemitism, and false beliefs never faded as much as we thought they had. If the algorithms are causing our troubles, we can pass laws to fix the algorithms. If algorithms are the problem, we don't have to fix ourselves.
Blaming algorithms allows us to avoid a harder truth. It's not some mysterious machine mischief that's doing all of this. It's people, in all our messy human glory and misery. Algorithms sort for engagement, which means they sort for what moves us, what motivates us to act and react, what generates interest and attention. Algorithms reflect our passions and predilections back at us.
They also save us time and frustration, making our online experiences more organized and coherent. And they expose us to information, art, and ideas we might not otherwise see, all while stopping the spread of content that almost everyone can agree is objectionable. Algorithms are tools, and on the evidence, people—and tech companies—are using them pretty well.
It is a religion at this point for some, praying to algorithms. As someone who works in advanced technologies that includes a focus on ML and AI, algorithms are not flawless and often represent the bias of those designing them. They are only effective against test data being very flawed against the entirety of information. They are a tool, not sentient. Yet some believe the latter, largely based on Hollywood and science fiction.
Some of the biggest fail points in the systems o help create is the tuning and training data being insufficient to encapsulate the entirety of a problems data space. Overturning results leads to bad results with outcomes being no better than a random number generator. The algorithms are biased to the designer and the training data utilized. And not acknowledging these issues can actually lead to worse results when testing on wider data sets.
"...algorithms are not flawless..."
Say WHAT?!?! Wow! How deep! HOW did you EVER figure THAT out? What algorithm did you USE, pray tell?
What exactly IS flawless, in the Perfect Mind of Der JesseBahnFuhrer, anyway, other than the Perfect Mind of Der JesseBahnFuhrer?
Air isn't perfect either... Sometimes it smells bad. And I think that I just smelled some taurine feces, AKA bullshit!
Google pay 200$ per hour my last pay check was $8500 working 1o hours a week online. My younger brother friend has been averaging 12000 for months now and he works about 22 hours a week. I cant believe how easy it was once I tried it outit..
🙂 AND GOOD LUCK. 🙂
HERE====)> http://WWW.WORKSFUL.COM
I get paid over 190$ per hour working from home with 2 kids at home. I never thought I’d be able to do it but my best friend earns over 10k a month doing this and she convinced me to try. The potential with this is endless. Heres what I’ve been doing..
HERE====)> http://WWW.RICHSALARIES.COM
Can you tell us which technologies you work on so we can make a point of avoiding them? You're not very bright and your reasoning skills are nonexistent, so whatever it is you work on must be beyond terrible.
Can you tell us which technologies you work on so we can make a point of avoiding them?
He works on the software that supports the Reason comments section.
I would have quit 20 years ago.
Just a reminder. Sarc had to go ask comment sections how to use basic html tags.
It's so cute when you accuse me of doing what you are doing while you are doing it.
What did I accuse you of buddy? Did you or did you not do that? We know the answer us yes. I've never asked what basic html tags are here.
What tags do you need help with?
None. I've known them since the 90s. I'm not the one who asked for help. That was you.
Are you literally retarded?
I'd ask you to post a link for when I did that, except that we both know it doesn't exist. Because every single sentence you write is a lie.
Lol. Sure buddy. Multiple people here have stated that fact. You are pathological.
I am making $92 an hour working from home. I never imagined that it was honest to goodness yet my closest companion is earning $16,000 a month by working on a laptop, that was truly astounding for me, she prescribed for me to attempt it simply.
Everybody must try this job now by just using this website. http://www.LiveJob247.com
Move the reply button
That would explain why he thinks that when someone mutes posts with the handle 'sarcasmic' and it mutes other names but not this one, that that means my account was used to impersonate people.
Cite when I claimed that? Amazing how you freely lie.
Uhhh, whenever I ask the Canadian Cunt to repost the truth?
My statements to you were that he has answered your accusations multiple times.
Are you literally retarded?
Start getting paid each month more than $17,000+ just by w0rking 0nline from home. Last month i have earned andreceived $18539 from this easy 0nline j0b. This 0nline j0b is just amazing and regular earni ng from this are just awesome. Start making extra dollars 0nline just by follow instructions on this website..,
OPEN►►►►► GOOGLE WORK
When I want to discuss technologies I don’t discuss them with a 2nd level IT help desk like you lol.
It is amazing you go with this attack as I've demonstrated to you multiple times my education and background far exceeds yours with you not even knowing common terms from engineering backgrounds.
Hilarious.
Do you have anything that disputes what I wrote? Or just being the usual hypocritical troll?
Based upon the fact that I can say something like “I don’t vote” and you interpret it to mean “I voted for Biden,” I can only conclude that you’re not very bright and your reasoning skills do not exist.
Again. Those aren’t the arguments I make. But you have the reading comprehension of a simple minded 5 tear old still looking at picture books.
When you spend all your energy attacking one side, attacking those against the other side, deflecting from any criticism, etc it is reasonable to assume where your loyalties lie.
We just did this yesterday where you attacked other posters criticizing Biden in an article criticizing Biden. You didnt lay down a single criticism instead expending your energies attacking those who were. Your main concern was defending against attacks. Do you deny this?
Your delusions are palpable. It is amazing watching someone as dumb as you think they are fooling anybody.
I also notice you didn't dispute what I wrote despite your first post.
I've got to say you have an amazing talent for mendacity. Every single sentence you just wrote was false. If you're anything like that in your work then everything you touch must turn to shit.
Which sentence was false?
Every. Single. One.
Which one do you wish for me to prove?
Prove? You'll post me saying "I don't vote" as proof that I voted for Biden. Even your "proof" is a lie.
At least you're consistent.
So you want me to prove an assertion you made about me?
Are you literally retarded?
I already told you I'm not going to break down your post sentence by sentence to address each lie. You're not worth the effort.
So nothing was false. Got it.
This is why conversations with you are impossible. When everything you write is a lie I can refute it piece by piece or just ignore it all. I'm doing the latter.
You are making bald without specifying the reason or the claim.
Are you literally retarded?
What do you want me to prove. Examples like this?
sarcasmic 2 weeks ago
Flag Comment Mute User
What about the blind hatred for Biden in these comments? Seems like a majority of the people here start the day with the “Fuck Joe Biden” prayer. Why is that warranted while anyone who says “Boo” about Trump is accused of Trump Derangement Syndrome?
You attacking those critical of Biden as I stated? In articles critical of Biden?
That's an observation. You're too stupid or dishonest to understand the difference between an observation and an attack.
Keep it up though. Watching you post irrelevant shit while claiming it means things it obviously doesn't mean only furthers my point that you're a disturbing mix of stupidity and dishonesty.
Except you do it constantly. Need me to link yesterday? Hell I could link the last 20 articles critical of the left and it will show you only attacking those critical of the left. Not a single post of you critical of the subject at hand.
You really are pathological.
You also literally assert anyone saying fuck Joe biden is a trump supporter. You really are fucking retarded. I’m not even going to ask if you are anymore.
Still waiting for you to disprove anything from my original post by the way.
Jesse original post: "...Some of the biggest fail points in the systems o help create..."
"o" meaning Obama? We're talking about algorithms used in social media, and... What... Obama helped create them? In what way? Obama is of the WRONG political tribe, so let's just lash out and blame Obama? Woodrow Wilson was a demon-crap too, and a far worse humanoid in my mind at least, so I think that we should blame Woodrow Wilson!
(That makes more sense than what YOU wrote, you pretentious windbag!)
I am making $92 an hour working from home. I never imagined that it was honest to goodness yet my closest companion is earning $16,000 a month by working on a laptop, that was truly astounding for me, she prescribed for me to attempt it simply.
Everybody must try this job now by just using this website. http://www.LiveJob247.com
Ideas!
He works on all the technologies involved in life saving medical devices. So avoid those.
Google pay 200$ per hour my last pay check was $8500 working 1o hours a week online. My younger brother friend has been averaging 12000 for months now and he works about 22 hours a week. I cant believe how easy it was once I tried it outit.. ? AND GOOD LUCK.:)
HERE====)> www.richsalary.com
That's a lot of words and I can't be bothered to read them, but I'm sure I vehemently disagree with ENB.
I would bet money on that in Vegas. Plus I skimmed it and it looked pretty idiotic.
Maybe ENB should quit her bitchin’ and get back to the kitchen. I think we could all go for a sandwich.
An algorithm is simply a set of step-by-step instructions for solving a problem.
Soooo...Big Brother wants to regulate Porky Pig's errand list from his Mom?
"A l-l-loaf of b-b-bread, a b-b-bottle of milk, and c-c-come home right away!...". 🙂
https://dai.ly/x5jp26g
I have a very simple algorithm of my own to judge the credibility of Facebook, Twitter, Google, or any other social media site. Oddly enough, it's the same algorithm I use to judge whether or not ENB knows what the hell she's talking about.
Last month i managed to pull my first five figure paycheck ever!!! I’ve been working for this company online for 2 years now and i never been happier… They are paying me $95/per hour and the best thing is cause i am not that tech-savy, they only asked for basic understanding of internet and basic typing skill… It’s been an amazing experience working with them and i wanted to share this with you, because they are looking for new people to join their team now and i highly recommend to everyone to apply…
Visit following page for more information…………..>>> onlinecareer1
Algorithms are software. Why should Government Almighty have ANY more control over the free speech rights of software writers, than of book (etc.) writers?
(Besides the fact that Government Almighty is made of voracious power pigs, I mean.)
https://www.voanews.com/a/us-supreme-court-will-hear-social-media-terrorism-lawsuits-/6773833.html
From there...
Gonzalez's relatives said that the company's computer algorithms recommended those videos to viewers most likely to be interested in them.
But a judge dismissed the case and a federal appeals court upheld the ruling. Under U.S. law — specifically Section 230 of the Communications Decency Act — internet companies are generally exempt from liability for the material users post on their networks.
Wooo-Hooo! A judge actually got one right!
Poor ENB still asserting that manual leftist moderation is some sort of algorithm of truth. Fuck off you proggy cunt, stick to topics you have some knowledge about like sex work.
And making sandwiches.
Don’t forget baking pies, from scratch.
"Poor ENB still asserting that manual leftist moderation is some sort of algorithm of truth."
I read the whole thing and I didn't see that. Are you seeing things that aren't there? Are you being mind-controlled by the Lizard People maybe? The "algorithm of truth" that makes YOU write things like you just wrote... Comes from where? Lizard People, Amphibian People, Russians, Dalmatians, Thermans, Germans, or Termites? I'm at a loss here to parse this... I just don't know where it's coming from!
I just don’t know where it’s coming from!
Not surprising.
Then please enlighten me (and other readers) concerning WHERE do these lies come from, where liars say things that they then REFUSE to support? "Still the man hears what he wants to hear, and disregards the rest."
Burning down strawmen is easy. So please also tell me... WHY did you say that we should torture all of the innocent Christian babies to death, and then drink their blood?
Never said anything like that, you fuvcking idiot.
Yet you defend "Social Justice is neither" for attacking strawmen, and putting words in peoples mouths. Could it be MOTIVATED "REASONING" to mindlessly defend anyone and everyone who you perceive as belonging to YOUR tribe?
"WHY did you say that we should torture all of the innocent Christian babies to death, and then drink their blood?"
Did he say "innocent"? Because that would be wrong.
I see what you did there! Because much of classical Christianity says "We are ALL sinners!", guilty of "original sin", even if we've not had a chance to sin yet!
You will go FAR as a certain kind of theologian! Here, let me help...
God COMMANDS us to kill EVERYONE!
Our that them thar VALUES of society outta come from that them thar HOLY BIBLE, and if ya read it right, it actually says that God wants us to KILL EVERYBODY!!! Follow me through now: No one is righteous, NONE (Romans 3:10). Therefore, ALL must have done at least one thing bad, since they’d be righteous, had they never done anything bad. Well, maybe they haven’t actually DONE evil, maybe they THOUGHT something bad (Matt. 5:28, thoughts can be sins). In any case, they must’ve broken SOME commandment, in thinking or acting, or else they'd be righteous. James 2:10 tells us that if we've broken ANY commandment, we broke them ALL. Now we can’t weasel out of this by saying that the New Testament has replaced the Old Testament, because Christ said that he’s come to fulfill the old law, not to destroy it (Matt. 5:17). So we MUST conclude that all are guilty of everything. And the Old Testament lists many capital offenses! There’s working on Sunday. There’s also making sacrifices to, or worshipping, the wrong God (Exodus 22:20, Deut. 17:2-5), or even showing contempt for the Lord’s priests or judges (Deut. 17:12). All are guilty of everything, including the capital offenses. OK, so now we’re finally there... God’s Word COMMANDS us such that we’ve got to kill EVERYBODY!!!
(I am still looking for that special exception clause for me & my friends & family… I am sure I will find it soon!)
Well, there's Original Sin, but if you get Baptized that wipes that out. So if you stay clean from there you should be good (unless you're white. Then you're damned to Eternal Fire no matter what you do or don't do.)
It USED to be that "white makes right". (Or so I am told). It's a brand-new day now, and white now makes WRONG! (White cis-het-norm-abled!?!?! Fuhgitabouddit!!!)
Ok shit for brains, maybe i get it from all the evidence that ENB ignores that the algorithms she's talking about are being ignored/overridden by manual review by partisan actors. She ignores that so she can falsely assert it's just algorithms when it's anything but.
Your citation(s) fell off!
"...all the evidence that ENB ignores..." Evidence from where? Lizard People, Amphibian People, Russians, Dalmatians, Thermians, Germians, or Termites, and did they document their concerns on a reputable web site, or did they leave their messages in goat-guts and burnt turtle shells?? Or were you personally there to see all of the manual over-rides, AND to mind-read the moderators as to WHY the over-rides were done?
"In this theory of media power, reading "the sun is blue" would prompt you to suddenly believe the sun is blue. But communication is always filtered through one's own knowledge, experience, and biases. If you have seen the sun, you know it's not blue. That affects how you perceive the statement."
In this theory of media power, reading "climate change emergency" would prompt you to suddenly believe the climate is an emergency. But communication is always filtered through one's own knowledge, experience, and biases. If you have seen the climate, you know it's not an emergency.
Maybe the biggest hurdle is figuring out how supposedly 'thinking' humans can turn into massive [WE] Power-Hungry Gov-Gun-Toting mobs of such wild idiocy................
Oh yeah; Something about being Power-Mad like mad cow disease literally destroying rational thought. Curtailing that plague of a disease - Well not so easy.
Or specifically to the authors take.
The USA is NOT a Nazi(National Socialist)-"democracy"......
"They might even be good for democracy." - IS THE CURSED narrative...
It's a *CONSTITUTIONAL* Union of Republican States.
"democracy" doesn't even attempt at preserving liberty and justice. It's nothing but [WE] gangland RULES politics. Do tell how does a nations government preserve liberty and justice with unlimited "democracy"??? It's a theory of which ever gang can subscribe the most members gets to RULE with GUNS however their hearts desire.
"Hypocrisy is the yeast of the Pharisees." A thought pattern that flits from mind to mind, leaving some very ugly things behind, in its wake.
SAYING one thing loudly and proudly, then moving on to say and do OTHER things that are 180 out! If we could fix THAT, we'd get somewhere, I'd bet! And BOTH SIDES are guilty here... See below for a short essay on that... The POTUSes (of BOTH SIDES)swear to defend the USA Constitution, then immediately afterwards, shit all over it!
This looks like a prime opportunity for me to explain a few things I’ve learned on this planet, while becoming a geezer. A few things, that is, about human nature, and excessive self-righteousness, tribalism, the “rush to judge” others, and the urge to punish.
“Team R” politician: “The debt is too large, and government is too powerful. If you elect ME, I will FIX that budget-balance problem SOON! But, first things first! THOSE PEOPLE OVER THERE ARE GETTING ABORTIONS!!! We must make the liberals CRY for their sins! AFTER we fix that RIGHT AWAY, we’ll get you your budget balanced and low taxes!”
“Team L” politician: “The debt is too large, and I’ll get that fixed soon, I promise you, if you elect ME! First, the more important stuff, though: THOSE PEOPLE OVER THERE ARE OWNING GUNS!!! We must PROTECT the American People from guns and gun-nuts!!! AFTER we fix that RIGHT AWAY, we’ll get our budgets balanced!”
And then we gripe and gripe as Government Almighty grows and grows, and our freedoms shrink and shrink. And somehow, the budget never DOES get balanced!
Now LISTEN UP for the summary: Parasites and politicians (but I repeat myself) PUSSY GRAB US ALL by grabbing us by… Guess what… by our excessive self-righteousness, tribalism, the “rush to judge” others, and the urge to PUNISH-PUNISH-PUNISH those “wrong” others! Let’s all STOP being such fools, and STOP allowing the politicians OF BOTH SIDES from constantly pussy-grabbing us all, right in our urge to… Pussy-grab the “enemies”, which is actually ALL OF US (and our freedoms and our independence, our ability to do what we want, without getting micro-managed by parasites)!!!
Shorter and sweeter: The pussy-grabbers are actually pussy-grabber-grabbers, grabbing us all in our pussy-grabbers. Let us all (as best as we can) AMPUTATE our OWN nearly-useless-anyways pussy-grabbers, and the pussy-grabber-grabbers will NOT be able to abuse us all NEARLY ass much ass these assholes are doing right now!
^Power-Mad-Cow disease destroying rational thought.....
Case & Point exhibit #189573129540.
https://reason.com/2021/01/18/carjacker-beaverton-mom-kid-waiting/#comment-8710844
Model TJJ2000 Dictatorbot believes that the USA already is (and should be) a 1-party dicktatorshit! That the USA HAS BEEN a 1-party dicktatorshit for some 200 years!!! There is NO point in trying to persuade the Model TJJ2000 Dicktatorbot of ANYTHING! Almost ALL of the circuits of the Model TJJ2000 Dicktatorbot have gone kaput, big-time!
Model TJJ2000 Dicktatorbot is lusting after an UPGRADE to its rusting old body! Wants to be upgraded to Model TJJ20666 Dicktatorbot, and run for POTUS in 2024, with Alex Jones as the VEEP of Model TJJ20666 Dicktatorbot!!! Be ye WARNED!!! Model TJJ20666 Dicktatorbot will be well-nigh INDESTRUCTIBLE! (Unreachable by ANY logic or considerations for the freedoms of others, MOST certainly!)
PLEASE do NOT enable the lusting of the rusting TJJ20000 Dictatorbot!!!
From https://reason.com/2021/01/18/carjacker-beaverton-mom-kid-waiting/ … Model TJJ20666 Dicktatorbot has yet to take it back… USA is a 1-party state just like North Koprea, it seems…
1. TJJ2000
“1-party dictatorshit, which worked in the long run”…
Your 1-party dictatorship example… The USA: dictated entirely and completely by the U.S. Constitution for 200 YEARS. The most SUCCESSFUL country ever seen…
UNTIL Democratic National Socialists decided a 1-Party dictation was a bad thing and created [WE] mobs of identity “empowered” politics and pretended the U.S. Constitution was a waste of time.
Is she paid by the word now?
My algorithm:
Read the headline, read the author, go to the comments of the few who still make sense on occasion (or are the most fun to snipe).
Someone’s paying her to write this:
Perhaps because the past decade and a half have been so disorienting. Far-right movements around the world gained ground. Both major U.S. political parties seem to be getting more extreme. News is filled with mass shootings, police shootings, and rising murder and suicide rates, along with renewed culture wars. The Barack Obama years and their veneer of increasing tolerance gave way not to America electing its first female president but to Trump and the rise of the alt-right. Then a pandemic hit, and large numbers of people rejected a potentially life-saving vaccine.
How many left-wing shibboleths can be put in a paragraph?
WTF happened to Reason?
Whatever it is, she’s overpaid. Maybe I should become a columnist. It obviously isn’t difficult, nor does it require any talent.
My only criticism of social media algorithms is that they lead to bubbles. But that's me; everyone else LIKES the bubbles. The complaints aren't about bubbles, but that other bubbles exist. People don't like that there isn't a person in charge they can write to demanding their views be given priority. Conservatives are mad that progressives aren't seeing their posts, and progressives are mad that conservatives aren't being fed their lies.
Or you have conservatives angry that some progressive posts are making it through their bubble, but the algorithm sees that these folk like arguing back against the progressives. Or you have progressives angry that some conservative posts are making it through their bubble, but the algorithm sees that these folk like arguing back against the conservatives. But they are getting what they want. Stop responding to posts you don't like. Stop forwarding their crap along. All that does is tell the algorithm that you like that stuff.
Revealed preferences. Angry people want stuff to be angry about, and the algorithm will accommodate them. They want to be validated that progressives are destroying the country, so they get fed validation. Ditto for those certain that conservatives are promoting fake news; they get fed the fake news that validates them.
The problem isn't that there are algorithms designed to deliver personalized content, the problem is that people just haven't learned to live with impartially personalized content. They spent their lives with curated content from people in news departments, or political leaders telling them how to think, or some other human being with a bias curating biased content just for them.
Note that I am NOT claiming that the algorithm is perfect. Not at all. I am merely claiming that it's doing what it was ostensibly designed to do: deliver personalized content based on the user's demonstrated preferences. I would tune it differently, but the idea that it was designed to be "progressive" or "conservative" is utter bullshit. The algorithm is agnostic.
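To make that mechanism concrete, here is a minimal Python sketch of engagement-driven ranking of the sort described above. The weights and field names are made up, not any platform's real code; the point is only that the score counts reactions, not ideology:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        replies: int  # includes furious rebuttals; the ranker can't tell the difference
        shares: int   # includes outraged "can you believe this?" forwards

    def engagement_score(post: Post) -> float:
        # Hypothetical weights: replies and shares read as stronger "interest"
        # signals than likes, even when the user hates the post.
        return 1.0 * post.likes + 3.0 * post.replies + 5.0 * post.shares

    def rank_feed(posts: list[Post]) -> list[Post]:
        return sorted(posts, key=engagement_score, reverse=True)

Arguing back or hate-sharing raises a post's score just as effectively as liking it, which is exactly the revealed-preference trap described above.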
EXCELLENT mini-essay there Brandybuck!!! People want contradictory things (incompatible with one another).
I for one, want to be the Catholic Pope, AND a famous porn star!
Bob Seger: He wants to live like a sailor at sea...
He wants his home and security
He wants to live like a sailor at sea
Beautiful loser, where you gonna fall?
You realize you just can't have it all…
substitute prostitute for p0rn star and you can probably pull it off.
Whoa, why did I not think of that?!?! I'll give it a try... Thanks! Excellent, VERY deductive, seductive, suggestive suggestion! Good for my digestion!
(I am a ho of the mo of the bro, and can rap my head around shit all around the hood, ass I should!)
Having once hung out with the famous Starchild, being a prostitute is not all it's made out to be. (Was not habbing secks with him, it was an LP thing). If all you want is sex all the time, just hang out in truckstops or be a bazillionaire like Trump or Weinstein grabbing all the pussies.
When you have enough money they come and grab your dick.
Again, for me it's the thumbs on the scales of an algorithm. The early days of Google were the golden years. The goal was literally "make this algorithm give people the most relevant thing they're looking for."
It wasn't perfect; people famously learned to game the algorithm, and google famously had to recognize when that was happening and 'tweak' it to close that loophole.
Remember youtube in the old days? Most viewed, most engaged, most commented on.
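That old-school ranking was mechanically trivial: a global sort on public counters, with no per-user model at all. A minimal sketch in Python (hypothetical field names, not YouTube's actual code):

    def most_viewed(videos: list[dict]) -> list[dict]:
        # One leaderboard for everyone: no personalization, no politics,
        # just whichever public counter you choose to sort on.
        return sorted(videos, key=lambda v: v["views"], reverse=True)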
And for what it's worth, I still say youtube does a pretty excellent job with its modern, more sophisticated algorithms that steer stuff to you that's within your interests based on previous searches and views. But everyone knows they're also blocking stuff because it's politically disfavored.
Let me give you a recent example that happened to me using straight google search vs. duck duck go.
As a history buff, I watch a lot of restored, upscaled video from wwi (and before) where machine learning has done a really great job of tweening missing frames, smoothing the video, slowing it to natural pace-- like Peter Jackson's They Shall Not Grow Old. I was watching some clips and was trying to figure out whether I was looking at American soldiers or British, since they had essentially the same helmet, and the helmet is one of the faster ways of identifying faction and country. (I don't have the sharpness of eye or depth of knowledge to look at things like epaulettes or boot style etc.) But I did spot a soldier with a crew-served machine gun. So I googled 'wwi machine gun british' and 'american'. I don't remember the EXACT search term, but google sent back very few results. Like almost nothing. Less than a full page. I was like "this can't be right" so I took the exact search term, plugged it into duckduckgo and it gave me exactly what I was looking for, and sent back hundreds if not thousands of results.
I know for a fact that google puts its thumbs on the scales for anything related to gun searches. Is that the reason? I don't know, but regardless of what the exact reason is, given the factual knowledge I do have: That google manipulates gun search results... it makes me not trust google. That's not a good business model for them. It's going to be their downfall.
The algorithm is still impartial, and any thumbs on the scale are NOT partisan political. The big thumb on the scale at Google is ad revenue. Companies pay Google for higher rankings. I was in a tiny company that used to pay for a certain number of word-search results. It worked. But it was not at all political. If someone was searching for our core business services, we got bumped. That is all.
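Mechanically, that kind of commercial bump is just a bonus term in the scoring function. A hypothetical sketch (invented names; real ad auctions are far more elaborate):

    def ranked_results(results: list[dict], paid_terms: dict[str, set], query: str) -> list[dict]:
        # paid_terms maps a site to the search terms it has paid to be bumped on.
        def score(result: dict) -> float:
            bonus = 10.0 if query in paid_terms.get(result["site"], set()) else 0.0
            return result["relevance"] + bonus  # a commercial thumb on the scale, not a partisan one
        return sorted(results, key=score, reverse=True)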
Then there is stuff like searching for weapons. Depending on how you worded it, it's not surprising you got few results. I could make a search and get thousands of hits. It is NOT Google trying to hide the existence of WWI British firearms. That's just silly.
The algorithms SHOULD BE open. But I also believe in radical stuff like open source and full transparency, so I'm the weirdo here. But could you understand the algorithm if it were open? Really? It's also proprietary, and represents a lot of a social media site's core advantages. Trade secrets of a sort. Forcing them to be open source would mean anyone could set up their own Facebook (provided they had a literal army of DevOps on their side). I think that would be cool, but it's understandable why they keep it closed. It's not political.
The only agenda they have is making lots of money.
Let me repeat, the only agenda they have is making lots of money.
The American troops in WWI used British and French machine guns and artillery rather than shipping heavy equipment across the Atlantic with them - and also because we didn't have much to send besides small arms. IIRC, the main machine gun in American inventory was still the 1895 "potato digger", and they didn't have many more of these than were required to deal with Pancho Villa. So scanty returns for your search might mean only that the algorithm doesn't show you Americans with foreign machine guns for your exact phrasing.
Sounds like a great way to turn people's social media feeds into a giant circle jerk/echo chamber to me.
Not all algorithms are created equal. Some are good: for example, satellite guidance, navigation, and control algorithms. Social media algorithms that create echo chambers and lead to increased polarization and decreased social cohesion are not.
And the chief complaint is that it isn't echo-y enough. Stuff from other tribes is still getting through. You like someone's kitten pics and next thing you know they post about people needing to be wearing masks. Rage! Rage! Filter this stuff even moar! Aaargh!
Sen. Elizabeth Warren (D–Mass.) says Amazon algorithms are "feeding misinformation loops."
Heh. Algorithms feeding a misinformation loop is a misinformation loop.
She only worries about misinformation when it's not _her_ side's misinformation.
Computer programs are free expression.
You do know that the algorithm can be explicitly set to simply block, not list, not sort, or not include specific results, right? This isn't some question of high-minded AI machine-learning process. The algorithm might simply have an exclude block:
{exclude:
  *hunter*biden*laptop*, *new*york*post*,
  *ivermectin*, *unsafe*vaccines*}
and so on.
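For the skeptical, that idea is a few lines of running code. A sketch in Python using glob matching; the patterns are the commenter's hypothetical examples, not anything documented from a real platform:

    from fnmatch import fnmatch

    EXCLUDE = ["*hunter*biden*laptop*", "*new*york*post*",
               "*ivermectin*", "*unsafe*vaccine*"]

    def visible(results: list[str]) -> list[str]:
        # Drop any result whose text matches one of the exclude globs.
        return [r for r in results
                if not any(fnmatch(r.lower(), pattern) for pattern in EXCLUDE)]

    # visible(["Ivermectin study retracted", "Kitten pics"]) -> ["Kitten pics"]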
Saying "algorithms might be saving democracy" is equally as dumb as saying they're destroying it.
As we old-line mainframe programmers always say:
"Never trust a CPU you can lift"
I, for one, welcome our new algorithm overlords...
You're just saying that to stay alive - - - - - - - -
Monsters from the Id.
The ultimate questions here are: 1. how GOOD are the algorithms that filter our searches and solve problems; and 2. how BIASED are those algorithms, in the sense of steering content to us based not on our own preferences or the quality of the sources but on the preferences and goals of the platform operators (or moles planted inside those platforms), for the purpose of subtly, and without our knowledge, changing our information input?
The degree of item 2 is the inverse measure of item 1: the more biased the algorithm, the less good it is.
Just for the record, they aren't doing it without your knowledge; the ToS clearly state what will happen. They will take data about you, and sell it all over the web.
Social media tech ceased to be a positive when the business model turned from software developers selling a social tool for community 'creators' to selling eyeballs to advertisers using top-down methods of creating artificial 'groups'.
OK, so then, JFree, I think that I see your point... Butt... What advertisements are YOU wanting me to see?
(And how much will you pay me to look at them? EVERYONE has their price, ya know!)
The social media tool developers would sell products to:
existing communities who want to create an online presence (e.g., churches, school alumni groups, gyms, clubs, etc.)
people who saw the opportunity for the internet to gather the like-minded together - from Sturmfront to LGBT activists.
Both of the above can do their own internal content moderating and can do their own linking to other groups. FB etc didn't need to incur any of those expenses.
There were many business models being tested before the VC-based selling-eyeballs model destroyed everything else by eliminating their funding.
I think we can see where the selling-eyeballs model leads: the metaverse. Which to me looks like life turning into perpetual Zoom meetings with avatars and cartoons. Hopefully of jumping sharks.
Pure sophistry. "At worst, [Al Gore Rhythms] help reveal existing divides" (white supremacists won't let us groom 5-year-olds; Twitter permits 'harmful' speech). "At best, they help make us better informed" ('grooming is not taking place at our grade schools... CRT is taught in law schools, not grade schools...').
If you believe those last two sentences, the algorithms worked to misinform you.
tl;dr: garbage in, garbage out is still a thing.
Like when you tell the model that masks work, the model tells you that masks work.
The newspeak version is garbage in, Gospel out.
First hierarchies and now algorithms: the collateral damage of fanatics.
The calendar is evil, too.
Did anyone else watch the last season of Evil on CBS?
To paraphrase Aristotle, if you want to have a meaningful debate, define your terms. The word "Algorithm" encompasses far too many different procedures to have a meaningful conversation about "Algorithms."
When authors write about "Algorithms," I always get the sense that they are liberal arts majors who really don't know what they are talking about.
The fact that algorithms are mistaking you for a libertarian is a strong indication that they are not to be trusted.
Go. To. Hell.
lmfao
At best, they help make us better informed, better engaged, and actually less likely to encounter extremist content.
Better informed about the approved views and less likely to encounter dissenting views is what that means.
If I see the word "democracy" one more time I'm going to throw up in my mouth. Especially if it's used as an axiomatic ultimate good worshipped at the secular religious altar. Democracy is a bunch of ignoramuses voting for one demagogue or another based on who they think the nicer grandpa would be. Then those nice grandpas put their collective boots on everyone's necks. Yay!
I'm with you. I always referred to the channel Democracy Now as Mob Rule Now. 🙂
When you get down to it, this article is saying (implying) that algorithms arise organically from God's green earth, and not from the minds of programmers. In reality, if the programmer is biased, then the algorithm is biased.
And the programmers are biased.
The article seems, in the end, to be saying, "Trust us." My response to that is "Up yours."
"They also save us time and frustration, making our online experiences more organized and coherent. And they expose us to information, art, and ideas we might not otherwise see, all while stopping the spread of content that almost everyone can agree is objectionable. "
What BS. I looked at who wrote this pile of excrement and, sure enough, a lefty "senior editor". Now it's clear why Reason has moved further down my reading list over the past couple of years. Ms. Brown also wrote, in the last few days, a piece entitled "The Bad News for Trump Piles Up."
News alert for Brown: Americans have moved, and are still moving, away from FB and similar sites to platforms where no "algorithm" filters what they can and cannot see. FB is laying off employees.
I haven't used FB in years.
Saying algorithms are bad is like saying the alphabet is bad.
Just plain stupid.
"In 2013, Facebook largely ditched the chronological feed. In its place, the social media company installed an algorithm."
Yeah, I remember that. Everybody who didn't have an insane number of people they were following ended up incredibly ticked, because we freaking LIKED the chronological feed: if you wanted to go back and find something, it would be in a predictable location, instead of the order of the stuff in your feed moving around every time it refreshed.
Replacing it with an algorithmic feed degraded the FB experience for anybody who was strongly engaged with a reasonably sized group of people, or tried to hold conversations in comments on posts.
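The stability being mourned there is easy to see in code. In this sketch (hypothetical field names, not Facebook's implementation), a timestamp sort gives every post a fixed position, while a score sort reshuffles whenever the scores are recomputed:

    def chronological(posts: list[dict]) -> list[dict]:
        # Stable: a post's position changes only when newer posts arrive,
        # so you can scroll back and find things where you left them.
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)

    def algorithmic(posts: list[dict], score) -> list[dict]:
        # Unstable: positions move on every refresh, because the scores do.
        return sorted(posts, key=score, reverse=True)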
FB doesn't tweak the algorithms for the benefit of users. It's strictly for the bottom line: paid content, and making it harder to avoid.