On February 22, Odysseus became the first private spacecraft to make a (largely) successful soft landing on the moon. It launched seven days earlier on another private sector success, SpaceX's Falcon 9 rocket. Developed by Intuitive Machines, the mission carried cargo for NASA in addition to commercial and academic customers. Odysseus landed on its side, however, limiting how much data it could send in the few days it had before being engulfed by the cold lunar night.
The post Photo: A Private Moon Landing appeared first on Reason.com.
In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Nick Gillespie, and Peter Suderman weigh in on the approved House bill that could potentially usher in a ban on popular social media app TikTok in the United States.
01:49—Legislation to ban TikTok
16:39—California's continued high-speed rail boondoggle
33:48—Weekly Listener Question
43:25—Elon Musk launches Starship rocket
46:31—This week's cultural recommendations
Mentioned in this podcast:
"Banning TikTok Would Give the Feds Way Too Much Power," by Robby Soave
"Algorithm Not for Sale," by Liz Wolfe
"TikTok's Opponents Want Chinese-Style Censorship in America," by Matthew Petti
"The U.S. Steel/Nippon Deal Should Be None of Joe Biden's Business," by Eric Boehm
"Hey look a new scientific book abt how TikTok is addicting kids like 'narcotics'!" writes Nick Gillespie on X, formerly Twitter
"California's High-Speed Rail Needs Another $100 Billion. That's a Great Reason Not To Build It." by Eric Boehm
"Annie Duke: Quitting Is Totally Underrated," by Nick Gillespie
"America Is Taking a High-Speed Train to Bankruptcy," by David Ditch
"How Florida Beat California to High-Speed Rail," by Natalie Dowzicky
"The Problem With the 'Abundance Agenda,'" by Christian Britschgi
"We Told You Why and How California's High-Speed Rail Wouldn't Work. You Chose Not To Listen." by Matt Welch
"The Political Class Knew California High-Speed Rail Was B.S., and Supported it Anyway," by Matt Welch
"3 Reasons Obama's High-Speed Rail Will Go Nowhere Fast," by Meredith Bragg and Nick Gillespie
"On the Passing of a Liberal Deregulator," by Matt Welch
"Learning From Kodak's Demise," by Nick Gillespie and Matt Welch
"Milton Friedman Was No Conservative," by Brian Doherty
"Jennifer Burns on Milton Friedman's Legacy," by Nick Gillespie
"Oscar-Nominated Robot Dreams Is a Gentle Animated Love Story About Dogs, Robots, and 1980s New York," by Peter Suderman
Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.
Audio production by Ian Keyser; assistant production by Hunt Beaty.
Music: "Angeline," by The Brothers Steve
The post The CCP Sucks. So Does Banning TikTok. appeared first on Reason.com.
TikTok might be in trouble: The U.S. House of Representatives approved legislation that would force ByteDance, the popular social media app's Chinese parent company, to sell it to a U.S.-based firm, or else it will be banned in the United States. The vote was 352–65. President Joe Biden has indicated that he will sign the legislation, so now it's just a matter of whether the Senate chooses to act.
The margin of the vote's passage and the bipartisan nature of the legislation might make people think that it's popular. This is certainly the framing that Republican proponents have adopted. Oren Cass, a former adviser to Sen. Mitt Romney (R–Utah) and a national conservative policy thinker, described libertarians who oppose the anti-TikTok legislation as "badly isolated" and "hurting their credibility in the conservative coalition."
1/ Can't be overstated how badly isolated libertarians are in this TikTok fight, and how much their kicking and screaming is hurting their credibility in the conservative coalition.
— Oren Cass (@oren_cass) March 12, 2024
On the other hand, three of the most important political and media figures on the right oppose the legislation. They are Tucker Carlson, Elon Musk, and…former President Donald Trump. Each has railed against the federal government's misguided efforts to police social media. Musk said the bill would result in "censorship and control." Carlson fretted that the feds could use this new authority to take similar action against X (formerly Twitter). Trump, who previously supported banning TikTok, is now worried it would hurt him with young people in the 2024 presidential election—definitely a flip-flop, but at least a directionally correct one.
The bill was also opposed by some of the most conservative members of the House (as well as some of the furthest left): Rep. Thomas Massie (R–Ky.), Rep. Matt Gaetz (R–Fla.), and Rep. Marjorie Taylor Greene (R–Ga.), but also Rep. Alexandria Ocasio-Cortez (D–N.Y.), Rep. Ayanna Pressley (D–Mass.), Rep. Ro Khanna (D–Calif.), and so on.
What this really shows is that the battle over TikTok is not a battle between libertarians and everyone else; on the contrary, it's a battle between various members of the bipartisan Washington, D.C., establishment who see direct government intervention as the answer to every problem and another group of people—libertarians included—who recognize that this power is likely to be abused.
Various dubious arguments have been deployed against TikTok, but the establishment's logic this time is that the app's Chinese owners are beholden to the Chinese Communist Party (CCP), and thus it poses a national security risk. To give this argument some credit, it is true that the CCP is an authoritarian menace; the Chinese government absolutely pressures TikTok to censor content about Tiananmen Square, the religious sect Falun Gong, and criticism of Chinese President Xi Jinping. Americans who get their news primarily from TikTok should keep this in mind.
Of course, the U.S. government has also pressured American tech companies to censor content on social media. Thanks to the Twitter Files, the Facebook Files, and other independent investigations, we know that multiple federal agencies—including the FBI, the Centers for Disease Control and Prevention, the Department of Homeland Security, and even the White House—instructed social media platforms to take down contrarian content relating to elections, Hunter Biden, COVID-19, and other subjects. When Joe Biden decided the companies had been insufficiently deferential to his pandemic-related dictates, he accused them of killing people and threatened to take action against them.
If Congress really wanted to do something about government censorship of content on social media, legislators could rein in the feds. They could reduce funding to agencies that engaged in wrongdoing, they could fire wrongdoers, and they could craft guidance for bureaucrats.
Instead, they are singularly focused on TikTok.
The legislation passed by the House would apply to any social media company that is designated as a "foreign adversary controlled application." U.S. law currently defines China, North Korea, Russia, and Iran as foreign adversaries. The bill further stipulates that an app is deemed to be controlled by a foreign adversary if it satisfies at least one of three different criteria: if it is headquartered in one of those countries, if one of those countries owns a 20 percent stake in it, or if the app is subject to "direction or control" by one of the foreign adversaries.
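As a rough illustration only, the bill's three-pronged test can be sketched in code; the function and parameter names below are invented for this sketch, and the statutory language is considerably more detailed than these simplifications suggest:

```python
# Hypothetical sketch of the House bill's "foreign adversary controlled"
# test, for illustration only. Names here are invented; the actual statute
# is more involved.

FOREIGN_ADVERSARIES = {"China", "North Korea", "Russia", "Iran"}

def is_foreign_adversary_controlled(headquarters: str,
                                    ownership_stakes: dict[str, float],
                                    directed_by: set[str]) -> bool:
    """An app meets the definition if ANY one of the three criteria holds."""
    # Criterion 1: headquartered in a foreign adversary country.
    if headquarters in FOREIGN_ADVERSARIES:
        return True
    # Criterion 2: a foreign adversary owns at least a 20 percent stake.
    if any(country in FOREIGN_ADVERSARIES and stake >= 0.20
           for country, stake in ownership_stakes.items()):
        return True
    # Criterion 3: subject to "direction or control" by a foreign adversary.
    return bool(directed_by & FOREIGN_ADVERSARIES)
```

Note that the criteria are disjunctive: a company headquartered outside those four countries could still be covered by a minority stake or by the slippery "direction or control" prong alone.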
It is easy to see how this legislation creates a blueprint for taking future action against social media companies beyond just TikTok. In the wake of the 2016 election, Democratic lawmakers, mainstream media pundits, and national security advisers all accused Facebook of being complicit in Russia's various schemes to sow election-related discord online. Former Director of National Intelligence James Clapper said Russia was more responsible for Hillary Clinton's loss than Trump was. The thrust of this argument was that Facebook CEO Mark Zuckerberg had allowed his platform to be compromised by Russian misinformation.
It's true that if the TikTok bill were to become law today, the federal government would not likely take direct action against Facebook or X tomorrow. But the language in the bill—"direction or control"—is exceedingly slippery. It is not difficult to imagine a future where vengeful bureaucrats accuse a disfavored app of promoting contrarian views and punish it accordingly.
"The House ban of TikTok is not securing our nation," wrote Sen. Rand Paul (R–Ky.) on X. "It's a disturbing gift of unprecedented authority to President Biden and the Surveillance State that threatens the very core of American digital innovation and free expression."
The post Banning TikTok Would Give the Feds Way Too Much Power appeared first on Reason.com.
Could Elon Musk's Twitter jokes about sexual harassment kickstart a chain of events that ends with large swaths of the administrative state being struck down by the courts? We're further along this timeline than you might think.
Earlier this month, SpaceX—where Musk is co-founder and CEO—filed a lawsuit in federal court against the National Labor Relations Board (NLRB) and its members arguing that the independent agency's insulation from presidential authority and its system for deciding labor disputes is unconstitutional.
The company's lawsuit came one day after the NLRB filed its own complaint against SpaceX for firing several employees who'd circulated an open letter demanding the company condemn Musk's "harmful Twitter behavior."
The harmful behavior mainly involved Musk joking about SpaceX's settlement of a sexual harassment claim against him brought by a former company employee, per reporting from The New York Times.
SpaceX claimed the employees' anti-Musk activism was disruptive and merited termination.
After being fired, the employees filed NLRB complaints, arguing SpaceX violated their rights under federal labor regulations. That eventually led to the board filing a complaint against SpaceX this January.
As at many independent federal agencies, the NLRB's complaint against SpaceX will be heard by an administrative law judge who works for the agency itself. If SpaceX doesn't like the judge's decision, it must appeal to the NLRB members themselves.
Neither NLRB members nor its administrative law judges can be removed by the president at will. They can only be fired for cause. The agency also uses its adjudication process to set substantive policies that bind private parties.
"In effect, the NLRB is acting as the lawmaker, the prosecutor, the jury, the judge, and the appellate court before a company has a chance to get to a real court," says Oliver Dunford, an attorney at the Pacific Legal Foundation (PLF).
SpaceX's lawsuit argues that this whole setup is unconstitutional. The inability of the president to fire NLRB members and judges violates his constitutional powers to hire and fire "officers of the United States." Its adjudication process violates the Seventh Amendment's guarantee of jury trials. The fact that the NLRB prosecutes violations of policies it sets in front of its administrative law judges and hears appeals of those judges' decisions all violates due process guarantees.
SpaceX's arguments against the NLRB have implications for many other federal agencies that are structured the same way.
"To say this structure itself violates the Constitution would be saying a large swath of the federal government is unconstitutional," one attorney told Bloomberg Law.
The U.S. Supreme Court is currently considering a case that challenges the Securities and Exchange Commission's structure and adjudication process on similar grounds. PLF is suing the Consumer Product Safety Commission and the Environmental Protection Agency over related issues.
Claims that SpaceX's arguments would end the administrative state are overblown, says Dunford. Most federal administrative law judges are within the Social Security Administration dealing with questions about government benefits, he says.
That's much different than independent agencies that regulate private rights. It's when government bodies are telling private parties what to do, and punishing them for non-compliance, that due process and jury trial rights are relevant, argues Dunford.
The post Will Elon Musk's Twitter Sex Jokes End the Administrative State? appeared first on Reason.com.
Elon Musk inspires strong emotions in people. Some admire his pugnacious entrepreneurial spirit, advocacy of open debate, and willingness to challenge politicians. Others question his business practices, oppose his embrace of free speech (or tire of his inconsistency on the issue), or recoil from his hard-to-pin-down politics. But even as states embrace marijuana legalization and lawmakers who fight over everything else find common ground over the therapeutic potential of psychedelic drugs, Musk is coming under attack for his taste in intoxicants.
"The world's wealthiest person has used LSD, cocaine, ecstasy and psychedelic mushrooms, often at private parties around the world, where attendees sign nondisclosure agreements or give up their phones to enter," Emily Glazer and Kirsten Grind reported over the weekend for The Wall Street Journal. "Musk has previously smoked marijuana in public and has said he has a prescription for the psychedelic-like ketamine."
Tales of drug use are attributed "to people who have witnessed his drug use and others with knowledge of it," which sounds like another term for folks with an axe to grind. Of course, Musk has been open about at least some drug use, including smoking marijuana with Joe Rogan. Last summer, he suggested ketamine is a better treatment option than antidepressants after the Journal reported he "microdoses ketamine for depression, and he also takes full doses of ketamine at parties."
And good for Elon Musk. He's not alone in seeing both recreational and therapeutic benefits in many intoxicants disapproved of by lawmakers and scolds.
"Two-thirds of Americans with treatment-resistant anxiety, depression or PTSD believe that psychedelics should be made available for therapeutic means," according to a 2021 survey conducted by The Harris Poll on behalf of Delic Holdings Corp., which incorporates psychedelics in treatments at its clinics.
A 2022 YouGov poll found that 28 percent of Americans have actually tried one or more psychedelic drugs, including LSD, psilocybin (mushrooms), MDMA, mescaline, ketamine, DMT, and salvia. More than half (54 percent) of respondents supported allowing research into psychedelic substances for military members with PTSD.
Such efforts are already underway and have been promising. Research published last summer in the Journal of the American Medical Association found that a single 25-mg dose of psilocybin "was associated with a rapid and sustained antidepressant effect." Researchers reported "no serious treatment-emergent adverse events."
In September 2023, a paper published in Nature Medicine reported that, relative to therapy alone, therapy accompanied by MDMA "reduced PTSD symptoms and functional impairment in a diverse population with moderate to severe PTSD and was generally well tolerated."
In 2022, writing for a Harvard Medical School blog, Dr. Peter Grinspoon noted that, while ketamine should be used under medical supervision, "relief from [treatment-resistant depression] with ketamine happens rapidly. Instead of waiting for an SSRI to hopefully provide some relief over the course of weeks, people who are suffering under the crushing weight of depression can start to feel the benefits of ketamine within about 40 minutes."
The therapeutic potential of these drugs is one of the few things that get U.S. politicians to work across party lines (at least when it comes to increasing rather than decreasing liberty). Reps. Dan Crenshaw (R–Texas) and Alexandria Ocasio-Cortez (D–N.Y.) and Sens. Cory Booker (D–N.J.) and Rand Paul (R–Ky.) have sought to make it easier to study the medical benefits of psychedelics with an eye to making them more readily available for legal use. They haven't had much success at the federal level, though some states have been more open.
And the feds pose a problem.
"Illegal drug use would likely be a violation of federal policies that could jeopardize SpaceX's billions of dollars in government contracts," add the Journal's Glazer and Grind.
Let's remember that SpaceX largely dominates America's presence beyond the atmosphere. That's why the Planetary Society observed that "without SpaceX, the only U.S. company currently capable of carrying cargo to the ISS would currently be Northrop Grumman, and NASA would still be reliant on the Russian Soyuz for crew transportation."
If we're going to link performance to attitudes about drugs, maybe Musk should be setting the tone for NASA. Perhaps a microdosing schedule would get federal employees out of their ruts and set their creative juices flowing.
Glazer and Grind also report that "some executives and board members at his companies and others close to the billionaire" are concerned that drugs may be responsible for "his contrarian views, unfiltered speech and provocative antics."
Maybe that's true; intoxicants can certainly change our behavior, sometimes for the worse. Or maybe the tech entrepreneur is naturally contrarian, unfiltered, and provocative. In 2022, Musk told an interviewer that he and other SpaceX employees were required by the feds to submit to random drug tests for a year after he publicly smoked weed with Joe Rogan. Whatever he's doing now, that probably would have curbed his chemical indulgences for a while. There's little evidence that Musk's critics consider that period a golden age when they were happier with his conduct.
That's not to say Elon Musk is immune from criticism. He's been called out as a recipient of corporate welfare, attacked for championing free speech, criticized for inconsistency in his tolerance for free speech, and questioned over his business ethics. Some executives apparently find him hard to work with. If he's a human train wreck, he wouldn't be the first one to find success despite (or maybe even because of) character flaws.
Reports that Musk uses drugs recreationally and therapeutically aren't, in themselves, proof that they're the source of his quirks. If he's indulging rather than overindulging, they may enhance his ability to function. A straight-edge Elon Musk would not necessarily be an improvement.
It's possible that Musk really does need to sober up. Or maybe he's a flawed person functioning better than he would otherwise because of the relaxation offered by the occasional hit of acid and the relief from microdosing ketamine. Drugs are tools, and reports that somebody likes reaching into the toolbox doesn't tell us whether they're being misused or just used to good effect.
The post Let Elon Musk Enjoy Drugs appeared first on Reason.com.
In November, SpaceX conducted the second unmanned test of Starship, with mixed success. Unlike in the first test, all 33 engines fired and the super heavy booster and Starship spacecraft successfully separated. Thirty seconds after separation, the rocket experienced "rapid unscheduled disassembly" (read: explosion) with the spacecraft following soon after. The company will file a mishap report to the Federal Aviation Administration and may have to wait months for regulatory approval to conduct a third test.
The post Photo: Is SpaceX Ready for Liftoff? appeared first on Reason.com.
In late November, Tesla Motors delivered the first 10 Cybertrucks to customers, over four years after it was first unveiled as a concept prototype and two years after it was originally supposed to begin production. The odd, angular electric vehicle with a stainless steel exterior attempts to marry Tesla-style sports car performance with the rugged function of a pickup truck. The Wall Street Journal characterized it as "a giant, steel triangle on wheels."
Depending on options, the Cybertruck can deliver 600–845 horsepower and costs between $60,000 and $100,000—clearly making it a luxury purchase and not one for the shopper on a budget.
So why, then, do some models qualify for federal tax credits?
As part of President Joe Biden's pledge to transition the U.S. to greener sources of energy, the 2022 Inflation Reduction Act (IRA) established $7,500 tax credits for purchases of electric vehicles (E.V.s).
"Working families will be able to use tax credits that make electric vehicles more affordable," brags the White House's Clean Energy webpage. "Purchasing an electric vehicle (EV) can save families thousands of dollars on fuel costs over the life of their car."
But according to FuelEconomy.gov, maintained by the U.S. Department of Energy, the Cybertruck can qualify for the tax credits as well.
The site notes that qualifying models must be assembled in North America and are limited to a retail price of $80,000, the same parameters put on any vehicles that hope to qualify. Since the Cybertruck is assembled in Tesla's Texas Gigafactory and two of its current options retail under $80,000, it could indeed qualify.
Neither the IRS nor the Department of Energy responded to Reason's requests for confirmation that the Cybertrucks would qualify, but Tesla clearly thinks so: The Cybertruck order page on the automaker's website lists "purchase price" alongside prices with "probable savings," which "assume IRA Federal Tax Credits up to $7,500 for Rear-Wheel Drive and All-Wheel Drive and est. gas savings of $3,600 over 3 years."
It's worth wondering why the Cybertruck should qualify for tax credits that are nominally intended to benefit people who want to switch to an electric vehicle and need a little help. It stands to reason that anybody both willing and able to spend around the median annual household income on a futuristic-looking luxury truck should have to cover the entire cost themselves, without any help from the American taxpayer.
Earlier this year, Biden promoted the E.V. tax credits by tweeting a photo of himself driving a GMC Hummer EV, even though no version of the Hummer EV available at the time cost less than $84,000. For the 2024 model year, GMC is planning to offer the base model EV2 starting at $79,995—$5 below the cutoff. Similarly, Tesla has set the all-wheel-drive Cybertruck's retail price at $79,990.
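The cutoff arithmetic is simple enough to sketch. The check below uses only the two criteria the article mentions (North American assembly and the $80,000 retail-price cap); the real IRA rules also include income limits, battery-sourcing requirements, and other conditions omitted here, and the function name is invented for illustration:

```python
# Back-of-the-envelope eligibility check for the IRA's $7,500 EV credit,
# simplified to the two tests discussed in the article. Not a complete
# statement of the actual rules.

MSRP_CAP = 80_000
CREDIT = 7_500

def ira_credit(msrp: int, assembled_in_north_america: bool) -> int:
    """Return the credit a given trim could qualify for under these two tests."""
    if assembled_in_north_america and msrp <= MSRP_CAP:
        return CREDIT
    return 0

# The all-wheel-drive Cybertruck at $79,990 slips in $10 under the cap;
# an $84,000 Hummer EV does not.
cybertruck = ira_credit(79_990, True)
hummer = ira_credit(84_000, True)
```

Pricing a trim a few dollars under the cap is exactly the behavior the cutoff invites, which is why both GMC and Tesla land just beneath $80,000.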
The post Tesla's Expensive New Cybertruck May Qualify for $7,500 in E.V. Tax Credits appeared first on Reason.com.
On Saturday morning, SpaceX conducted its second test launch of Starship, the super heavy–lift rocket that could one day carry astronauts to the moon and Mars. The vehicle lifted off without incident from SpaceX's Starbase, on the southern tip of Texas, just after 7 a.m. local time. A new water-deluge system deflected the heat of the booster's 33 Raptor engines, preventing the kind of launchpad damage that occurred during the first launch last April—that test ended in self-detonation four minutes into flight, when the ship and the booster failed to separate. For the second run, SpaceX converted the rocket to a "hot staging" system, with the ship's six Raptor engines starting to fire, blasting the top of the booster, as the separation process began. This time, the uncoupling was successful. The booster broke apart shortly thereafter. The second stage carried on another five minutes, rising 90 miles skyward before exploding.
More than twice as powerful as the Apollo program's Saturn V—and designed to be reusable to boot—Starship is already a marvel of human planning and perseverance. But many more launches must occur before it can carry humans into space. It will be a challenge to establish that Starship is reliable, that it can refuel in orbit (a key part of the plan), and that it can safely land on, and take off from, the moon.
Yet all that might be the easy part. Everything depends on SpaceX's ability to jump through regulatory hoops, and the federal bureaucracy's ability to keep pace with a driven private company.
SpaceX was ready for the second test of Starship by early September. Two weeks later, the U.S. Fish and Wildlife Service acknowledged that it had yet to begin the environmental review needed for a launch license. "That is unacceptable," Elon Musk fumed on X. "It is absurd that SpaceX can build a giant rocket faster than they can shuffle paperwork!" Absurd, yes—but hardly unexpected.
The agency moved swiftly—by the standards of the federal government—issuing its review eight weeks later. The main revelation, in line with prior such reviews, is that the Starship program has remarkably little impact on the environment. The new deluge system is of a piece. Most of the more than 300,000 gallons of water emitted during a launch is vaporized by the booster's flame and floats harmlessly away. Most of the water that's left is collected in containment vats. The small quantity of remaining runoff would probably be safe to drink.
Reading these reports, you learn not that SpaceX poses a risk to the environment, but that over-the-top environmental regulations pose a risk to SpaceX. A firm devoted to building rockets finds itself counting birds, combing the beach for sea turtle eggs, trying to calculate the (obviously minuscule) odds that one of its projectiles will hit a whale, and measuring for vibrations near decrepit stone pilings. And woe unto SpaceX should one of its launches, or even one of its trucks, somehow kill one of the dozens of (not endangered) piping plovers known to inhabit the area. "I don't think the public is aware of the madness that goes on," Musk said during a recent interview. He claimed that, as part of an environmental review for launches on the West Coast, SpaceX had to kidnap a seal, strap it down, put headphones on it, and see if the sound of sonic booms made it upset. ("The amazing part," Musk reported, with a chuckle and a photograph, "is how calm the seal was.")
Even if SpaceX can keep the fish and wildlife regulators satisfied and moving—and litigious environmental activists at bay—it could still find itself thwarted by the Federal Aviation Administration (FAA). As Musk noted in 2021, the FAA has historically needed to issue licenses only "for a handful of expendable launches per year." Under the rules designed for that launch pattern, he complained, "humanity will never get to Mars." Although it has sought to improve its approval process, the agency is still struggling. In order to accelerate work on the second Starship launch, officials had to delay work on launches of SpaceX's Falcon 9 rocket—at present the firm's crucial driver of revenue. The problem is set to get worse, as both SpaceX and its competitors step up their launch frequencies.
None of this might seem that important. Doesn't SpaceX dominate the launch market? Isn't the European space industry in shambles? Hasn't the Russian space program become an embarrassment? But as William Gerstenmaier, a SpaceX vice president, points out, the regulatory delays add up. "And eventually," he warns, "we will lose our lead and we will see China land on the moon before we do." The Chinese Communist Party wants, it seems, to seize exclusive control of the moon's water-rich south pole. Along the way, it will probably not slow down for the sake of a few members of a vulnerable bird species.
The United States can uphold rigorous safety standards, guard the environment, and beat China to the moon all the same. The latest Starship test launch shows as much. Unless, that is, it doesn't. Many things could go wrong. NASA could fail to do its part, for example, or Musk's impulsive social media posts could come to haunt SpaceX. But the biggest danger may be a toxic combination of too much red tape, too little state capacity, and a lack of political will to address either one.
The post SpaceX Makes Progress on Second Test of Starship appeared first on Reason.com.
Most Americans want astronauts to get back to the moon and eventually to Mars, but those expeditions will keep getting pushed back unless the Federal Aviation Administration (FAA) can get its act together. Every rocket—regardless of who makes it—that soars into space must get licensing and approval from the FAA, but SpaceX claims that the government agency is both understaffed and too slow-moving.
SpaceX launches Falcon rockets roughly every four days; it is the industry leader by far. But as SpaceX and its competitors—Blue Origin, United Launch Alliance, and other smaller companies—increase their flight rates, the industry is witnessing firsthand how the government agency is hindering NASA's ability to get back to the moon.
Back in April, SpaceX conducted the first test launch of Starship, the vehicle that NASA is relying on for the later stages of its Artemis program, which seeks to explore the lunar surface. Starship successfully made it airborne but then started shifting uncontrollably, forcing SpaceX to use the onboard flight termination system. This resulted in the destruction of the launch pad in addition to debris flying near the South Texas launch facility. After assessing the destruction, the FAA required the company to address a multitude of issues before SpaceX would be permitted to launch Starship again.
Elon Musk now claims that all of those concerns were addressed, but the FAA has still failed to approve another Starship launch. It took more than two years to get the first launch of Starship approved and there is no telling how long it could take to get the second one through. Not only is this making SpaceX rethink trying to get other rockets approved for takeoff at the moment, but it is also delaying the Artemis program. Ultimately, Artemis III will not happen without Starship—which means the hope of getting back to the moon by the end of 2025 is looking more and more unlikely.
Tim Hughes, senior vice president at SpaceX, told the Washington Post, "We'd very much like the government to be able to move as quickly as we are. If you're able to build a rocket faster than the government can regulate it, that's upside down, and that needs to be addressed. So we think some regulatory reforms are needed."
Regulatory reform of the FAA could take on a few different forms, but SpaceX suggests that the government agency double its licensing staff. Anything that would streamline the cumbersome approval process at this point would help—even if that means throwing the process out and starting from scratch.
The snail's pace at which the FAA gets launch licenses approved is putting the private space industry in jeopardy. Of course, launches should be reviewed for safety, but there's no excuse for why that assessment takes years. "Next year could be a pretty dynamic time with lots of providers in spaceflight," a SpaceX official told Ars Technica, so let's hope that the FAA speeds up its process or just gets out of the way entirely.
The post SpaceX: FAA Is Slowing Progress to the Moon appeared first on Reason.com.
Yoel Roth, who used to be in charge of "trust and safety" at a social media company that used to be called Twitter, is worried about "coercive influences on platform decision making." But his concern is curiously selective. Unobjectionably, he sees coercion when foreign governments threaten to arrest uncooperative platform employees. More controversially, he also sees coercion when Republicans criticize content moderation decisions. When Democrats in positions of power pressure social media platforms to suppress politically disfavored content, however, Roth sees no cause for concern.
Roth begins his confused and confusing New York Times essay on this subject by airing a personal grievance that the headline also highlights: "Trump Attacked Me. Then Musk Did. It Wasn't an Accident." When Roth worked at Twitter, now known as X, he "led the team that placed a fact-checking label on one of Donald Trump's tweets for the first time." After the January 6, 2021, riot by Trump supporters at the U.S. Capitol, Roth "helped make the call to ban his account from Twitter altogether."
Because of Roth's involvement in that first decision, Trump "publicly attacked" him. After Elon Musk acquired Twitter in 2022, prompting Roth to resign from his position, Musk "added fuel to the fire." As a result, Roth says, "I've lived with armed guards outside my home and have had to upend my family, go into hiding for months and repeatedly move."
It goes without saying that no one should have to worry about threats to his personal safety because he made controversial decisions as an employee of a social media company. And it is certainly true that Trump, both before and after the riot he inspired, has never shown any concern about the risk posed by his combustible combination of inflammatory rhetoric, personal attacks, and reality-defying claims. But Musk's role in all of this is more ambiguous, since the "fuel" he added to "the fire" consisted mainly of internal Twitter communications that he disclosed to several journalists, who presented them as evidence that federal officials had pressured the platform to suppress speech those officials viewed as dangerous. Although a federal judge and an appeals court saw merit in the claim that such meddling violates the First Amendment, Roth conspicuously ignores that concern even as he bemoans "coercive influences on platform decision making."
The real threat, as Roth sees it, is that platforms have abandoned their responsibility to police "misinformation" and "disinformation" in response to conservative criticism, intimidation, and political pressure. He presents his own experience as emblematic of that problem.
That experience began with Roth's decision to slap a warning label on a May 2020 tweet in which Trump claimed that mail-in ballots are an invitation to fraud and that their widespread use would result in a "Rigged Election." After senior White House adviser Kellyanne Conway "publicly identified me as the head of Twitter's site integrity team" and the New York Post "put several of my tweets making fun of Mr. Trump and other Republicans on its cover," Roth notes, the president "tweeted that I was a 'hater.'" That triggered "a campaign of online harassment that lasted months, calling for me to be fired, jailed or killed."
That result, Roth avers, was "part of a well-planned strategy"—"a calculated effort to make Twitter reluctant to moderate Mr. Trump in the future and to dissuade other companies from taking similar steps." The strategy "worked," he says, as evidenced by Twitter CEO Jack Dorsey's reluctance to shut down Trump's account (as Roth recommended) based on mid-riot tweets—including one criticizing Vice President Mike Pence for refusing to interfere in the congressional ratification of Joe Biden's victory—that egged on the rioters rather than trying to calm them. Trump "was given a 12-hour timeout instead" before he was finally banned from Twitter two days after the riot. Roth complains that Twitter also was unjustifiably patient with "prominent right-leaning figures" such as Rep. Marjorie Taylor Greene (R–Ga.), who "was permitted to violate Twitter's rules at least five times before one of her accounts was banned in 2022."
Since Trump is a petty, impulsive man who has always been quick to lash out at anyone who irks him, the suggestion that he was executing "a well-planned strategy" probably gives him too much credit. And without minimizing his rhetorical recklessness, which was at the center of the case against him in his well-deserved second impeachment, it is fair to question the comparison that Roth draws between Trump's pique at him and cases in which government officials have explicitly used their coercive powers to impose their will on social media platforms.
"Similar tactics are being deployed around the world to influence platforms' trust and safety efforts," Roth writes. "In India, the police visited two of our offices in 2021 when we fact-checked posts from a politician from the ruling party, and the police showed up at an employee's home after the government asked us to block accounts involved in a series of protests." However unseemly and ill-advised, calling Roth a "hater" on Twitter is qualitatively different from deploying armed agents of the state to intimidate the company.
Roth offers another example of government intimidation that he sees as analogous to what Trump did to him: "In 2021, ahead of Russian legislative elections, officials of a state security service went to the home of a top Google executive in Moscow to demand the removal of an app that was used to protest Vladimir Putin. Officers threatened her with imprisonment if the company failed to comply within 24 hours. Both Apple and Google removed the app from their respective stores, restoring it after elections had concluded." Again, Trump's complaint about Twitter's warning label is not in the same category as threatening tech company employees with imprisonment.
Roth not only glides over the distinction between criticism and threats; he equates constitutionally protected speech with government bullying. "In the United States," he says, "we've seen these forms of coercion carried out not by judges and police officers, but by grass-roots organizations, mobs on social media, cable news talking heads and—in Twitter's case—by the company's new owner."
Before we get into Roth's beef against Musk, it is worth emphasizing that "grass-roots organizations," "mobs on social media," and "cable news talking heads" are not engaging in the same "forms of coercion" as cops dispatched to enforce the government's will. In fact, they are not engaging in "coercion" at all; they are exercising their First Amendment rights. Their criticism may be misguided, unfair, or overheated, but it is undeniably covered by "the freedom of speech" unless it crosses the line into a legal exception such as defamation or "true threats."
As for Musk, Roth complains that he disclosed "a large assortment of company documents"—"many of them sent or received by me during my nearly eight years at Twitter"—to "a handful of selected writers." Although the "Twitter Files" were "hyped by Mr. Musk as a groundbreaking form of transparency," Roth says, they did not reveal anything significant about how Twitter decided which kinds of speech were acceptable. He cites TechDirt founder Mike Masnick's judgment that "in the end 'there was absolutely nothing of interest' in the documents."
Although Roth assures us there is nothing to see here, fair-minded people might disagree. The Twitter Files showed, for example, that the platform eagerly collaborated with the Biden administration's efforts to suppress "misinformation" about COVID-19, which sometimes involved truthful statements that were deemed inconsistent with guidance from the Centers for Disease Control and Prevention (CDC). This month the U.S. Court of Appeals for the 5th Circuit concluded that such automatic deference to the CDC, which also was apparent at other platforms, qualified as "significant encouragement" of censorship by a government agency, "in violation of the First Amendment."
The platforms "came to heavily rely on the CDC," the 5th Circuit noted. "They adopted rule changes meant to implement the CDC's guidance." In many cases, social media companies made moderation decisions "based entirely on the CDC's say-so." In one email, for example, a Facebook official said "there are several claims that we will be able to remove as soon as the CDC debunks them" but "until then, we are unable to remove them."
The 5th Circuit said the CDC's role in content moderation, although inappropriate, was "not plainly coercive," mainly because the agency had no direct authority over the platforms. But when it came to pressure exerted by the White House, the court saw evidence of "coercion" as well as "significant encouragement." According to the 5th Circuit, the administration's relentless demands that Facebook et al. do more to control "misinformation," which were coupled with implicit threats of punishment, crossed the line between permissible government speech and impermissible intrusion on private decisions.
Roth does not even mention that decision, even to criticize it. But if President Trump was abusing his bully pulpit when he called Roth a "hater," what was President Biden doing when he accused social media companies of "killing people" by allowing speech that discouraged vaccination against COVID-19?
Surgeon General Vivek Murthy lodged the same charge while threatening Facebook et al. with "legal and regulatory measures" if they failed to do what the administration wanted. Other administration officials publicly raised the prospect of antitrust action, new privacy regulations, and increased civil liability for user-posted content. Meanwhile, behind the scenes, White House officials were persistently pestering social media companies, demanding that they delete specific posts and banish specific users while alluding to Biden's continuing displeasure at insufficiently strict speech regulation.
Roth portrays Trump's whining, conservative criticism, and Musk's avowed "transparency" as part of a coordinated "campaign" to discourage platforms from suppressing "misinformation." In his view, these are all "coercive influences on platform decision making." But he evidently sees nothing troubling about the Biden administration's crusade against "misinformation," which the 5th Circuit thought plausibly amounted to "coercion" because it was backed by implied threats of government retaliation. That definition of coercion seems a lot more reasonable than Roth's.
Speaking of definitions, Roth seems confident that "misinformation" can be readily identified, although that category is vague and highly contested. Even if he trusts the Biden administration to decide which speech qualifies as "misinformation," he should be concerned about how a second Trump administration—or the foreign authoritarians he mentions—might apply it.
Roth's double standard is also apparent when he decries the intimidating effect of congressional inquiries into the alleged anti-conservative bias of major social media platforms. He does not acknowledge that congressional pressure on tech companies is a bipartisan phenomenon, with Republicans arguing that platforms discriminate against right-wing speech and Democrats arguing that they should be doing more to suppress "misinformation" and "hate speech."
Members of both parties want to override the editorial judgment of social media platforms, which is supposed to be protected by the First Amendment. Their competing demands suggest the wisdom of a general rule against government interference with content moderation decisions, regardless of the ideological motivation behind it. But Roth is so focused on the people who have wronged him and the social media users who offend him that he cannot see the merits of that approach.
The post A Former Twitter Executive's Highly Selective Concern About 'Coercive Influences' on Social Media appeared first on Reason.com.
Google's trial begins today. The Department of Justice has sued the tech giant for cornering more than 90 percent of the search engine market. The government argues that this dominance reflects anticompetitive practices, while Google counters that its search engine is simply a preferred service.
"It's the government's first major monopoly case to make it to trial in decades and the first in the age of the modern internet," notes NPR. "The Justice Department's case hinges on claims that Google illegally orchestrated its business dealings, so that it's the first search engine people see when they turn on their phones and web browsers. The government says Google's goal was to stomp out competition."
The DOJ is beginning its antitrust trial against $GOOGL and it's being billed as the biggest antitrust case in 20 years. @EamonJavers has the details:
— Squawk Box (@SquawkCNBC) September 12, 2023
At issue are the deals in place between Google and companies like Apple, Samsung, and Verizon. Google pays billions of dollars to these device makers and carriers to ensure that its search engine is the default on their phones. The federal government claims these arrangements are illegal and unfair to smaller search engines, like DuckDuckGo.
Google has countered that its search engine is more popular by far—indeed, if it were not the default search engine on most cell phones, there is little doubt that the overwhelming majority of customers would choose it anyway. As The Wall Street Journal points out, Windows computers do not come preloaded with Google, yet most users swiftly switch to it anyway.
The suit was first brought by the Trump administration. Former Attorney General William Barr said the lawsuit "strikes at the heart of Google's grip over the internet for millions of American consumers, advertisers, small businesses and entrepreneurs beholden to an unlawful monopolist."
There is little doubt that Google commands a significant market share and that its deals with cell phone manufacturers come at the expense of rival search engines. But the relevant question is whether this hurts consumers. For most people, Google is the entry point to the internet. If they want a different way to search, plenty of alternatives are available. The DOJ's suit implicitly argues that the federal government knows us better than we know ourselves—even though our revealed preferences suggest that we like Google perfectly fine.
Foes of government overreach should be rooting for another big loss. The Google trial is set to last for three months. Judge Amit P. Mehta, an Obama appointee, will decide the case.
Sen. Elizabeth Warren (D–Mass.) wants the government to investigate Elon Musk following news that he prevented Ukraine from accessing Starlink when the country tried to attack a Russian warship. According to Bloomberg:
"The Congress needs to investigate what's happened here and whether we have adequate tools to make sure foreign policy is conducted by the government and not by one billionaire," the Massachusetts Democrat said Monday at the Capitol.
Musk, the chief executive officer of SpaceX, is expected to be among the technology industry chiefs to attend a closed-door summit with senators at the Capitol on Wednesday.
Warren, a member of the Armed Services Committee, said she also wants the Defense Department to look into its contractual relationship with the company.
Perhaps Warren could investigate why Congress has outsourced its war-making powers to the executive branch. Technically, it's Warren and her colleagues who are supposed to be responsible for such foreign policy decisions.
The Food and Drug Administration (FDA) has approved an updated coronavirus vaccine, which is designed to counter the XBB.1.5 strain of omicron. That strain is no longer dominant, though the FDA believes the new vaccine will still offer significant cross-protection. "Like earlier versions, they're expected to be most protective against COVID-19's worst consequences rather than mild infection," reports the Associated Press.
That admission, however, is a good reminder that many of the vaccine mandates were premised on the idea that vaccination would significantly reduce the spread of COVID-19 by preventing transmission to other people. People were encouraged—and in many cases, forced—to get vaccinated, not just for their own sakes, but also in the service of public health.
The federal government should continue to rapidly approve new COVID-19 vaccines and therapeutics so that anyone who wants them is free to take them. But it must never again go down the dark path of denying this choice to millions of Americans.
The post The Trial Begins: DOJ Sues Google Over Search Engine Dominance appeared first on Reason.com.
'Free speech absolutist' Elon Musk is threatening to sue the Anti-Defamation League (ADL). The group has allegedly tried to "kill" the social media platform X, formerly known as Twitter, with false accusations of antisemitism and advertiser boycotts, according to Musk. In a series of posts, the billionaire said that the ADL's pressure campaign on advertisers to leave X over its content moderation policies was primarily responsible for a 60 percent drop in the site's U.S. ad revenue.
"If this continues, we will have no choice but to file a defamation suit against, ironically, the 'Anti-Defamation' League," said Musk. "If they lose the defamation suit, we will insist that they drop [sic] the 'anti' part of their name."
If this continues, we will have no choice but to file a defamation suit against, ironically, the "Anti-Defamation" League.
If they lose the defamation suit, we will insist that they drop the the "anti" part of their name, since obviously …
— Elon Musk (@elonmusk) September 4, 2023
In a subsequent post, Musk suggested that the ADL was responsible for destroying $22 billion of Twitter's value.
Based on what we've heard from advertisers, ADL seems to be responsible for most of our revenue loss.
Giving them maximum benefit of the doubt, I don't see any scenario where they're responsible for less than 10% of the value destruction, so ~$4 billion.
Document discovery of…
— Elon Musk (@elonmusk) September 4, 2023
The dispute between the ADL and X is not new.
Ever since Musk took over the platform late last year, the civil rights organization has accused the company of allowing hateful and antisemitic speech to proliferate through overly lax content moderation policies and practices.
The ADL was one of the groups reporting a dramatic rise in the use of racist and homophobic slurs on Twitter after Musk's acquisition. It also complained that the company was now less responsive to its requests to remove content.
Back in December 2022, Reason's Jacob Sullum argued that the rise in hate speech reported by the ADL and others was being exaggerated. The few thousand additional tweets containing racist and antisemitic slurs were still a tiny fraction of the content on the site.
Nevertheless, the ADL has continued to pressure X to be more aggressive in taking down what it deems hateful content. In a report published last month, the ADL even accused the social media site of running a "hate machine" for suggesting people follow accounts that have tweeted antisemitic content and memes.
The ADL, alongside other civil rights groups, had participated in other pressure campaigns aimed at getting advertisers to leave Facebook over its (supposedly) lax content moderation. In recent years, critics of the ADL also have accused it of being overly partisan and using dodgy methodology to inflate the number of antisemitic incidents it tracks.
In July, X sued the nonprofit Center for Countering Digital Hate over what it claims were baseless accusations of failing to police hate speech.
Musk surely has some cause to dispute a lot of the claims the ADL is making about X. His company is within its rights to decline the group's content moderation demands. Nevertheless, the ADL is also well within its rights to argue Musk is running a "hate machine" and lobbying advertisers to take their business elsewhere. By threatening legal action against the group, Musk is ceding whatever moral high ground he may have had as a defender of free speech.
Instead, he's suggesting he might use the court system to bully the group into silencing its criticism of his company. That's hardly the action of a "free speech absolutist."
The budget deficit is set to double this year, The Washington Post reports:
After the government's record spending in 2020 and 2021 to combat the impact of covid-19, the deficit dropped by the greatest amount ever in 2022, falling from close to $3 trillion to roughly $1 trillion. But rather than continue to fall to its pre-pandemic levels, the deficit then shot upward. Budget experts now project that it will probably rise to about $2 trillion for the fiscal year that ends Sept. 30, according to the Committee for a Responsible Federal Budget, a nonpartisan group that advocates for lower deficits.
Explosive piece in the Washington Post (by @JStein_WaPo assisted by @BudgetHawks) showing that the budget deficit is set to double to $2 trillion this year.
This is basically unprecedented in U.S. history during relative peace and prosperity.
— Brian Riedl (@Brian_Riedl) September 4, 2023
This explosion in debt is coming despite President Joe Biden's repeated claims that he's actually cutting the federal government's fiscal deficit.
Americans are increasingly saying "skool suks." Recent public opinion polls show that young Americans' attitudes toward college are turning increasingly negative, according to The New York Times:
The percentage of young adults who said that a college degree is very important fell to 41 percent from 74 percent. Only about a third of Americans now say they have a lot of confidence in higher education. Among young Americans in Generation Z, 45 percent say that a high school diploma is all you need today to "ensure financial security." And in contrast to the college-focused parents of a decade ago, now almost half of American parents say they'd prefer that their children not enroll in a four-year college.
Perhaps colleges being some of the last institutions to cling to insane COVID restrictions is playing a role:
At the @UMich, students testing covid positive must leave their dorms for 5 days & live in the community. A hotel room or a relative's house is ok.
This cruel policy is designed to spread covid from the university into the wild. It won't stop covid from spreading @umich.
— Jay Bhattacharya (@DrJBhattacharya) September 3, 2023
The post 'Free Speech Absolutist' Elon Musk Threatens Anti-Defamation League With Defamation Lawsuit appeared first on Reason.com.
Special counsel Jack Smith subpoenaed Twitter for records related to former President Donald Trump's account, as part of Smith's investigation into Trump's actions surrounding the 2020 election and January 6 (actions that prompted a federal indictment against Trump last week). Smith obtained a search warrant for the records back in January of this year but Twitter missed the compliance deadline and was fined $350,000, a federal court decision unsealed yesterday shows. Twitter was also barred from telling anyone about the existence or contents of the warrant.
Subpoenaing social media records for law enforcement purposes isn't in itself problematic, of course. But there are potentially troubling elements to Smith's investigative demands and process in this case.
At this point, Trump's account had been reinstated by Twitter and was once again viewable by the public.
"It's unclear what information Smith may have sought from Trump's account," notes the Associated Press. "Possibilities include data about when and where the posts were written, their engagement and the identities of other accounts that reposted Trump's content."
I'm not sure exactly what could be gleaned from location and time-stamp data from Trump's tweets, but I'll grant that there could be information important to the indictment in there. However, seeking to see who engaged with Trump's tweets or the identities of accounts that reposted them raises more red flags. What could federal prosecutors possibly need with information like that?
Granted, the kinds of information Smith sought are speculative at this point. What's not speculative is the secretive process by which Smith's subpoenaing of Twitter played out.
"The government also obtained a nondisclosure agreement that had prohibited Twitter from disclosing the search warrant," reports the A.P. So, Twitter was not allowed to even reveal—publicly or to Trump—that federal prosecutors were requesting records at all.
"The court found that disclosing the warrant could risk that Trump could jeopardize the ongoing investigation by giving him 'an opportunity to destroy evidence, change patterns of behavior' or notify his allies," notes the A.P.
But Smith was very publicly appointed to investigate Trump back in November 2022. By mid-January 2023, when Smith obtained the Twitter search warrant, Trump had already long known that he was under investigation—which makes the idea that this was about stopping Trump from destroying evidence seem suspect.
It's this nondisclosure order that riled up the folks at Twitter, per the A.P.:
Twitter objected to the nondisclosure agreement, saying four days after the compliance deadline that it would not produce any of the account information, according to the ruling. The judges wrote that Twitter "did not question the validity of the search warrant" but argued that the nondisclosure agreement violated its First Amendment right to communicate with Trump.
Twitter said if it had to turn over the records before the judge assessed the legality of the nondisclosure agreement, it would prevent Trump "from asserting executive privilege to shield communications made using his Twitter account," the document says.
When Twitter failed to turn over data by the January 27 deadline, a court found Twitter in contempt. "Although Twitter ultimately complied with the warrant, the company did not fully produce the requested information until three days after a court-ordered deadline," according to judges with the U.S. Court of Appeals for the District of Columbia Circuit. "The district court thus held Twitter in contempt and imposed a $350,000 sanction for its delay."
Details about the search warrant and Twitter's response were kept under seal for months but recently came out in a July appeals court decision unsealed Wednesday. The appeals court rejected Twitter's arguments that the nondisclosure order violated the First Amendment and the Stored Communications Act, that the district court shouldn't have enforced the search warrant until resolving Twitter's objections, and that Twitter shouldn't have been found in contempt and sanctioned.
A warning about trigger warnings. As a writer for the blog Feministe back in 2008, Jill Filipovic believed in trigger warnings. "Back then, I was convinced that such warnings were sometimes necessary to convey the seriousness of the topics at hand (the term deeply problematic appears a mortifying number of times under my byline)," writes Filipovic at The Atlantic, while noting that she "chafed at the demands to add ever more trigger warnings, especially when the headline already made clear what the post was about." Since then, trigger warnings have become "the norm in online feminist spaces" and also "migrated from feminist websites and blogs to college campuses and progressive groups." These days, Filipovic is worried they can do more harm than good:
Around 2016, Richard Friedman, who ran the student mental-health program at Cornell for 22 years, started seeing the number of people seeking help each year increase by 10 or 15 percent. "Not just that," he told me, "but the way young people were talking about upsetting events changed." He described "this sense of being harmed by things that were unfamiliar and uncomfortable. The language that was being used seemed inflated relative to the actual harm that could be done. I mean, I was surprised—people were very upset about things that we would never have thought would be dangerous." Some students, for instance, complained about lecturers who'd made comments they disliked, or teachers whose beliefs contradicted their personal values.
To a certain degree, Friedman said, this represented a positive change. Mental illness was becoming less stigmatized than ever before, and seeking care was more common. But Friedman worried that students also saw themselves as fragile, and seemed to believe that coming into contact with offensive or challenging information was psychologically detrimental. In asking for more robust warnings about potentially upsetting classroom material, the students seemed to be saying: This could hurt us, and this institution owes us protection from distress.
Trigger warnings were only one part of a larger shift. Complaints quickly entered the wider culture, and were applied to "toxic" workplaces and "problematic" colleagues; students decried the "potential trauma" caused by ideas and objected to the presence of some speakers and works of art.
My own doubts about all of this came, ironically, from reporting on trauma. I've interviewed women around the world about the worst things human beings do to one another. I started to notice a concerning dissonance between what researchers understand about trauma and resilience, and the ways in which the concepts were being wielded in progressive institutions. And I began to question my own role in all of it.
Feminist writers were trying to make our little corner of the internet a gentler place, while also giving appropriate recognition to appallingly common female experiences that had been pushed into the shadows. To some extent, those efforts worked. But as the mental health of adolescent girls and college students crumbles, and as activist organizations, including feminist ones, find themselves repeatedly embroiled in internecine debates over power and language, a question nags: In giving greater weight to claims of individual hurt and victimization, have we inadvertently raised a generation that has fewer tools to manage hardship and transform adversity into agency?
Read the whole thing here.
National monument designation near the Grand Canyon means no more uranium mining, ever. President Joe Biden is designating almost 1 million acres of land near the Grand Canyon as a national monument. Uranium mining is already restricted in the Baaj Nwaavjo I'tah Kukveni area through 2032, but Biden's designation makes the mining moratorium—initially enacted in 2012—permanent.
Designating this area as a national monument will protect "landscape sacred to Tribal Nations and Indigenous peoples and advance President Biden's historic climate and conservation agenda," said the White House in a statement.
The Biden administration has portrayed the decision as a small trade-off, saying the area contains just 1.3 percent of known U.S. uranium reserves. But critics of Biden's decision say it will further U.S. reliance on foreign uranium from places like Russia.
"President Biden and his radical advisers won't be satisfied until the entire federal estate is off limits and America is mired in dependency on our adversaries for our natural resources," said Rep. Bruce Westerman (R–Ark.) in a statement.
The Wall Street Journal editorial board called the designation "a gift to Putin," writing that the land in question "includes America's only source of high-grade uranium ore that is economically competitive on the global market." At present, "the U.S. imports about 95% of uranium used for nuclear power reactors, mostly from Kazakhstan, Canada, Russia and Australia," the Journal points out.
The Journal editorial board also pushes back against the idea that uranium mining in the area would contaminate local waters and wildlife. "A U.S. Geological Survey in 2021 found springs and wells in the region met federal drinking-water standards despite decades of uranium mining," it states.
It also warns that the designation is just one part of a larger push "to block all mining in the U.S."—even though this "means mining will occur in countries with fewer environmental protections"—and of an overly expansive reading of the president's power under the Antiquities Act of 1906, which lets presidents designate federal land for national monuments. "Environmental groups even argue that Presidents can't roll back predecessors' designation," the board writes. "This interpretation of one-way executive power is more sweeping than the Grand Canyon and is crying out for a legal challenge."
• Read Reason's Matt Welch on "why Kamala Harris won't be asked about the suicide of a newspaperman she persecuted."
• The House Judiciary Committee is seeking more information about FBI efforts to investigate "violent extremists in radical-traditionalist Catholic ideology." While the FBI "cast it as the work of a single rogue field office…it looks like the effort was more widespread than our G-men admitted to the public," notes The Wall Street Journal.
• "The U.S. State Department encouraged the Pakistani government in a March 7, 2022, meeting to remove Imran Khan as prime minister over his neutrality on the Russian invasion of Ukraine," reports The Intercept, which obtained a classified Pakistani government document about the meeting. "The cable, known internally as a 'cypher,' reveals both the carrots and the sticks that the State Department deployed in its push against Khan, promising warmer relations if Khan was removed, and isolation if he was not."
• "NIMBYs have throttled the supply of homes across the country. In Montana, the state government was not just paying attention but primed to do something about it," notes Annie Lowrey at The Atlantic. That included transforming its land-use policies and allowing dense development—policies it ushered in "on a bipartisan basis and at warp speed." And the result may have been enough "to fix its housing crisis," writes Lowrey.
• How Biden is escalating the trade war with China.
• Nick Gillespie talks to Tara Isabella Burton, author of Self-Made: Creating Our Identities From Da Vinci to the Kardashians.
The post Twitter Fined for Failing To Quickly Turn Over Trump Data to Jack Smith appeared first on Reason.com.
Before Matt Taibbi was sparring with Democratic members of Congress on Capitol Hill earlier this year over the Twitter Files, he was a darling of the progressive left, appearing regularly on shows like Democracy Now! and others hosted by Bill Moyers and Rachel Maddow.
Though he was always a fierce critic of the Democratic establishment, the rise of Donald Trump suddenly meant that anyone nominally left of center—including progressive journalists like Taibbi—was expected to support Hillary Clinton unconditionally. So when he attacked her as a sellout, argued that the Russiagate narrative was mostly bullshit, and equated the manipulative tactics of right and left media personalities, progressives gave him the cold shoulder. Elected Democrats started treating him like a puppet of the right.
In 2020, Taibbi started publishing his work on Substack and quickly became one of the platform's most popular writers, earning far more than he ever did at Rolling Stone, where he had been chief political reporter. He became even more of a pariah by publishing exhaustive reports that documented how the government sought to control what was said on Twitter about COVID-19 and efforts by Russia to influence U.S. elections. Congressional Democrats unconvincingly pilloried him as a fake journalist, an apologist for Vladimir Putin, and a stooge for Elon Musk.
I caught up with Taibbi at FreedomFest, an annual gathering held this year in Memphis, to talk about the new challenges to free speech, why legacy media is dying, and how identity politics are poisoning political discourse.
The post Matt Taibbi: How the Left Lost Its Mind and Legacy Media Its Audience appeared first on Reason.com.
Public funding for private development projects is a bad deal, even according to state governments' own numbers. A new Wall Street Journal report shows that even the world's richest people are not immune from the lure of other people's money—nor are they particularly good stewards of it.
The paper reported on a solar panel factory in Buffalo, New York, operated by SolarCity, a company co-founded by Tesla CEO Elon Musk. When the factory was announced in 2013, then-Gov. Andrew Cuomo offered to help out. At the time, the state hoped to revitalize Buffalo as a manufacturing hub, planning "to incubate a handful of small startups in promising economic niches."
Instead, the state blew its entire budget on one project, pledging $750 million to build a 1.2 million-square-foot factory, which it would lease to SolarCity for $1 a year, plus another $240 million to purchase manufacturing equipment. It further agreed to exempt the facility from property taxes in its first 10 years, a savings estimated at $260 million.
At that time, the federal government also covered 30 percent of the cost of solar installations through grants and tax credits; in 2015, SolarCity reported having received $497.5 million in direct grants, though the Los Angeles Times estimated that the payout could actually have been as much as $1.5 billion.
In exchange, officials expected the Buffalo factory to create as many as 5,000 jobs and attract further development to the area. Musk promised that by 2020, the factory would make enough products to cover 1,000 roofs per week. But barely a decade since the deal was struck, the factory averages 21 installations per week, and the only new business that has moved into the area is a single coffee shop. And while Tesla reported in February that it had created 1,700 jobs—as the Journal notes, "enough to meet its obligations to the state and avoid a $41 million annual penalty"—the majority of those are Tesla employees who don't work on solar projects. The state has adjusted the terms of the deal 12 times in eight years, lowering job requirements and extending deadlines.
The Journal article adds that Cuomo's "spokesman defended the project, saying the factory site has more jobs on it now than when it was an empty lot where a steel mill once stood," though it's arguable whether $1 billion for 1,700 mostly unrelated jobs is, in fact, better than nothing.
E.J. McMahon, founding senior fellow of the fiscally conservative Empire Center for Public Policy, told the Journal that the Buffalo project could be "the single biggest economic development boondoggle in American history," adding that "the state became a direct investor in that project under the worst possible terms."
This is far from the first sign of trouble: In 2016, Tesla acquired SolarCity, which was out of cash with over $3 billion in debt. In 2018, Oregon clawed back $13 million after it found that SolarCity, now known as Tesla Energy, had overstated the costs of several projects in order to qualify for tax credits. New York auditors later wrote down $883.8 million in state investment after determining that "it will not likely receive the direct financial benefits associated with ownership of the manufacturing facility and equipment."
The enormous expenditure on a private company is dispiriting for many reasons, not the least of which is Musk's public rhetoric on subsidies. In the past, he has spoken quite eloquently about the problems with taxpayer funding for private projects, saying in 2021 that "the role of the government should be that of a referee, but not a player on the field." And yet he and his companies have benefited significantly from government largesse, to the tune of billions of dollars since 2010.
But while Musk's record is disappointing, his rhetoric is right on the money: "Government should try to get out of the way and not impede progress." If only policy makers would listen.
The post Tesla Solar Factory Not Living Up to New York's $1 Billion Investment appeared first on Reason.com.
Tens of millions of people have now signed up for Threads, the new app meant to compete with Twitter. Whether this is the start of a new social media trend or a short-term blip is anyone's guess. As Twitter CEO Elon Musk and his new policies continue to alienate many longtime users of the site, an onslaught of apps with Twitter-like functions have been vying for the short-form, text-based platform crown. Several stand a decent chance—although it may be more likely that no social platform will again occupy the rarefied space that Twitter and Facebook did.
Some of these apps, such as Post and T2, experienced very short-lived and relatively small waves of migration from Twitter. No one seems to think they are worth considering in the quest for a Twitter heir apparent.
Nor does Mastodon—a decentralized "fediverse" of Twitter-esque networks—appear capable of becoming The Next Twitter. It's a swell platform for fragmented, substantive, ongoing discourse among various niche audiences (much like Reddit) and folks who like the appeal of a non-corporate entity, with all that entails (no ads; less data collection; varying levels of content moderation, depending on which server you join). But many found the decentralized setup confusing, and Mastodon (intentionally) lacks many features that helped Twitter drive news and cultural outrage cycles.
The three most impactful Twitter competitors, right now, seem to be Threads, BlueSky, and Substack Notes. Each occupies a slightly different space, offering benefits and drawbacks for different audiences.
Threads is the newest entry here, having just launched on July 5, and it's the most corporate of the bunch, coming from Meta (the company behind Facebook, Instagram, and WhatsApp). Having the backing and expertise of a big and established tech company comes with some pluses and some minuses. Threads—which mimics Twitter in functionality (more on the nitty gritty of its functioning here)—looks great, has a big team in place to address issues, and has a built-in audience for attracting and onboarding users. Instagram this week has been prompting its users to join Threads, and anyone with an Instagram account can sign up for Threads with extremely minimal effort, automatically importing their Instagram username, photo, bio, and follows if they wish. By yesterday morning, the network had 30 million sign-ups, according to Meta CEO Mark Zuckerberg.
But the same issues that plague Facebook and Instagram could be barriers to long-term success for Threads. Many people perceive the company and its platforms as uncool. They worry about its extensive harvesting of user data. They don't like Meta's heavy-handed content moderation policies (no female nipples is already a rule). Threads also doesn't have a version for desktop browsers yet, and it's not very customizable: You can't set things up so that you only see the accounts you follow. And on Facebook and Instagram, brand accounts are served up frequently while newsy and political content is downgraded—a strategy that, if carried over, could seriously hinder Threads becoming the discourse-driving, news-cycle-setting behemoth that Twitter has been.
BlueSky comes the closest to emulating the look and feel of early Twitter. Still in invite-only mode, the platform has a reputation for attracting weirdos (I say that lovingly), media types, the extremely online, and marginalized groups of the left-leaning sort (for instance, it's become immensely popular among transgender communities). Its base doesn't seem to take itself too seriously—posts are called "skeets," for example—and the aesthetic is "anything goes": Absurdist humor and racy pictures are both common. The vibe is communal, zany, fun, and a little anarchic, while its design and functionality mirror Twitter's very closely (unsurprisingly, since BlueSky was spun off from Twitter and since Twitter founder Jack Dorsey sits on its board). News and policy-related content is also prominent.
BlueSky certainly has the right audience, infrastructure, and backing to become big. But its growth is currently hampered by its invite-only status; its insular vibe may—as with early Twitter—be off-putting for mainstream audiences; and the wilderness spirit that users currently embrace may be hard to maintain at scale (at least not without attracting a boatload of ire from politicians and activists).
Notes is a new-ish function of the content-distribution platform Substack. Substack started out serving up newsletters—and this is still its core function—but it now offers other tools, including Notes and features for creating and distributing podcasts. It's a lot like blogging platforms of yore, except with a built-in option to charge a fee for subscriptions and a lot of easy-to-use options for different subscription levels. (Substack itself also strikes up content deals with a select cadre of newsletter creators.) The Notes interface is clean and simple and, like Twitter and BlueSky, Notes offers users the ability to post snippets of text and imagery; to follow and be followed; and to comment on, like, and "restack" others' posts. It also integrates seamlessly with Substack's other functions, so that it's easy (for instance) to share via Notes a Substack newsletter entry.
Notes has become popular among Substack newsletter creators and readers, and so the vibe tends to be collegial, intimate, and focused around discussion of writing and ideas. This makes it very appealing for a certain sort of social media user but perhaps unlikely to become a mass public square like Twitter, for both better and worse. And Substack seems more invested in supporting profitable and interesting newsletters and discussions about them than in finding any way possible to keep eyeballs endlessly on the platform. (To put it in extremely online terms: Notes has more Google Reader energy than Main Character of the Day energy.)
Independent of any characteristics of these particular platforms, there are barriers to any of them taking on the role that Twitter has held the past decade. First and foremost may be social media fatigue. A lot of people in 2023 feel like they're already on enough (too many?) different platforms. Rather than sign up for anything new, they may opt simply to stick it out with Twitter and/or Facebook—which, of course, many people still enjoy and derive value from—or taper off social media entirely.
Something with a clear-cut new angle (like TikTok when it came along) can clearly break through. Once established as a major player, a Twitter clone could probably do so too. But as things stand, how does anyone know which new platforms will and won't succeed? And until then, who wants to invest time and energy developing a community/following on an app that may fizzle out soon? Or hedge their bets and establish their presence on multiple apps, new and old?
The existence of so many options could keep any one platform from dominating. And perhaps that's for the best.
Part of what made Twitter and Facebook so appealing—and addicting—was the sense that their use was ubiquitous (even if, for Twitter especially, this was never the case). To opt out felt like you might miss out entirely on what was happening in politics, culture, and some of your favorite cultural communities. This same vibe made these mega-sites feel unbearable at times—they were engines of toxicity, pointless outrage loops and pile-ons, petty fights, misunderstandings, wasted time. It's what made politicians and institutions woefully attuned to the whims and passions of Twitter and Facebook users, helping fuel cancel culture, corporate missteps, and bad political calculations.
The dominance of a few big tech platforms also helped drive political witch hunts. They enabled censorship (pressure a few big platforms to quash certain information, and a lot of the work was done) and government surveillance. And in the backlash, bad policy ideas were sold as sticking it to "big tech" even when their negative effects would actually reverberate around the rest of the internet too.
Twitter may never return to its old glory and ignominy. While Threads or BlueSky or one of the others may yet become the "new Twitter," there's a strong possibility that none of these Twitter clones will achieve a place of dominance either. And individuals, politics, culture, and liberty may all be better off for it.
What people are saying:
"It's not hard to figure out" why people are migrating to Threads, suggests Ben Dreyfuss. "Elon Musk has spent the last 7 months making a huge chunk of Twitter's users hate him. He has wasted the benefit of the doubt another huge chunk extended to him. Meta is a real company! Not a bunch of drunks kicking over trash cans."
"Bluesky and Mastodon have higher chances of developing sustainable models that don't end up in a surveillance advertising hellscape, which is inevitable for Threads because Meta can't do it any other way," writes Colorado Law Professor Blake E. Reid.
"If you're the sort of person who wants a quiet timeline comprised only of posts from carefully curated accounts, Threads is not for you, and probably never will be," writes John Gruber. "But the sort of people who like Twitter's 'For You' feed and trending topics in the sidebar might find Threads more fun."
"On Twitter in one of my threads someone just said 'Threads had 30M signups in one day, Bluesky has lost its chance to become the next Twitter' and he may be right, but what he doesn't know is that people here are like 'cool, dodged THAT bullet,'" John Scalzi posted to BlueSky.
"Had Meta launched [Threads] in 2019, it seems safe to say, everyone would have rolled their eyes. Its big new feature is…logging in with Instagram? Come on," comments Platformer's Casey Newton. "By the standards of Twitter 2.0, though, it can feel like a miracle."
"Of the available sites I would bet that BlueSky winds up being more important for setting media narratives than the available alternatives," writes Dan Drezner on Substack. "But this leads to the most important point: the hard-working staff here at Drezner's World has serious doubts that social media will have any effect on the 2024 outcome."
"I think the age of the centralized newsfeed that gave everyone a general sense of what's going on on the internet is coming to an end," comments Reason's Christian Britschgi.
Can government officials block people on social media? People have a First Amendment right to access content posted to social media by elected officials, argue the Electronic Frontier Foundation (EFF), the Knight First Amendment Institute, and the Woodhull Freedom Foundation in a brief filed with the U.S. Supreme Court. The Court is currently reviewing two cases—Lindke v. Freed and O'Connor-Ratcliff v. Garnier—concerning the issue.
It's not enough to consider whether an official's account is delineated as a government account or a personal account, they argue. Instead, courts must look at how the account is actually used. If the "personal" account posts concern government business, the official should not be able to block people from accessing it, they say.
"We are asking the Court to find that the ultimate test is how an account is used," explained EFF attorney Sophia Cope in a statement. "If officials choose to mix government and nongovernment content on their account, they must accept the First Amendment obligations that go with using their account for governmental purposes."
You can find their full amicus brief here.
Latest jobs report shows openings down, quitting up. May saw 9.8 million job openings, down from 10.3 million in April, according to the latest data released by the U.S. Labor Department. Meanwhile, "the quits rate, which is often used to gauge a worker's confidence in the job market, increased in May, particularly in the health care, social assistance and construction industries," reports The New York Times:
A rise in quitting often signals workers' confidence that they will be able to find other work, often better paying. But fewer workers are quitting their jobs than were doing so last year at the height of what was called the "great resignation."
6th Cir. holds that a prosecutor pressuring a key witness to destroy potentially exculpatory evidence in violation of a court order came within the traditional role of a prosecutor, so the prosecutor has absolute immunity from a civil suit. https://t.co/ybfOjsLPs3 pic.twitter.com/MOhrlzpBll
— Gabriel Malor (@gabrielmalor) July 5, 2023
• Twitter is threatening Threads in court. Semafor reports that Twitter lawyer Alex Spiro sent a letter to Meta accusing it of hiring former Twitter employees who "had and continue to have access to Twitter's trade secrets and other highly confidential information" in order to create a "copycat" app. But "no one on the Threads engineering team is a former Twitter employee—that's just not a thing," a source inside Meta told Semafor.
• Techdirt's Mike Masnick weighs in on the recent court decision barring the federal government from urging social media companies to suppress or remove content.
• A backdoor route to student loan debt forgiveness? "While everyone's focus has been on the administration's outrageous cancellation stunt, the [Department of Education] has been working tirelessly to accomplish an even more disastrous policy: a new Income-Driven Repayment rule," writes the Pacific Legal Foundation's Caleb Kruckenberg in the New York Post. "While styled as a rule that simply tinkers with the details of existing income-based repayment programs, it effectively does the same work as the cancellation effort: It writes off the debts of millions of college-educated borrowers."
• "Former President Donald Trump posted on his social media platform what he claimed was the home address of former President Barack Obama on the same day that a man with guns in his van was arrested near the property," reports the Associated Press. The June 29 arrest of Taylor Taranto was revealed as part of a federal court filing related to his participation in the January 6 riot at the U.S. Capitol.
• "While Salon Magazine declares that we all live in a 'libertarian dystopia,' and a new brand of big-government conservatives promise to free the Republican party and American government from their libertarian captivity, Barton Swaim declares in the Wall Street Journal that a new book 'works as an obituary' for libertarianism," comments David Boaz at the Cato at Liberty blog. "That's not a characterization that I think the authors—Matt Zwolinski and John Tomasi—would accept of their book, The Individualists: Radicals, Reactionaries, and the Struggle for the Soul of Libertarianism." (Zwolinski offers his own thoughts here.)
The post Attack of the Twitter Clones appeared first on Reason.com.
As elections approach, sweeping generalizations have a certain allure that often energizes the frustrated and captivates the hopeful. However, it's essential that we as voters remember that things that seem too good to be true typically are. Here are a few warnings.
First, as far as our finances go, beware of politicians promising that they won't touch Social Security and Medicare. In reality, they'll have no choice. For one thing, if they keep this hollow promise, Social Security benefits will be cut across the board in 2033 by over 20 percent. According to the Committee for a Responsible Federal Budget, that's a cut of between $12,000 and $17,000 annually for a traditional retired couple. Medicare faces the same predicament for a variety of reasons.
The only workaround from this reality, which has been known for decades, is for Democrats and Republicans to finally come together for serious reform. That will likely result in a reduction of benefits and an increase in taxes. As unpleasant as it will be, we'd better hope that politicians don't take the cowardly path and resort to shoving the problem onto Uncle Sam's proverbial credit card (by paying all benefits that exceed payroll-tax receipts out of general revenues).
As the Manhattan Institute's Brian Riedl noted recently, "Social Security and Medicare are projected by the CBO to spend $156 trillion in benefits but collect only $87 trillion in payroll taxes and premiums. This $69 trillion cash shortfall will have to be financed by budget deficits, which will in turn be responsible for $47 trillion of interest costs on the national debt." Who will lend the U.S. government $114 trillion, even at unprecedentedly high interest rates?
That's a question voters should ask politicians who promise never to touch entitlement programs. Those who claim it's an easy fix by taxing the rich should be immediately dismissed as unserious. The numbers don't add up. Any other one-sided ideological answers to an accounting question won't cut it, either.
Politicians are also masters of making complex societal problems appear as if they can be solved easily with a single piece of legislation. For instance, voters should beware of politicians promising to improve social media and online retailing by hammering Big Tech with antitrust lawsuits, as if these companies represent true monopolies. Google, Amazon, and today's other large tech firms grew so successfully only because consumers chose to buy their services, and they will remain successful and large only as long as consumers continue to do so.
Every allegedly "dominant" tech firm has competitors just waiting for it to get lazy or fail. In such a fast-changing industry, these competitors will swoop in and quickly take market share. Or a firm that makes too many mistakes will be bought out by investors who aim to improve its performance. Think here of Elon Musk purchasing Twitter.
To use antitrust against successful firms is to obstruct the operation of very complex patterns of commercial organization that no politician or government lawyer can hope to understand. The kind of antitrust interventions now demanded by populists on the left and right would be like angry bulls in a china shop. They'll be able to destroy, but all that they'll create is rubble.
Finally, be careful as politicians skillfully play the populist card, painting a picture of "us" against "them" and tapping into deep-seated fears and frustrations. For instance, beware of the claim that many economic problems stem from foreign competition and can easily be solved by applying a blanket 10 percent tariff across all imports. These tariffs are supposed to encourage firms to source their inputs domestically and to incentivize consumers to buy American. That won't work, as we should know by now after the Trump/Biden protectionist fiascos.
Because tariffs raise prices, they reduce the purchasing power not only of American consumers, but also of American producers who need inputs. What follows is a series of adjustments that make everyone worse off without addressing the problem at hand. For instance, protecting American sugar with tariffs and quotas results in more imports of candy. Protecting aluminum with tariffs results in more imported garbage disposals and other products made with aluminum.
Politicians' messages offer a simplified view of the world—one in which government interventions are all benefits and no costs. But life, as we know, is anything but simple, and Uncle Sam's intervention can be quite destructive. Therefore, it's incumbent upon us to demand from our politicians more than charismatic speeches and lofty promises. We must demand clear, implementable, and serious policy proposals along with the acknowledgement of trade-offs.
COPYRIGHT 2023 CREATORS.COM.
The post This Election Season, Beware of These False Promises appeared first on Reason.com.
Twitter is in the midst of a transformation: It is becoming a right-leaning news platform. This will have profound consequences for the debate about Section 230, the federal statute that shields websites from some liability created by users, which effectively allows the internet to function as a lightly restricted space.
Elon Musk purchased Twitter last year with the stated goal of making the site more hospitable to all kinds of political expression. But Musk is himself increasingly associated with the right; on Wednesday, Republican Florida Gov. Ron DeSantis announced his bid for the 2024 Republican presidential nomination during a Twitter Spaces live event alongside Musk, an avowed supporter of DeSantis.
Musk's Twitter is also attracting top conservative talent. After acrimoniously parting ways with Fox News, conservative superstar Tucker Carlson declared that he would relaunch his show on Twitter. The Daily Wire, a conservative media empire, recently decided to release all of its podcast videos on the site as well. "If Elon Musk stands by his commitment to make Twitter a home for free speech and delivers on monetization opportunities and more sophisticated analytics for content creators, I imagine we will invest even more into the platform," said Daily Wire CEO Jeremy Boreing.
Neither The Daily Wire nor Carlson appears to have special deals with Musk, who has said that they will be bound to the same rules as anyone else posting content on the site. But consider Musk's situation with respect to Carlson, and compare it to the host's previous relationship with Fox News.
After all, Carlson's departure from Fox News followed the resolution of the company's legal dispute with Dominion Voting Systems. Dominion sued Fox, arguing that its programming had been defamatory; guests who appeared on Fox News shows, including Carlson's, were accused of making false statements about Dominion. Under traditional defamation law, Fox News was liable for statements on Carlson's show.
Twitter is not.
As an online platform, Twitter cannot be sued in most cases for its users' speech. That's the entire point of Section 230: to empower websites to craft whatever speech policies are best for them, without forcing them to incur risk. Without Section 230, social media sites would have to moderate content far more aggressively. If Facebook and Twitter could be sued for users' speech, then they wouldn't be able to let users post at will, without approval or review.
"One advantage of Section 230 is it allows different platforms to try different models of online content moderation to serve specific audiences," says Jennifer Huddleston, a technology policy research fellow at the Cato Institute. "Recently, we've seen this with Elon Musk's changes to Twitter's content moderation rules in ways that create new opportunities for conservative news content."
At least prior to Musk's Twitter takeover, this underlying reality has vexed some on the right. Leading Republicans—including former President Donald Trump, Sen. Josh Hawley (R–Mo.), Sen. Ted Cruz (R–Texas), and others—have alleged unfairness, arguing that social media sites do not deserve this liability shield if they operate in a politically biased manner.
"With Section 230, tech companies get a sweetheart deal that no other industry enjoys," said Hawley.
While Republicans have railed against Section 230 for empowering social media sites to moderate too much content, Democrats have criticized it for the opposite reason: They want to punish Meta, Twitter, and Google for not censoring more content. Sen. Elizabeth Warren (D–Mass.) has proposed various schemes to make the sites liable for user content, on the theory that this would force the companies to take down alleged right-wing misinformation with greater vehemence.
Here's a prediction: If Twitter becomes the new home for right-leaning content, Republicans will have to rethink their ire toward Section 230; Twitter cannot exist without it. Democrats, on the other hand, will become much more vocal about the need to rein in tech and subject the platforms to the same liability standards as noninternet publishers. They will cast Section 230 as a giant loophole that allows Musk to evade responsibility for Carlson's speech. They will say that misinformation—the great crisis of our age, according to Democratic thinking—cannot be stopped so long as the platforms themselves are immune.
Section 230, of course, provides protection to everyone who wishes to express himself online. It is the reason that the internet has remained such a liberating, freewheeling place. Repealing or replacing it would risk damaging the very foundations of the largely unfettered discourse that exists on social media.
"Section 230 has been critical to protecting the speech of all users online by allowing platforms to carry user-generated content without the fear of business-ending litigation as result of how a user used their platform," says Huddleston. "The result is that a variety of voices, including conservative news voices, have more opportunities to connect with their audience thanks to online platforms."
Nevertheless, expect these foundations to come under even more strenuous attack from politicians and the mainstream media as Twitter becomes the right's new home.
The post How Ron DeSantis, Tucker Carlson, and Elon Musk Will Change the Section 230 Debate appeared first on Reason.com.
After months of speculation—and some brief technical difficulties—Republican Florida Gov. Ron DeSantis made it official: He's running for president.
DeSantis filed the necessary paperwork with the Federal Election Commission on Wednesday afternoon and launched his campaign by joining Elon Musk for a Twitter Spaces interview a few hours later. The rollout was a bit rocky: DeSantis' speech was delayed for about 20 minutes as Twitter navigated issues apparently caused by too many users jamming into the virtual room. "The servers are straining somewhat," Musk said at one point. For much of that time, users (those who weren't dropped from the Space) heard a combination of silence, hold music, and intermittent crosstalk.
Once he got going, however, DeSantis rattled through a shortened version of the stump speech that he's been workshopping at conservative confabs around the country for the past several months—stressing his military background, his gubernatorial record of attacking anything deemed "woke," and his handling of the COVID-19 pandemic. "I am running for president of the United States to lead our great American comeback," he promised.
After first half hour of disaster, it was an awkward speech/podcast hybrid. But DeSantis was able to give long substantive answers on issues he wants to highlight: COVID, education, immigration, big tech. We'll see how they play in formats/news cycles that aren't this.
— Benjy Sarlin (@BenjySarlin) May 24, 2023
There will be plenty of time to dissect DeSantis' record and policy aims. What's most interesting, right now, is the decision to hold the long-awaited launch of his presidential campaign in such an unorthodox setting. Asked by Musk, after the technical glitches abated, why he would want to "take the chance" of holding the announcement via Twitter, DeSantis pivoted awkwardly to talk about COVID-19 again: "Do you go with the crowd," he said, or "cut against the grain?"
Upending the tired norms of campaigning is a fine goal, but there seems to be more than that happening here. Announcing a bid for the White House on Twitter, alongside Musk—who has become a cult hero to a certain type of conservative who spends too much time worrying about trolling the political left—is a deliberate choice, and one that tells you something about how DeSantis is framing his candidacy.
Because it's not as if there weren't other options available to him—options that might have provided a sunnier, more optimistic setting for an "American comeback" message than what effectively amounted to a conference call on a social media platform that most Americans don't use.
If I were the Republicans governor of Florida and I were running for President I'd announce at one of the many massive football stadiums in my state because I am tethered to the people of my state and they care about fun, faith and football, and I am a normal joy filled person
— Jane Coaston (@janecoaston) May 23, 2023
But it's conservatives on Twitter whom DeSantis views as his national base as he sets to the task of expanding his brand beyond Florida. As governor, DeSantis has cultivated support among what we might call the "Too Online" faction of conservative politics by engaging in a number of high-profile stunts seemingly designed to appeal to exactly that crowd. Whether feuding with Disney, banning critical race theory in schools, or cruelly shipping undocumented migrants to Martha's Vineyard, DeSantis has mastered the use of state power to raise his profile within the subset of Americans who get their jollies by liking and retweeting content that "owns the libs."
That doesn't mean it won't work, of course. Former President Donald Trump wielded Twitter as a powerful tool during his rise to stardom (and the presidency) within Republican politics. DeSantis probably isn't wrong to sense that cultivating support on Twitter matters—not least because the platform provides a conduit to speak directly to the fans, without having to use the traditional media. Other prominent conservatives have noticed, too:
went around giving a talk. They said that they held precisely one press conference at the launch of their campaign. Virtually all of the questions were about Trump, nothing about VA, nothing about policy, nothing about contrasting him with his opponent. As a result…
— Kenneth Monahan (@Foudroyant) May 23, 2023
The downside, as anyone who has used Twitter can tell you, is that it's not a particularly serious place. If DeSantis' pitch is that he can be a more competent version of Trump, then Wednesday's rollout was a failure and not just because of the technical issues. Playing to the Twitter crowd is not a way to demonstrate leadership. If anything, it shows the opposite: that DeSantis is willing to let the online culture wars and political trolling guide his campaign, as it has guided his time as governor.
Throw Trump into the mix, and that's a recipe for a Republican presidential primary that seems unlikely to move the country closer to solving the big issues. A fight over who is more well-liked among the Too Online Republicans is going to be ugly and dumb:
Don't expect a campaign that appeals to suburban moderates.
Very conservative Republicans are the most up-for-grabs between Trump and DeSantis, currently solidly in Trump's camp but DeSantis previously tied post-midterms.
They are the key battleground in this primary. pic.twitter.com/Hlt6Qg8e0u
— Patrick Ruffini (@PatrickRuffini) May 24, 2023
And what to make of those technical glitches that derailed the start of DeSantis' Twitter town hall? Did we just witness another moment like Howard Dean's infamous "scream" or Michael Dukakis' tank—a campaign event so cringeworthy that it doesn't just sink a promising candidate, but becomes a permanent part of the political lexicon? Unlikely. Despite the overblown reactions from some in the media, this whole incident will be mostly forgotten by the end of the week and probably rates as little more than a humorous anecdote in the books that will be written about the 2024 campaign.
Here's the thing: The Republican primary contest is going to feature such a cacophony of terrible ideas—war with China, more immigration restrictions, banning gender transition treatments for adults, war with Mexico, more government control over the internet, etc.—that we'll probably end up fondly remembering Wednesday's mess, which denied freedom to no one except those of us professionally obligated to sit through it.
A man paralyzed below the hips in a 2011 motorcycle accident has control over his lower body for the first time in 12 years, thanks to an "artificial intelligence thought decoder" that translates signals from his brain into electrical signals that move muscles, The New York Times reports. If it's true that any sufficiently advanced technology is indistinguishable from magic, then what neuroscientists have accomplished with Gert-Jan Oskam should be regarded as miraculous:
In a study published on Wednesday in the journal Nature, researchers in Switzerland described implants that provided a "digital bridge" between Mr. Oskam's brain and his spinal cord, bypassing injured sections. The discovery allowed Mr. Oskam, 40, to stand, walk and ascend a steep ramp with only the assistance of a walker. More than a year after the implant was inserted, he has retained these abilities and has actually showed signs of neurological recovery, walking with crutches even when the implant was switched off.…
Andrew Jackson, a neuroscientist at Newcastle University who was not involved in the study, said: "It raises interesting questions about autonomy, and the source of commands. You're continuing to blur the philosophical boundary between what's the brain and what's the technology."
Dr. Jackson added that scientists in the field had been theorizing about connecting the brain to spinal cord stimulators for decades, but that this represented the first time they had achieved such success in a human patient.
This is exactly the sort of thing that's at stake in the debate over how to handle the future development of artificial intelligence.
Amazing! And each time you read another article about another little #AI or robotic-assisted miracle like this -- and it IS a miracle -- ask yourself how an "AI pause" might stop the next one from happening. https://t.co/IgueGXNW80
— Adam Thierer (@AdamThierer) May 25, 2023
If the U.S. defaults on its debt payments next month, the stability of the country's financial markets could hinge on…"a series of conference calls." The Wall Street Journal takes a look at how investors are preparing for the possibility of an unprecedented event:
Under Wall Street's plan, though, investors would be able to keep trading all U.S. Treasurys, even those with past-due interest or principal payments. Chaos and confusion would be kept at bay through a series of conference calls, each with an agenda already organized by the Securities Industry and Financial Markets Association trade group….
The main purpose of both calls would be to answer a key question: whether the Treasury Department had decided to delay, by a single day, a principal payment due the following morning. Were that to happen, trading of affected U.S. Treasurys could take place essentially as normal.
• Speaker of the House Kevin McCarthy (R–Calif.) says "some progress" has been made toward reaching a deal with the White House to avoid a debt default early next month.
• On the day he announced his run for the presidency, DeSantis also signed a bill changing Florida's law that required office holders to resign before running for another office.
• Inflation has slowed. Is it time to worry about deflation instead?
• Why people like sad songs.
The post DeSantis Announces Too-Online Campaign in Most Online Way Imaginable appeared first on Reason.com.
Twitter CEO Elon Musk is facing a barrage of media criticism for acquiescing to demands from the Turkish government to censor content on the site. The acts of censorship took place last week, just days before the country's presidential election; unsurprisingly, the restricted accounts had expressed criticism of autocratic Turkish leader Recep Tayyip Erdogan.
Given that Musk has promised to make Twitter a platform for free speech—indeed, his stated rationale for buying the site was to make it more protective of political expression—his kowtowing to Erdogan has struck many commentators as hypocritical. "Elon Musk Doesn't Care About Free Speech," declared The New Republic. NBA star Enes Kanter Freedom, a Turkish dissident who has frequently criticized the Erdogan regime, said "I don't want to hear about Elon Musk talking about free speech ever again."
Reason's Elizabeth Nolan Brown also chided Musk for "making a dictator's job easier." And Wikipedia founder Jimmy Wales tweeted that treating freedom of speech "as a principle rather than a slogan" would have meant fighting back harder.
What Wikipedia did: we stood strong for our principles and fought to the Supreme Court of Turkey and won. This is what it means to treat freedom of expression as a principle rather than a slogan. https://t.co/tHkx1Wa06r
— Jimmy Wales (@jimmy_wales) May 13, 2023
It's absolutely true that there's a certain incoherence to Musk's approach. He croons about free speech, while also pledging to follow applicable local laws. He has said he is willing to lose money on Twitter if it means protecting free speech, but he has also said that Twitter will not try to impose its values (free speech, one assumes) on the rest of the world.
Most countries, unfortunately, do not have free speech protections that are as robust as the U.S.'s First Amendment—and even in the U.S., social media companies have faced tremendous pressure from federal government agencies to censor speech. Musk is well aware of this, having green-lit the Twitter Files. Perhaps he should have anticipated that his various pledges—allow free speech, obey the law, be willing to lose money, don't impose values—would swiftly come into conflict.
But some of the criticism seems to suggest that Musk's decision to heed Turkey is some new low for social media platforms. Ryan Mac, a tech reporter for The New York Times, frets that Musk has provided "a blueprint for repressive governments everywhere."
"If Twitter doesn't censor the content you want, simply threaten to cut off the service," says Mac, summarizing the aforementioned blueprint. "Its owner just put it in writing."
This blueprint already exists: Musk is not remotely the first social media CEO to begrudgingly accede to an authoritarian government's demands.
In 2007, a Turkish court ordered the country's internet service provider to take down YouTube over videos that mocked Turkey's founder, Mustafa Kemal Ataturk. YouTube complied within hours, removing the videos in order to restore YouTube access to citizens of Turkey. In subsequent years, Turkey's government used similar threats to force Facebook, Periscope, and yes, Twitter, to comply with demands for censorship.
It's true that Wikipedia fought back against Turkey's demands for censorship, resulting in the site not being available in Turkey at all from 2017 to 2019.
"I'd have liked to see everyone resist more over the years, but Turkey is a pretty important market," says Will Duffield, a policy analyst at the Cato Institute. "Wikipedia is probably the most successful example of resistance, after taking the block for two years a Turkish court ordered it reinstated on human rights grounds. Musk seems to be punished as much for tweeting it out as for complying."
It's not just Turkey, of course. Social media companies have had to deal with demands for content takedowns all over the globe. During the 2000s, the version of Google that was available in China included all sorts of compromises with the Chinese Communist Party's tyranny. Eventually, Google stopped complying, so it got the boot. In 2018, Google had plans to relaunch its censored search engine in China, but when the details leaked, the company faced so much criticism in the U.S. that it had to abandon course.
The point is that these are not always easy calls. When a repressive government orders a private company to restrict content, it is the government—not the company—that has decided to violate the human rights of its citizens. The companies should resist wherever they can, but resisting to the point at which the government shuts down their service is neither a moral requirement nor a course of action that obviously maximizes freedom. It's perfectly legitimate to think that a CCP-approved version of Google—while far from ideal—is better for the people of China than no Google at all. In either case, the villain is the CCP, not Google.
Which brings us back to Musk and Turkey. Twitter claims that it has fought Erdogan's takedown request to the maximally practical extent.
"We were in negotiation with the Turkish Government throughout last week, who made clear to us Twitter was the only social media service not complying in full with existing court orders," said a spokesperson for Twitter in a tweet. "We received what we believed to be a final threat to throttle the service—after several such warnings—and so in order to keep Twitter available over the election weekend, took action on four accounts and 409 Tweets identified by court order."
The spokesperson noted that the company will continue to fight the demands in court, and subsequently released the written orders for all the world to see.
Refusing to comply would have meant a total Twitter outage in Turkey on the eve of its election. This is a development that should make everyone very angry with the Turkish government—Musk is not the correct object of scorn, though it's obviously fair to note he has not yet delivered on his promise of a free speech platform.
"The situation illustrates the dangers of letting autocrats control market access," says Duffield. "The best solution is to treat such demands as non-tariff barriers to trade. Our friends and allies should not demand that American firms neuter their products in accordance with local whims."
The post Don't Blame Elon Musk for Turkey's Authoritarian Twitter Censorship appeared first on Reason.com.
Elon Musk defends suppression of Turkish posts. "In response to legal process and to ensure Twitter remains available to the people of Turkey, we have taken action to restrict access to some content in Turkey today," Twitter's Global Government Affairs account tweeted on Friday.
The tweet didn't say what kind of content had been suppressed nor who had requested this suppression. But if it was done to ensure that Turkish leaders would allow Twitter to remain available, these were probably not posts flattering to Turkish President Recep Tayyip Erdogan, who is currently up for reelection. And further comment from CEO Elon Musk suggests content was removed to appease Turkey's authoritarian leadership.
The move has been widely criticized, especially in light of CEO Elon Musk's ongoing insistence (despite ample evidence to the contrary) that he is a stalwart champion of free speech.
Turkey is in the midst of a presidential election—one that seems to be headed for a runoff, since Erdogan does not appear to have captured a majority of the vote. (The final results are not in. But as of now, challenger Kemal Kilicdaroglu has 44.8 percent of the vote, while Erdogan has 49.54 percent.)
"The Turkish government asked Twitter to censor its opponents right before an election and @elonmusk," tweeted journalist Matthew Yglesias on Saturday, adding that this "should generate some interesting Twitter Files reporting." In the Twitter Files, journalists with access to the company's internal documents detailed censorship requests by the U.S. government and others, and the ways previous Twitter leadership dealt with them.
Musk responded: "Did your brain fall out of your head, Yglesias? The choice is have Twitter throttled in its entirety or limit access to some tweets. Which one do you want?"
But providing people in authoritarian regimes with a false sense of public opinion while doing their leaders' bidding hardly helps foster civil liberties.
If Erdogan wanted to suppress opposition content and his only way to make that happen was to throttle all of Twitter, it would presumably be a more difficult call. And if he still chose censorship, it would also force him to alert—and often anger—Turkish citizens in the process.
Instead, Musk made a quasi-dictator's job easier.
Condemnation of Musk's defense flowed from across the political spectrum.
"If Twitter were throttled in its entirety perhaps Turkish voters would better understand the authoritarian tendencies of their government and the costs it's imposing on the nation as a whole," commented conservative journalist Jonah Goldberg. "Throttling the opposition to that govt on election eve doesn't seem optimal imho."
"From a speech perspective the answer is clear: You publicly tell Erdogan no and dare him to throttle the whole app," tweeted Ryan Grim of The Intercept.
It's not just journalists who have been criticizing Musk:
All or nothing, and let the government face the backlash. If you can't stand on this basic principle, what can you stand on?
— Just A Guy In Texas (@Just_A_Guy_N_TX) May 13, 2023
We want Twitter to not throttle tweets based on a government's request it throttle those tweets. If that's a hard ask Twitter has serious problems.
— Alex Shepard (@Sinnersaint39) May 13, 2023
You should not censor - ever.
You're the king of technology. Use it to stop the censorship. Keep twitter open and push to all Turkish users's phones information on how to log in via VPN and set up Starlink for Turkish dissenters.
With the amount of power and money you have,…
— Martha Bueno (@BuenoForMiami) May 13, 2023
Honestly. Let them turn twitter off. Better to let the censorship writ large on the part of the government be exposed than to let twitter carry on doing the bidding of that government. Then make a banner at the top of twitter pointing out what that government is doing. Bring the…
— Robert Whitelaw,CRS—Realtor,Broker,Podcast Host (@rebelbroker) May 13, 2023
Some suggested (not for the first time) that Musk's business interests in Turkey (and other censorship-happy regimes) make him vulnerable to this sort of pressure.
Wikipedia founder Jimmy Wales commented:
What Wikipedia did: we stood strong for our principles and fought to the Supreme Court of Turkey and won. This is what it means to treat freedom of expression as a principle rather than a slogan. https://t.co/tHkx1Wa06r
— Jimmy Wales (@jimmy_wales) May 13, 2023
American Airlines flight attendants falsely reported a father as a human trafficker. The man, Francisco De Jesus, was flying with his 13-year-old daughter. "As we're deplaning, we're greeted by several individuals. One of them who introduced himself as the head of security for the Charlotte International Airport," De Jesus told King 5 news:
My question that I would like to have answered is how did they get to label me as a human trafficker? I had my iPad; we were watching a movie. She had her phone. I mean, these are things that I thought a dad and a daughter traveling do.
It's common for airline crews these days to receive highly dubious training about how to detect human traffickers. "Airline and hotel employees are taught to use their prejudices to spot and report human trafficking, and this often works out badly," writes Gary Leff at View From the Wing:
And that gives you situations like,
- An African American social service worker was traveling with a white baby and accused of kidnapping by an American Airlines flight attendant as a result.
- Armed Port Authority police boarded an American Airlines plane at New York JFK because a flight attendant saw an Asian American woman follow her hispanic husband to the lavatory (he was feeling unwell) and saw that they shared an orange juice. The flight attendant called for a sex trafficking investigation. It found their drivers licenses displayed the same home address because they were married, just different races.
- Southwest Airlines demanded to see Facebook posts when a white mother checked in with her mixed-race son, claiming this was 'federal law'.
And who could forget Cindy McCain reporting a woman because she was "a different ethnicity" from her child?
This sort of spot-a-trafficker "training"—often conducted by agents of Homeland Security and other federal agencies—has extended to hotel staff, to hairdressers, and to other industries, sometimes funded by the government or even required by law.
American suburbs are thriving. They were already gaining population in the 2010s, and that only increased during the pandemic. This is leading to a surge in new retail and restaurants. In "2022, urban retail vacancies surpassed suburban retail vacancies for the first time since 2013," reports Axios. And today's suburbs are more diverse than those of yore.
A recent Bank of America survey found 45 percent of millennial respondents expect to buy a suburban home. "We expect the ability to [work from home] to remain an incentive for young families to seek out more remote suburban and rural markets where housing may be more affordable," the bank's analysts write.
• Over the past decade the share of Americans who associate with any religion decreased by 11 percentage points. "It's a development of tremendous impact, one that will ripple across the political landscape at every level—and especially in presidential politics," suggests Ryan Burge, an Eastern Illinois University political scientist.
• "Thai voters have delivered a stunning verdict in favour of an opposition party that is calling for radical reform of the country's institutions," concludes the BBC. "Analysts are calling this a political earthquake that represents a significant shift in public opinion" and "a clear repudiation of the two military-aligned parties of the current government, and Prime Minister Prayuth Chan-ocha, who led a coup that ousted an elected government in 2014."
• What the heck does "displaying traits of an armed person" mean?
NEW: Baltimore Police say the person shot was 17 years old, saying he was displaying traits of an armed person before he took off running. Officers rendered aid and the 17 year old is in critical condition. @wjz pic.twitter.com/A56EDDBmzp
— Stephon Dingle WJZ (@Stephon_Dingle) May 11, 2023
• North Carolina's Democratic governor, Roy Cooper, vetoed a bill banning abortion after 12 weeks. "But to prevent the legislature from using its razor-thin supermajority to override his veto, Mr. Cooper is asking voters to pressure Republican lawmakers," reports The New York Times. "Convincing just one legislator will keep the state's current abortion law—allowing it up to 20 weeks—in place."
• "Ed Sheeran, Vanilla Ice, and a monkey walk into A.I. copyright law…" Carl Szabo of the tech trade association NetChoice argues against creating a new legal framework for content generated by artificial intelligence.
The post Twitter Caved to Censorship Requests From Turkey's Authoritarian Government appeared first on Reason.com.
In the wake of another SpaceX rocket explosion, a coalition of environmental groups and nonprofits has filed a lawsuit against the Federal Aviation Administration (FAA) claiming that the agency failed to perform adequate environmental analysis of private space company SpaceX's operations at its Boca Chica, Texas, launch site.
They're demanding that the existing environmental approvals be thrown out and redone, potentially delaying future SpaceX launches for years.
"Federal officials should defend vulnerable wildlife and frontline communities, not give a pass to corporate interests that want to use treasured coastal landscapes as a dumping ground for space waste," said Jared Margolis, a senior attorney at the Center for Biological Diversity, one of the plaintiffs, in a press release.
The first test launch of the company's Starship vehicle last month saw the rocket clear the pad, only to explode over the Gulf of Mexico. SpaceX CEO Elon Musk says the launch was a qualified success that provided the company with useful data and that it should be ready to launch again within a few weeks, reports CNBC.
The FAA has grounded SpaceX Starship rockets while it conducts a "mishap" investigation, reports Reuters.
The Center for Biological Diversity and its co-plaintiffs are demanding significantly more federal scrutiny.
The lawsuit filed by the Center for Biological Diversity, the American Bird Conservancy, the Surfrider Foundation, Save RGV, and the Carrizo/Comecrudo Nation of Texas in the U.S. District Court for the District of Columbia claims that the FAA failed to follow the requirements of the National Environmental Policy Act (NEPA) before signing off on SpaceX's Starship/Super Heavy launch project in June last year.
NEPA requires that federal agencies study the environmental impacts of the projects they take on, whether that's the decision to fund the widening of a highway or signing off on a new space launch facility.
In June 2022, the FAA released a 40-page environmental assessment of SpaceX's launch program that found no evidence it would have significant impacts provided it adopted 75 mitigation actions.
Ars Technica described a couple of those mitigation actions in an article from last year. SpaceX would have to shield its artificial lights to prevent interference with turtles nesting on nearby beaches, prevent falcons from nesting on site, and hire a biologist to monitor impacts to flora and fauna from its launches and other activities.
To placate humans who might want to vacation in the area, SpaceX also agreed to not launch rockets on various federal and Texas holiday weekends. SpaceX launches require the closure of access roads to public beaches in the area.
There were also more unusual demands made of SpaceX, including that it make annual $5,000 payments to an "adopt-an-ocelot" fund and pay for the installation of placards in the area detailing the local history of the Mexican-American and Civil Wars.
In addition to these mitigation measures, SpaceX also significantly scaled back the planned facilities at its Boca Chica site, reports Ars Technica. It ditched proposals for an on-site fuel manufacturing facility, a desalination plant, and a power plant.
SpaceX's hope was that the reduced footprint of the site would enable the company to avoid a full-blown Environmental Impact Statement (EIS), the most exhaustive form of NEPA review. EISs take 4.5 years to complete on average and often run over 600 pages; statements for large, complex projects like highways or power transmission lines can take over a decade.
Plaintiffs argue in their lawsuit that the SpaceX Starship launch program is the exact kind of program that requires an EIS, given the launch site's proximity to several state parks and wildlife areas and the size of the rockets being fired.
"Permitting SpaceX to launch the largest rockets known to humankind is a type of significant federal action that requires full analysis in an EIS," reads the lawsuit.
The NEPA suit raises a number of impacts that it claims were not sufficiently studied or mitigated in last year's assessment. While SpaceX is limiting its launches on and around particular holidays to reduce necessary road closures, plaintiffs argue that the 500 hours of road closures that the company is still allowed need to be analyzed further.
While SpaceX analyzed the effects of its own greenhouse gas emissions, it failed to address the cumulative harms of rising greenhouse gas emissions for the entire planet as required by NEPA, the lawsuit argues.
The June 2022 environmental assessment required SpaceX to consult with biologists about wildlife impacts after a launch "anomaly" and/or explosion. Plaintiffs are demanding that mitigations to prevent explosions in the first place should be adopted.
That seemingly misses the point of the launches. SpaceX is testing rockets to gather data so that the risk of future explosions can be reduced.
If mitigation measures existed to prevent these problems today, surely the company would have adopted them. SpaceX having to wait years to start launches again while an EIS is created means the company will lose years of data from additional test flights.
It's one example of how environmental review laws, and lawsuits citing them, can impinge on a goal that environmentalists and SpaceX are both seemingly in favor of: fewer SpaceX rockets exploding.
The post Environmentalist Lawsuit Could Delay SpaceX's Starship Launches for Years appeared first on Reason.com.
In this week's The Reason Roundtable, Katherine Mangu-Ward is back alongside editors Matt Welch, Nick Gillespie, and Peter Suderman as they contextualize Fox News' defamation settlement with Dominion Voting Systems and Tucker Carlson's sudden departure from the network.
01:20: Fox News' defamation settlement
15:55: BuzzFeed News shuts down
29:55: Tucker Carlson out at Fox News
44:35: Weekly Listener Question
50:45: Elon Musk and rapid unscheduled disassembly
52:35: This week's cultural recommendations
Mentioned in this podcast:
"The Fox/Dominion Settlement Highlights the Importance of Discovery in Proving 'Actual Malice,'" by Jacob Sullum
"In a $788 Million Defamation Settlement, Fox News Admits That It Spread False Claims About Election Fraud," by Jacob Sullum
"Fox's Excuses Reinforce Dominion's Defamation Case," by Jacob Sullum
"Tucker Carlson on The Daily Caller, His Infamous Jon Stewart Debate, and Walking out of the Ron Paul Convention," by Michael C. Moynihan
"Tucker Carlson Describes the Capitol Riot as 'Mostly Peaceful Chaos.' Is He Wrong?" by Jacob Sullum
"Does Tucker Carlson Get Anything Right About Libertarians?" by Matt Welch
"Tucker: The Man and His Dream," by Matt Welch
"The Death of BuzzFeed News Was a Facebook Murder-Suicide," by Robby Soave
"Democrats Threaten Matt Taibbi With Jail Time Over Twitter Files Testimony," by Robby Soave
Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.
Today's sponsor:
Audio production by Tarcisio Longobardi; assistant production by Hunt Beaty.
Music: "Angeline," by The Brothers Steve
The post What Does Tucker Carlson's Sudden Schism With Fox News Mean? appeared first on Reason.com.
Who's that sliding into your Twitter DMs? Is it the federal government? Well, according to Elon Musk, who took over the social media platform last year, government bureaucrats were routinely taking a close look at users' content.
Thanks to the Twitter Files, a collaboration between Elon Musk and independent journalists, as well as the Facebook Files, my own investigative project for Reason, we now know that social media companies constantly faced pressure to censor speech—and that pressure was coming from the government. The State Department, the Department of Homeland Security, the FBI, and even the White House all targeted legitimate online speech. Federal law enforcement agents flagged tweets and posts for deletion under the guise of protecting national security.
And no one should be surprised. The feds love to invoke national security, and then take away more of your rights and pretend they're keeping you safe.
Well, guess what: They're at it again. Following a massive leak of U.S. intelligence documents that were posted on Discord, another social media platform, the Biden administration wants more power to monitor online chat rooms.
A senior administration official told NBC News that the government "is now looking at expanding the universe of online sites that intelligence agencies and law enforcement authorities track."
That's bad. We know where it will lead: more government surveillance of the American people and, eventually, more censorship of political speech. In the runup to the 2020 election, for example, the FBI warned social media sites to be wary of Russian-based disinformation. But the bureau also flagged joke tweets written by Americans that happened to be about the election.
You see, the feds just can't help themselves: They'll use the new powers we give them to pressure the internet to shut down dissent. We've seen it happen time and again.
So let's keep the government out of our chatrooms. And watch out for your DMs.
Photos: Javier Rojas/ZUMAPRESS/Newscom; Michael Ho Wai Lee/ZUMAPRESS/Newscom; BOB STRONG/UPI/Newscom; Graeme Sloan/Sipa USA/Newscom
Music: "New Car—Instrumental" by Rex Banner via Artlist
The post Elon Musk Warns Tucker Carlson: The Feds Are in Your Twitter DMs appeared first on Reason.com.
Yesterday, NPR announced that it was leaving Twitter after CEO Elon Musk slapped a "state-affiliated media" label on its account—the same label used for outlets like the Chinese propaganda publication Xinhua, which has been described by watchdogs and scholars as the "eyes and tongue" of the Chinese Communist Party and "the world's biggest propaganda agency."
"NPR's organizational accounts will no longer be active on Twitter because the platform is taking actions that undermine our credibility by falsely implying that we are not editorially independent," reads the statement from executives. "We are not putting our journalism on platforms that have demonstrated an interest in undermining our credibility and the public's understanding of our editorial independence." The Public Broadcasting Service (PBS) followed suit, to which Musk replied: "Publicly funded PBS joins publicly funded NPR in leaving Twitter in a huff after being labeled 'Publicly Funded.'"
Musk's decision to slap such a label on NPR's account sure looks like a textbook culture war provocation, eyeing NPR as a target ripe for roasting by his followers. (He tweeted "Defund NPR," sharing a screenshot of an email an NPR reporter had sent him asking for comment.) But there's more than a grain of truth to what he's saying, and if NPR stopped taking government funds, it could clear its own name.
So how state-funded is NPR, really? At its inception, it was entirely paid for by the Corporation for Public Broadcasting (CPB), which was created by the Public Broadcasting Act signed by President Lyndon B. Johnson in 1967. Democrats at the time were enamored with the idea of improving the breadth and quality of educational programming. "While we work every day to produce new goods and to create new wealth, we want most of all to enrich man's spirit," said Johnson. "That is the purpose of this act."
So in 1970, the CPB formed NPR, which started with 88 station affiliates; All Things Considered launched a year later. For that first decade and change, NPR happily chomped away at the government teat, ultimately landing itself in major financial trouble in the early '80s, accused of reckless spending by the Government Accountability Office and forced by the CPB to restructure. Instead of giving money directly to NPR, which would then pass funds along to member stations, the CPB began giving funds to member stations—which now number more than 1,000—to buy NPR content. That remains the structure today, which makes it tough to suss out how much federal funding NPR actually gets.
As of 2017, NPR got 38 percent of its funding from individual donors; 19 percent from corporate sponsors; 10 percent from foundations; 10 percent from university licensing; and about 4 percent from the government. The CPB still funds NPR via the member station funds doled out, to the tune of roughly 10 percent nowadays, but it's tough to say exactly how much tainted taxed-away money is flowing into NPR's now-resplendent coffers. (Do public university funds count, for example?)
PBS, for its part, gets roughly 15 percent of its funding from the CPB, with a hefty chunk coming from individual and corporate donors.
Tea Partiers and Trumpists have cyclically threatened to ax public radio funding, and they ought to follow through—in part because NPR can surely stand on its own two legs now.
NPR wants to have it both ways: Its supporters say less than 1 percent of its funding comes from the government, but its website claims "federal funding is essential to public radio's service to the American public." It wants to keep holding out its hands for government money while bragging about complete and total editorial independence—two ideas held in tension—even as it develops quite the reputation for lefty bias (backed up by Knight Foundation and Pew Research Center audience polling), which leaves conservative and libertarian taxpayers' stomachs churning.
But Tea Partiers of yore, Trumpy types today, and Musk fanboys miss that it's not that NPR is totally wrong on everything, or the enemy of the people, or a Pravda-style propaganda arm of the U.S. government. NPR has reported on the Social Security Ponzi scheme and on how petty fines lead to driver's license suspensions—not to mention airing the dulcet tones of Reason's own Nick Gillespie making the case for…defunding public radio. ("The idea that we have an inalienable right to Car Talk or Sesame Street to be piped in over tax-supported airwaves strikes me as a stretch," said Gillespie in 2010.) NPR does good work at times and has shifted its funding model over the years to rely much more on individual, corporate, and foundation contributions. Its member stations ought to do the same, and all of them can succeed or fail on their own merits, the way pretty much all other publications and broadcasters do in this country.
What started as, essentially, a Great Society initiative designed to provide a diversity of programming options for the poor and downtrodden has outlasted its usefulness in the era of Netflix, Hulu, Disney+, and those little devices we keep in our pockets at all times that can stream any podcast on demand. The media landscape is totally different from how it was in 1967, and government allocation of funding ought to reflect that.
Why can't NPR and PBS privatize and charge customers a small monthly fee to view their content, the way many other platforms do? You still saw that Succession episode and you still read that New York Times or Bloomberg article even though you had to pay for the content (or prioritize what to consume due to cost).
It's time for the federal government to kick NPR and PBS out of the nest; your taxpayer dollars should never have been subsidizing Big Bird, Tiny Desk concerts, or those insufferable tote bags in the first place, and they certainly shouldn't now in the era of audiovisual abundance.
The post Ax Government Funding for NPR appeared first on Reason.com.
Over the past three years, we reporters learned there were certain things that we weren't allowed to say. In fact, my new video may have been censored not long ago.
One dangerous idea, we were told, was that COVID-19 might have been created in a lab at the Wuhan Institute of Virology. That seems very possible, since the institute studied coronaviruses in bats, and America's National Institutes of Health gave the lab money to perform "gain-of-function" research, experiments where scientists try to make a virus more virulent or transmissible.
A Washington Post writer worried the lab leak theory "could increase racist attacks against Chinese people and further fuel anti-Asian hate."
The establishment media fell in line, insisting that COVID most likely came from a local market that sold animals.
Left-wing TV mocked the lab theory as a "fringe idea" that came from "a certain corner of the right."
"This coronavirus was not manmade," said MSNBC's Chris Hayes, confidently, "That is not a possibility."
Not even a possibility?
Debate about it, we were told, posed a new threat: misinformation.
Facebook banned the lab leak theory, calling it a "false claim."
But now the U.S. Department of Energy says the pandemic most likely came from a lab leak. FBI director Christopher Wray now says the origin of the pandemic is "most likely a potential lab incident in Wuhan."
For two years, the most likely explanation was censored.
Do the media gatekeepers apologize for their censorship? No.
The closest to an admission of guilt I found was from Chris Hayes, who eventually said, "There's a kernel of truth to the idea that some folks were too quick to shut down the lab leak theory."
There was more than "a kernel of truth." Again and again, politically correct media silenced people who spoke the truth.
Facebook throttled the reach of science journalist John Tierney's articles simply because he reported, accurately, that requiring masks can hurt kids.
YouTube suspended Sen. Rand Paul (R–Ky.) for saying, "Most of the masks you get over the counter don't work."
But what they said is true. The Centers for Disease Control and Prevention updated its guidance to say cloth masks are not very effective. And now a big study failed to find evidence that wearing even good masks stops the spread of viruses.
Probably the most blatant censorship was Twitter's shutting down the New York Post's reporting about Hunter Biden's laptop.
Twitter wouldn't let users decide for themselves. The company just called the Post's report "potentially harmful" and blocked users from sharing it.
Facebook, as usual, was sneakier, suppressing the story instead of banning it outright. That's what they do to my climate change reporting.
Today, the media admit the Post story is true. But they don't admit they were wrong. Now they just say things like, "Nobody cares about Hunter Biden's laptop."
Bad as the media are, what's worse is that government wanted to censor.
Sen. Mark Warner (D–Va.) complained, "We've done nothing in terms of content regulation!"
Fortunately, his colleagues were not as irresponsible as he; no censorship legislation passed. But government did apply lots of pressure.
The White House asked Facebook to kill what it called "disinformation," even urging the company to censor private WhatsApp messages.
Now that Elon Musk owns Twitter and opened up the company's internal files, we know that censorship requests came from "every corner" of government, as journalist Matt Taibbi put it.
Even individual politicians tried to censor.
Sen. Angus King's (I–Maine) staff complained about Twitter accounts that they considered "anti-King." Rep. Adam Schiff's (D–Calif.) office asked Twitter to suppress search results.
Fortunately, Twitter refused.
But the sad truth is that lots of government agencies and media tyrants want to limit what you read and hear.
At least now, we can speak the truth:
COVID probably was created in a Chinese lab.
Masks are unlikely to provide much protection and requiring them can harm kids.
Hunter Biden did lots of sleazy things.
Self-appointed censors tried to shut us up, but eventually, the truth almost always comes out.
COPYRIGHT 2023 BY JFS PRODUCTIONS INC.
The post The Media and Politicians Keep Trying To Censor Things That Turn Out To Be True appeared first on Reason.com.
Well, it was fun while it lasted. The professional relationship between billionaire Elon Musk, the owner of Twitter, and independent journalist Matt Taibbi hit a serious and possibly irreparable snag late last week following Musk's quixotic decision to suppress links to Substack.
Substack is a newsletter company with a stated commitment to the principles of broad free expression; as such, it is a preferred platform for many independent journalists, such as Taibbi. Twitter and Substack share a major financial backer: The venture capital firm Andreessen Horowitz has invested $65 million in Substack and $400 million in Musk-era Twitter.
When he acquired Twitter last year, Musk vowed to correct what he viewed as heavy-handed moderation policies akin to censorship, saying that the social media site should function as a sort of town square. With an eye toward demystifying the site's approach to content moderation, Musk gave several journalists—including Taibbi, Michael Shellenberger, Bari Weiss, Lee Fang, and David Zweig—access to Twitter staff's internal emails. This led to a series of reports, the Twitter Files, that helped explain why Twitter's previous regime had decided to suppress various accounts believed to be part of a Russian influence campaign. Unbeknownst to much of the public, Twitter had faced constant pressure from the FBI, the Department of Homeland Security, the Centers for Disease Control and Prevention, and other government agencies.
Taibbi recently clashed with MSNBC's Mehdi Hasan, who attempted to discredit much of the Twitter Files.
Why did @mtaibbi launch the Twitter Files by suggesting that Biden officials got Twitter to delete tweets… without disclosing that the tweets included non consensual nude images of the president's son Hunter? @mehdirhasan asked him:
Full interview: https://t.co/mheOL0TcSW pic.twitter.com/FEVO13cmju
— The Mehdi Hasan Show (@MehdiHasanShow) April 7, 2023
Hasan did identify some errors in Taibbi's reporting, which first appeared on Twitter itself and was ultimately published by Substack. But one notable complaint of Hasan's—that Taibbi had confused the Cybersecurity and Infrastructure Security Agency (CISA) with the Center for Internet Security (CIS)—is not nearly the gotcha that Hasan seems to think it is:
Mehdi you obviously haven't read the reporting here. CISA is the DHS agency that partnered with EIP. Here's an email of a CISA employee discussing an EIP-ticketed misinfo report to Twitter employees. Matt got it right. https://t.co/WnPyl89LHm pic.twitter.com/ZV1vMqL42b
— Lee Fang (@lhfang) April 7, 2023
Hasan also assailed Taibbi for seemingly refusing to criticize Musk. But a mere 24 hours later, Taibbi did precisely that.
Of all things: I learned earlier today that Substack links were being blocked on this platform.
When I asked why, I was told it's a dispute over the new Substack Notes platform…
— Matt Taibbi (@mtaibbi) April 7, 2023
In response, Musk claimed that Twitter was not throttling Substack per se; rather, "Substack was trying to download a massive portion of the Twitter database to bootstrap their Twitter clone, so their IP address is obviously untrusted." The "clone" in question is Substack Notes, a new feature that is said to mimic Twitter in several ways. Musk also called Taibbi an "employee" of Substack—which is false—and unfollowed his former ally.
These are frustrating developments. Musk's stated plan to make Twitter a more hospitable place for free expression was admirable, as was his decision to provide some transparency about the platform's inner workings. Despite all the dismissive assertions from mainstream media pundits—many of whom assiduously bought into the oversold Russian influence narrative—the Twitter Files were in fact a somethingburger: They revealed a high level of coordination between major tech companies, government agencies, and political figures. It is thanks to Musk and Taibbi that this is now apparent.
Musk's treatment of Substack is a self-sabotaging betrayal of his principles—one that Taibbi has rightly declined to ignore. It is not enough to simply articulate anti-censorship commitments: If you're the sole figure in charge, you really have to live by them.
Riley Gaines, a former college swimmer who argues that it is unfair for transgender women to compete against cisgender women, said last week that she faced violence when speaking at San Francisco State University (SFSU).
"It was terrifying," she elaborated to Fox News. "The police did not inform me of any sort of action plan. Turning Point USA invited me to the campus. I delivered a very civil and respectful speech where I had great dialog with even protesters who were participating in a sit-in. All of a sudden, after my speech, the room was stormed, the lights were turned off, and I was rushed with no one there to escort me to a safe place."
The Foundation for Individual Rights and Expression (FIRE) called on SFSU to protect free speech rights on the campus. FIRE notes that the public university recently subjected a professor to an investigation for showing a picture of Muhammed in class—an action that is obviously protected by the First Amendment.
Dear @SFSU,
Stop investigating the professor who showed a picture of Muhammed in class and investigate the students who accosted @Riley_Gaines_ at a @TPUSA event instead.
TAKE ACTION: Demand that SFSU investigate the shoutdown, not the professor https://t.co/gskvbM02Cu pic.twitter.com/NiuVkqNgmG
— FIRE (@TheFIREorg) April 7, 2023
Two courts have issued conflicting rulings on an abortion-related issue: access to the drug mifepristone. A Texas judge, Matthew Kacsmaryk, has invalidated the Food and Drug Administration's approval of the pill, which was granted in 2000. According to The New York Times:
Mifepristone is the first pill in the two-drug medication abortion regimen that is used in over half of pregnancy terminations in the United States. It blocks a hormone that allows a pregnancy to develop. For now, it is still available.
Judge Kacsmaryk immediately stayed his ruling for seven days to give the Department of Justice…a chance to appeal it to the U.S. Court of Appeals for the Fifth Circuit, and the Justice Department has already filed notice of its appeal.
If the appeals court upholds the judge's order or declines to put it on pause until the full case is heard, the Justice Department will most likely appeal that decision to the Supreme Court, which could quickly decide whether or not to suspend the injunction. The Supreme Court would also take into account the contradictory ruling by the federal judge in the Washington district court case, legal experts said.
The second ruling, by Judge Thomas Rice in Washington, prohibits the Food and Drug Administration from restricting the supply of mifepristone.
The Volokh Conspiracy's Jonathan Adler argues that both rulings were legally wrong. He expects "quick action from appellate courts, if not the Supreme Court itself."
The post Elon Musk Throttles Substack, Clashing With Twitter Files' Matt Taibbi appeared first on Reason.com.
]]>"AI systems with human-competitive intelligence can pose profound risks to society and humanity," asserts an open letter signed by Twitter's Elon Musk, universal basic income advocate Andrew Yang, Apple co-founder Steve Wozniak, DeepMind researcher Victoria Krakovna, Machine Intelligence Research Institute co-founder Brian Atkins, and hundreds of other tech luminaries. The letter calls "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." If "all key actors" will not voluntarily go along with a "public and verifiable" pause, the letter's signatories argue that "governments should step in and institute a moratorium."
The signatories further demand that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." This amounts to a requirement for nearly perfect foresight before allowing the development of artificial intelligence (A.I.) systems to go forward.
Human beings are really, really terrible at foresight—especially apocalyptic foresight. Hundreds of millions of people did not die from famine in the 1970s; 75 percent of all living animal species did not go extinct before the year 2000; and global petroleum production did not peak in 2006, bringing "war, starvation, economic recession, possibly even the extinction of homo sapiens."
Nonapocalyptic technological predictions do not fare much better. Moon colonies were not established during the 1970s. Nuclear power, unfortunately, does not generate most of the world's electricity. The advent of microelectronics did not result in rising unemployment. Some 10 million driverless cars are not now on our roads. As OpenAI (the company that developed GPT-4) CEO Sam Altman argues, "The optimal decisions [about how to proceed] will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far."
Still, some of the signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing—e.g., doing better on the bar exam than 90 percent of current human test takers. They can also be confounding.
Some segments of the transhumanist community have been greatly worried for a while about an artificial super-intelligence getting out of our control. However, as capable (and quirky) as it is, GPT-4 is not that. And yet, a team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and in a pre-print reported, "The central claim of our work is that GPT-4 attains a form of general intelligence, indeed showing sparks of artificial general intelligence."
As it happens, OpenAI is also concerned about the dangers of A.I. development—however, the company wants to proceed cautiously rather than pause. "We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice," wrote Altman in an OpenAI statement about planning for the advent of artificial general intelligence. "We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize 'one shot to get it right' scenarios."
In other words, OpenAI is properly pursuing the usual human path for gaining new knowledge and developing new technologies—that is, learning from trial and error, not "one shot to get it right" through the exercise of preternatural foresight. Altman is right when he points out that "democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas."
A moratorium imposed by U.S. and European governments, as called for in the open letter, would certainly delay access to the possibly quite substantial benefits of new A.I. systems while doing little to make A.I. safer. In addition, the Chinese government and A.I. developers in that country seem unlikely to agree to the proposed moratorium anyway. Surely, the safe development of powerful A.I. systems is more likely to occur in American and European laboratories than in those overseen by authoritarian regimes.
The post Elon Musk, Andrew Yang, and Steve Wozniak Propose an A.I. 'Pause.' It's a Bad Idea and Won't Work Anyway. appeared first on Reason.com.
Benoit Blanc (Daniel Craig) is back. The eccentric detective and his endearing, exaggerated Southern accent have a new case to solve in Glass Onion, the sequel to 2019's beloved whodunit Knives Out.
The mystery unfolds in the midst of the COVID-19 pandemic, bringing Blanc to a private island owned by the tech billionaire Miles Bron (Edward Norton). Bron is clearly a broad—sometimes very broad—caricature of Elon Musk, and he's not alone: A men's rights YouTuber played by Dave Bautista is channeling Joe Rogan, or perhaps the accused sex trafficker Andrew Tate. A bought-and-paid-for Democratic politician (Kathryn Hahn), a clueless fashion influencer (Kate Hudson), an amoral scientist (Leslie Odom Jr.), and a rival tech mogul (Janelle Monáe) round out the cast of characters/list of suspects.
Writer-director Rian Johnson has once again assembled all the elements of a classic Agatha Christie–style detective story, added modern trappings, and poured glitter all over it. Glass Onion is a bit more straightforward than its predecessor, but there are still plenty of fake-outs, surprise killings, and flashbacks that reveal everything you thought you knew was a lie.
All that said, some viewers won't like this entry as much; its self-indulgent ending falls comparatively flat. And there's no getting around Glass Onion's hectoring progressive agenda: a ceaseless, flashy reminder that hating white male tech billionaires is The Current Thing. The film's politics date it, even more so than the COVID masks the characters wear before arriving at the crime scene.
The post Review: The Clumsy Anti-Muskism of <i>Glass Onion</i> appeared first on Reason.com.
The Alliance for Securing Democracy (ASD) is a nonprofit organization that leverages the purported expertise of former U.S. national intelligence officials to identify Russian influence on social media. Its advisory council includes the neoconservative writer Bill Kristol, Hillary Clinton campaign official John Podesta, and various former employees of national security agencies.
ASD maintains Hamilton 68, a dashboard that monitors 600 Twitter accounts alleged to be Russian bots. The dashboard was highly regarded by the mainstream media: Favorable coverage of ASD's work appeared in Politico ("The Russian Bots Are Coming. This Bipartisan Duo Is On It."), The Washington Post ("Russia-linked accounts are tweeting their support of embattled Fox News host Laura Ingraham"), and elsewhere.
But according to new revelations uncovered by independent journalist Matt Taibbi as part of the Twitter Files, the accounts on ASD's list weren't Russian bots. Moreover, Twitter content moderators knew the list was inaccurate but were reluctant to criticize it due to fears of bad press.
Indeed, Taibbi published screenshots of several emails that show Twitter's former trust and safety czar, Yoel Roth, discovering the list was wrong. The dashboard "falsely accuses a bunch of legitimate right-leaning accounts of being Russian bots," he wrote. "I think we need to just call this out on the bullshit it is."
3."Falsely accuses a bunch of legitimate right-leaning accounts of being Russian bots." pic.twitter.com/EHRWACkZu4
— Matt Taibbi (@mtaibbi) January 27, 2023
Ultimately, Roth did not publicly denounce the list as bullshit—in part because other Twitter employees dissuaded him. "We have to be careful in how much we push back on ASD publicly," said one communications official.
Taibbi notes that several of the officials who advised against a public confrontation with ASD eventually left Twitter to work for Democratic political figures.
This is all extremely damning. An organization with ties to the U.S. national security apparatus falsely portrayed a bunch of mostly right-leaning, Trump-supporting Twitter content as nefarious and Russian in origin. The mainstream media eagerly peddled this incorrect narrative. And Twitter wavered on pushing back because elite sentiment was so disposed to imagine Russian operatives hiding behind every curtain.
The post Twitter Files: Employees Knew the Media's Favorite Russian Bots List Was Fake appeared first on Reason.com.
The so-called Twitter Files, written by a group of independent journalists given access to internal company documents, offer a behind-the-scenes glimpse at how the federal government shaped the flow of information on one of the world's largest social media platforms.
Some tech pundits say that the Twitter Files contain no secrets: They already knew about the thousands of takedown requests the company receives every month from law enforcement agencies and the courts, and they had long opined about the immense challenges of content moderation. However, the Twitter Files have brought important new information to light. They show that the company stifled debate over important policy issues by shadowbanning certain accounts for no good reason and then misleading the public. They show that Twitter was routinely strongarmed by the White House and the FBI into complying with frivolous takedown requests. And they provide evidence that the intelligence community likely influenced the decision to suppress the Hunter Biden laptop story during Joe Biden's 2020 presidential campaign.
"Almost every conspiracy theory that people had about Twitter turned out to be true," Elon Musk said on the All-In podcast in late December. "Is there a conspiracy theory about Twitter that didn't turn out to be true?"
Conspiracy theorists are often sloppy with the facts and exaggerate what actually happened. But the information brought to light by the Twitter Files should be alarming to anyone who cares about free speech and a free society. Is the government meddling similarly with YouTube, Facebook, Instagram, and Google search? How can we prevent the internet from becoming a centralized apparatus through which state actors shape and censor public debate? Here are three major takeaways from the Twitter Files:
1. Twitter distorted the conversation and misled the public.
Twitter had a system of "whitelists" that allowed its algorithms and human moderators to turn engagement dials up and down based on what a user said. It used this power to limit the ability of certain groups and individuals to reach an audience, including conservative commentator Dan Bongino, Stanford economist and medical school professor Jay Bhattacharya, mRNA vaccine critic Alex Berenson, and the Libs of TikTok account.
The company regularly tap-danced around the meaning of "shadowbanning" to maintain plausible deniability. In a 2018 blog post, Twitter's Trust and Safety team wrote, "We do not shadow ban. You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile)."
Needless to say, making tweets so hard to find that digging through someone's profile is the only way to unearth them is what's commonly known as "shadowbanning"—or, as Twitter employees termed it with an Orwellian flair, "visibility filtering."
The Twitter Files show that company staff became increasingly comfortable using these tools to manage the flow of information and political discourse around the 2020 election. In the weeks preceding the January 6 riot and the decision to evict the president from the platform, moderators regularly deployed filters to limit the visibility of Trump's tweets and many others pertaining to election results.
Of course, Twitter is a private company, and it has every right to label the tweets of Harvard epidemiologist Martin Kulldorff as "misleading" when he tweets statements such as, "Thinking that everyone must be vaccinated is as scientifically flawed as thinking that nobody should."
But Twitter is still worthy of our condemnation. Bhattacharya was shadowbanned despite being a respected scholar at a prestigious university, and many of his warnings during the pandemic turned out to be correct.
And you can acknowledge serious problems with the work of former New York Times reporter Alex Berenson—who, for instance, badly misinterpreted data to infer a spike in "vaccine-caused mortality"—while still believing it's preferable to have a public airing of controversial and deeply flawed arguments.
A better way to deal with speech you disagree with is to respond to it, as Derek Thompson attempted to do in The Atlantic when he called Berenson "The Pandemic's Wrongest Man." Ironically, Twitter raised Berenson's profile by allowing him to inhabit the role of the oppressed truth-seeker.
2. The government is secretly policing speech.
The most troubling thing about the Berenson de-platforming isn't Twitter's decision per se, but whether it made that decision freely. Was it done at the behest of the federal government? The Twitter Files provide circumstantial evidence that the White House played a role.
"When the Biden admin took over, one of their first meeting requests with Twitter executives was on Covid," writes journalist David Zweig in the Twitter Files. "The focus was on 'anti-vaxxer accounts.' Especially Alex Berenson."
Berenson was suspended hours after Biden said to a reporter that social media companies were "killing people" by failing to police pandemic-related misinformation.
Zweig also revealed that a series of meetings took place last December in which an "angry" Biden team excoriated Twitter executives because they were "not satisfied" with its "enforcement approach" and wanted "Twitter to do more and to de-platform several accounts."
In Twitter Files 6, Matt Taibbi described Twitter as an "FBI subsidiary."
Agents from a dedicated task force would regularly send lists of accounts—some with fewer than 1,000 followers—for Twitter to look at for terms-of-service violations, such as this left-leaning account jokingly telling Republicans to vote a day late.
Yoel Roth, Twitter's former head of Trust and Safety, was in weekly meetings with the FBI, the Department of Homeland Security, and the Office of the Director of National Intelligence. In return for the company's work handling FBI requests, Twitter received $3.4 million between October 2019 and early 2021.
Complying with somewhere in the range of 8,000 requests would have required significant resources from Twitter, and there's no reason the government shouldn't have to pick up the tab. But should this arrangement exist in a free society, given the mission creep that the Twitter Files exposed?
Another alarming secret revealed by the Twitter Files: what led Twitter to block users from sharing a major New York Post story about the contents of Hunter Biden's laptop. The files reveal that Jim Baker, the former FBI lawyer then working at Twitter, leaned on Roth to treat the laptop as the likely result of a Russian hack-and-leak operation, despite little evidence for that claim. The FBI had told Roth to expect just such a foreign operation to drop in October and that Hunter Biden would be a likely target. A month before the laptop story broke, Roth even participated in a tabletop simulation at the Aspen Institute about handling a Hunter Biden data dump.
Publicly, 51 former intelligence officials, including James Clapper, Michael Hayden, and John Brennan, published a letter claiming that the laptop story "has all the classic earmarks of a Russian information operation." The twist? The New York Post story turned out to be completely true.
With all that pressure coming from supposedly reputable and knowledgeable sources, how independently was Twitter acting when it suppressed the story?
Giving the government unfettered access to exert pressure behind the scenes turned a forum for free discussion—with all the unavoidable messiness and misinformation that free speech entails—into something much worse: a state-approved narrative generator.
3. Twitter permitted covert state propaganda on its platform.
The U.S. ran sock-puppet accounts on Twitter and then may have tried to shut them down secretly when it appeared to have been caught in the act.
After Trump won the 2016 election, and Hillary Clinton blamed Russia for her loss, Congress began to focus intently on the role of foreign misinformation on the internet.
Twitter began transparently labeling accounts associated with any government, whether it be a politician or a state-run media outlet. It also pledged to Congress to "rapidly identify and shut down all state-backed covert information operations & deceptive propaganda."
But apparently, that didn't always extend to propaganda disseminated by the U.S. government.
As The Intercept's Lee Fang revealed, the U.S. Central Command (CENTCOM) asked Twitter to "whitelist" several Arabic-language accounts spreading messages in support of the U.S.-backed Yemen War so that they would get the same special treatment that verified accounts receive. Twitter complied.
One whitelisted account described "accurate" U.S. drone strikes that "only hit terrorists." At first, the accounts had an attached disclosure noting that they were linked to the U.S. government. But that disclosure was eventually dropped from many accounts, and at least one "whitelisted" government sockpuppet account used an A.I.-generated image as a profile picture.
An internal email thread shows that the Department of Defense wanted to meet with Twitter's legal team in a secure facility. One member of Twitter's team speculated that the agency wanted to classify its work with Twitter to "avoid embarrassment." Baker suspected that the fake accounts had been set up using "poor tradecraft" and that the agency wanted to wind down the operation without revealing the accounts' "connection to the DoD."
So what should we demand of Congress in light of the Twitter Files revelations?
Lawmakers should rein in the FBI and other executive agencies with stricter reporting requirements and defund any federal task force whose mission includes fighting the slippery concept of "misinformation." And yet that will probably never happen—the new omnibus bill just increased the FBI's funding. Unfortunately, if a tool of centralized speech control can be abused by a government, it's a virtual certainty that it will be abused eventually.
We should follow the advice of Julian Assange and the cypherpunk movement that shaped his thinking, which maintains that technology, not government policy, is the only effective check on authoritarian tendencies in the long run.
"You cannot trust a government to implement the policies it says it's implementing," Assange said in a filmed discussion with his fellow cypherpunks in 2017. "We must provide the underlying tools, cryptographic tools that we control, as a sort of use of force."
The good news is that many people are gradually waking up to the dangers of allowing a company to maintain centralized control over public conversation. After Trump's de-platforming, many conservatives migrated to the encrypted network Telegram. Since Elon Musk's takeover of Twitter, many progressives have joined the decentralized platform Mastodon. Former Twitter CEO Jack Dorsey has invested in Nostr, an open protocol that allows users to transmit a post over a decentralized network of relays using cryptographic keys. Dorsey has also published an article calling for "a native internet protocol for social media," arguing that "Social media must be resilient to corporate and government control" and "moderation is best implemented by algorithmic choice."
And if Elon Musk wants to keep Twitter relevant and competitive in this continually fracturing landscape, and to live up to his promise of bringing more free speech to the platform, he will have to take these developments seriously and think about ways that Twitter 2.0 can avoid its predecessor's fate of falling victim to state interference.
He should aim to design a platform that makes this kind of meddling impossible so that we don't have to trust any tech executive, including Elon Musk, not to censor speech on behalf of the government.
Produced by Zach Weissmueller; edited by Regan Taylor; sound editing by John Osterhoudt.
Photo credits: PIERRE VILLARD/SIPA/Newscom; Adrien Fillon/ZUMAPRESS/Newscom; Al Drago—Pool via CNP/SIPA/Newscom; Alisdare Hickson, CC BY-SA 4.0, via Wikimedia Commons; CelebrityHomePhotos/Newscom; Daniel Oberhaus, CC BY-SA 4.0, via Wikimedia Commons; Dominick Sokotoff/ZUMAPRESS/Newscom; Dylan Stewart/Image of Sport/Newscom; Gage Skidmore, CC BY-SA 3.0, via Wikimedia Commons; I, Aude, CC BY-SA 3.0, via Wikimedia Commons; Image of Sport/Newscom; imageBROKER/Markus Mainka/Newscom; Jarek Tuszyński / CC-BY-SA-3.0 & GDFL, CC BY-SA 3.0, via Wikimedia Commons; Kyodo/Newscom; Kris Tripplaar/Sipa USA/Newscom; Lordalpha1, CC BY 2.5, via Wikimedia Commons; LiPo Ching/TNS/Newscom; Michael Ho Wai Lee/ZUMAPRESS/Newscom ;Ministério Das Comunicações, CC BY 2.0, via Wikimedia Commons; New Media Daysderivative work:—Cirt, CC BY-SA 2.0, via Wikimedia Commons; Oliver Contreras—Pool via CNP/picture alliance / Consolidated News Photos/Newscom; Paul E Boucher/ZUMA Press/Newscom; picture alliance / Frank Duenzl/Newscom; Ron Sachs/picture alliance / Consolidated News Photos/Newscom; Ser Amantio di Nicolao, CC BY 3.0, via Wikimedia Commons; Tesla Owners Club Belgium, CC BY 2.0, via Wikimedia Commons; Tom Williams/CQ Roll Call/Newscom; Whoisjohngalt, CC BY-SA 4.0, via Wikimedia Commons; Yichuan Cao/Sipa USA/Newscom; ZUMAPRESS/Newscom
The post Did 'Every Conspiracy Theory' About Twitter Turn Out To Be True? appeared first on Reason.com.
When Russian tanks rolled across the border into Ukraine on February 23, much of the world seemed to believe the conflict would be over quickly—that the brave Ukrainians would be crushed by the Russian war machine, one of the world's largest and most expensive militaries.
Whether Russian President Vladimir Putin shared that opinion is unclear, but there's no doubt that the invasion was a display of his own overconfidence: in his military, in the expected acquiescence of Ukraine, and in the world's willingness to let a dictator redraw the boundaries of a well-established, independent European country. It hasn't gone as planned. Instead of a quick victory, the war has turned into a churning, bloody stalemate. The U.S. estimates that Putin has lost 100,000 of his own troops while conquering relatively little new territory (to say nothing of the massive humanitarian disaster the war has unleashed).
Hubris—the arrogant pride that goeth before the fall—has been a fundamental part of the human experience since long before the ancient Greeks wove it into stories as the metaphorical Achilles heel that toppled heroes and villains alike. It's always been with us, but hubris had a real moment in 2022. Politicians insisted that government action was necessary to rein in social media platforms, but the market has largely done that on its own. This year also saw the spectacular collapse of Sam Bankman-Fried's crypto-trading-platform-turned-Ponzi-scheme, and with it the exposure of the hollowness of Bankman-Fried's "effective altruism" ideology. An anticipated "red wave" failed to materialize for Republicans in the midterm elections, the promise of "transitory" inflation was punctured on almost a monthly basis, and China was forced to ease up on its "zero COVID" strategy after lockdowns triggered protests and demands for freedom unlike anything seen there in decades.
On a moral scale, everything on that list pales in comparison to Russia's invasion of Ukraine, of course. And there's a big difference between overconfidently risking your own property and destroying what belongs to someone else.
Even so, the tendency of those in power to topple themselves (or at least lose a lot of money and support) by overreaching should provide a lesson to policy makers and the rest of us as we head into 2023 about the volatility and unpredictability of the modern world. It's not just those in power who are susceptible to hubris; it's anyone who thinks he knows what's going to happen next.
Indeed, what happened to Facebook and Twitter in 2022 should put to bed the hubristic declarations of people like Sen. Elizabeth Warren (D–Mass.), who has argued for government action to break up Big Tech. The market is beating her to it.
Facebook's revenue and user base are spiraling downward, and the site's parent company, Meta, announced in November that it was laying off 11,000 workers. Meta CEO Mark Zuckerberg had a rough year, but Musk might have had it even worse. He lost $25 billion on January 27 after Tesla announced that it would delay delivering some vehicles due to supply chain issues. That was the fourth-steepest one-day drop in a person's wealth in the history of Bloomberg's personal wealth index. And that was well before Musk decided to go on his ego-driven Twitter misadventure, which has blown another $39 billion hole in his net worth, according to Bloomberg's numbers. Users unhappy with Musk's leadership have fled to other platforms, accelerating an existing trend of greater competition and fragmentation in social media.
Call it creative destruction. Call it voting with your virtual feet—like the ones that might someday be attached to those weird avatars in the metaverse. Either way, it should inject some much-needed humility into the debate over the future of Big Tech.
Meanwhile, Putin wasn't the only foreign leader left exposed by overconfidence this year. China's handling of the COVID-19 pandemic had been a point of national pride, and had been offered as evidence that authoritarian countries were better equipped to handle modern crises than democracies. Then, a new wave of the virus exposed the problems with China's zero COVID strategy—and protests against the resulting lockdowns forced the government to ease some of its strictest rules, including allowing people with mild cases to quarantine at home rather than in mass quarantine facilities. "Even in a historically authoritarian country, you can only keep your boot on someone's neck for so long," Reason's Emma Camp observed earlier this month.
American leaders who fret about the threat of a rising China—and who use that threat to counterproductively make America more like China—should take notice. China's zero COVID strategy has also sunk its economy and lowered expectations for the near future. "Until recently, many economists assumed China's gross domestic product measured in U.S. dollars would surpass that of the U.S. by the end of the decade," The Wall Street Journal reported in October. Now, more are wondering "if it ever will."
Politically, it seems like the year of hubris has once again demonstrated why democracy is the least-bad form of government. Democratic countries still make big mistakes, of course, but it's more difficult to launch a massively destructive war or lock your own citizens inside their homes for extended periods of time when you have to occasionally gain the consent of the governed to stay in power.
China's economic issues are tied up in its authoritarian ones. Simply put, democracies seem better positioned to handle a world where everything is in constant flux because elections serve as a sort of market mechanism.
"Authoritarian systems have a tendency toward groupthink and ideological rigidity, frequently proving unwilling or unable to properly assess information and change course when existing policies prove disastrous," Vox senior correspondent Zach Beauchamp wrote earlier this month. During 2022, he wrote, "authoritarian governments that were supposed to outcompete democracy floundered, while some of the biggest democracies staved off major internal challenges."
All governments are susceptible to hubris by their very nature. But democracy, like a free market, injects humility and uncertainty into the system. Those are benefits, not flaws. That's something I can say with utter confidence.
The post 2022 Was the Year of Hubris appeared first on Reason.com.
It's the season for giving.
I'll give!
I'll donate to the Doe Fund, a charity that helps ex-cons find purpose in life through work. "Work works!" they say. It does. Doe Fund graduates are less likely to go back to jail.
I'll donate to Student Sponsor Partners (SSP), which helps at-risk kids escape bad "public [government-run]" schools.
SSP sends the kids to Catholic schools. I'm not Catholic, but I donate because the Catholic schools do better at half the cost. Thousands of families break the cycle of poverty thanks to SSP.
When I was young, I assumed government would lift people out of poverty. "Government programs, a 'war on poverty' will give a leg up to the poor," said my Princeton professors. I believed. But then I watched the programs fail.
Now I understand that government actions do as much harm as good. Sometimes, much more harm.
Take that "war on poverty." When it began, Americans were lifting themselves out of poverty. Year by year, the number of families below the poverty line decreased.
Then came our government bureaucrats with their rules and programs. So far, they've spent $25 trillion on programs for the poor.
The money helped some people. The poverty rate dropped for the first seven years of the "war."
But then progress stopped. Government's handouts encouraged people to become dependent on handouts.
Learned helplessness, it's called.
Welfare created an "underclass," generations of people who don't work. They'd lose benefits if they did.
Generations of people bear children but don't marry. They'd lose benefits if they did.
Government taught people to be passive. This passivity was something new and bad.
That's why charity is better. Charity workers can make judgments about who needs help and who needs a push.
Not all charities do good. Some are as bad as government. But when they are well run, charities encourage independence.
They also don't force us to give them money.
There's an even better way to help people: capitalism. Not that I'll convince most people.
Sen. Elizabeth Warren (D–Mass.) complained that Elon Musk should "pay taxes and stop freeloading off everyone else."
Freeloading? I like Musk's answer.
"I will pay more taxes than any American in history." (He paid $11 billion that year.) "Don't spend it all at once … oh wait you did already."
They sure did. The feds burn through $11 billion every 15 hours. Now Republicans and Democrats will spend even more.
Musk, meanwhile, is trying to make Twitter profitable. Some of his ideas are bad. Some will fail. But at least Musk spends his own money or money people willingly loan him.
Warren and her fellow politicians take money from us by force.
I prefer Musk's way.
Billionaires sometimes do nasty things. Facebook's Mark Zuckerberg censors truthful reporting. Zuckerberg and Amazon's Jeff Bezos sneakily lobby for regulations (like a higher minimum wage) that give them advantages over their competitors. Former President Donald Trump and Oprah Winfrey bought gross polluting yachts.
But it's their own money. They are free to spend it on whatever they want. Most do better things with it than government would.
Zuckerberg invented better ways to connect with people. Bezos makes shopping easier and cheaper. Musk stopped socialist idiots from censoring my Twitter account, created better electric cars, and gave satellite internet to poor people.
Businesses do better things because competition forces them to spend money well. If they don't spend well, they disappear.
Government never disappears. When politicians fail, they force us to give them more of our money so they can do it again.
People hate capitalists, but it's the capitalists who create the jobs, lift people out of poverty, and feed the world.
I'm a reporter, not an entrepreneur. I'm not likely to invent something new and useful. So today, I'll give money to charity.
It makes me feel good.
But the world benefits more from people like Musk—and the millions of entrepreneurs who try new things.
COPYRIGHT 2022 BY JFS PRODUCTIONS INC.
The post Charity and Capitalism Are Better Than Government appeared first on Reason.com.
]]>"My pronouns are Prosecute/Fauci," Elon Musk tweeted early Sunday, launching a new round of condemnation from his progressive critics and glee from his growing base of right-wing fans. But Musk, while admittedly a skilled pot-stirrer, is little more than that where the substantive demand of the post is concerned: He can't investigate—let alone prosecute—retiring National Institute of Allergy and Infectious Diseases chief Anthony Fauci.
The Republican lawmakers who cheered his tweet, however, can do exactly that, particularly after the GOP regains control of the House in less than a month. Yet there's a crucial piece missing in the vast majority of "prosecute Fauci" discourse, from legislators as much as from Musk: What, exactly, is the anticipated charge?
Reason reached out to the offices of six GOP lawmakers who had positively engaged with Musk's post and/or previously expressed intent to investigate Fauci next year: Sens. Ted Cruz (R–Texas) and Rand Paul (R–Ky.) and Reps. Andy Biggs (R–Ariz.), Marjorie Taylor Greene (R–Ga.), Ronny Jackson (R–Texas), and Kevin McCarthy (R–Calif.), the latter of whom is the embattled front-runner to be the new House speaker. Are there specific charges, I asked, which the lawmakers expect to be brought against Fauci after a probe by Congress or some other investigatory body?
All four representatives' offices ignored the question. But the senators' offices did reply and, in doing so, staked out the better of two ways this cause célèbre could be pursued.
Cruz's team gave the less specific answer of the two. They pointed me to a recent interview the senator gave on Fox, his face placed next to a photo of Musk. "Dr. Fauci lied to Congress—he flat-out lied to Congress when he said that, No, the federal government had not funded gain-of-function research at the Wuhan Institute for Virology," Cruz said. "Subsequently the NIH [National Institutes of Health] has made clear that was a lie, and yet, you know what, [Attorney General] Merrick Garland and the Biden DOJ, they won't prosecute him, so when [Twitter staff] come in front of Congress and perjure themselves, I am sadly confident that Merrick Garland will not prosecute them."
Dr. Fauci flat-out lied to Congress.
Yet Merrick Garland and the Biden DOJ won't prosecute. #JusticeCorrupted https://t.co/ZI4PFOs9iC pic.twitter.com/nH3YU9mBPQ
— Ted Cruz (@tedcruz) December 6, 2022
I asked if it would be fair to say Cruz has a federal perjury charge (or something in that ballpark) in mind for Fauci, but his office was unwilling to commit to the specific statute.
Paul, however, was willing to commit. His office referred me to a July letter he sent to Garland pressing for investigation into Fauci's congressional testimony about National Institutes of Health (NIH) funding for gain-of-function research in Wuhan, China, the city where the novel coronavirus first infected the public in late 2019. Paul's letter specified 18 U.S. Code § 1001, a portion of federal criminal code concerning fraud and false statements knowingly made to a congressional investigation, as the statute Fauci may have violated.
In a comment sent by email, Paul reiterated his interest in pursuing "a robust and bipartisan investigation into the origins of COVID" in his new role as ranking member of the Senate Homeland Security and Governmental Affairs Committee next year. "Daily we are discovering new information that we will reveal as the investigation continues," he wrote.
Whether a perjury, fraud, or similar charge would stick is far from certain. As Reason science correspondent Ronald Bailey has detailed, whether the research in question is actually gain-of-function work is legitimately disputable, though the more recent NIH statement Cruz seemed to reference may indeed tip the balance of evidence toward "yes." Still, that statement describes the NIH grant recipient, EcoHealth, failing to make a required report, which means it's possible the funding did go to gain-of-function research and that Fauci sincerely believed it didn't.
That scenario points to a tricky aspect of this sort of allegation: It requires proving knowledge more than action, and it's not always feasible to prove what someone knew and when. But Fauci's long public employment means he has reams of correspondence in federal records, so testing a charge of this sort may be easier than it would be in most circumstances. If he's committed fraud or perjury as a high-ranking federal official, by all means, let's prosecute (but not imprison).
The trouble is this sort of specific, dis/provable, and fairly narrow allegation of willful deception is not where the great bulk of the "prosecute Fauci" conversation is going. (None of those four House members, for example, clearly have it in view. "I affirm your pronouns, Elon," said Greene's reply.) What Cruz and Paul are trying is the better of two versions of this cause—but it also seems to be, by far, the less popular.
More common is something in the realm of Musk's follow-up tweet, which cast Fauci as the corrupt Wormtongue to President Joe Biden's decaying King Théoden in a scene from The Lord of the Rings. "Just one more lockdown, my king," Fauci whispers.
But giving bad advice is not a crime—and baseless calls for prosecution are no improvement over failure to prosecute powerful figures at all. They might be good for collecting likes and donations, but that doesn't make them good for bolstering the rule of law.
The post What Do Republicans Want To 'Prosecute Fauci' for, Exactly? appeared first on Reason.com.
Twitter expands definition of doxxing to include links to publicly available information. Twitter CEO Elon Musk has suspended the accounts of multiple tech journalists. The suspended journalists include reporters for Mashable, CNN, The New York Times, and The Washington Post. Musk also suspended the account of the decentralized social network Mastodon.
Musk's move comes after he suspended an account (@ElonJet) that shared publicly available information about his flight plans. A few weeks ago, Musk made a big deal about not banning the account, tweeting "my commitment to free speech extends even to not banning the account following my plane." But like so many of Musk's free speech principles, this one appears to have been short-lived.
Musk claimed the @ElonJet account violated Twitter's policy against doxxing—revealing private information about another person. That's a stretch, since Musk's flight plans come from public records. But doxxing tends to get defined vaguely and broadly these days, and at least the @ElonJet account could plausibly fall somewhere within these parameters.
Meanwhile, Mastodon was banned simply for linking to an account on its platform that tracked Musk's jet. Now, "many links to Mastodon no longer work on Twitter, which flags them as 'potentially harmful,'" notes TechCrunch.
Washington Post journalist Drew Harwell then shared a screenshot of Mastodon's tweet; he was suspended. Pundit Keith Olbermann called on people to retweet Harwell; Olbermann was then suspended, too. It's unclear for how long any of these accounts will be suspended.
It's also unclear precisely what got each of the suspended journalists the boot, but Musk implied they were all guilty of violating Twitter's anti-doxxing policy. "So far, i've been able to confirm about half the accounts suspended posted links to the jet tracker thing," tweeted Founders Fund Vice President Mike Solana, to which Musk responded: "Same doxxing rules apply to 'journalists' as to everyone else."
The suspended tech reporters don't seem to have actually doxxed Musk by any normal understanding of the word. But earlier this week, Twitter's doxxing policy was updated to make the @ElonJet account and people sharing links to Musk's (public!) flight information elsewhere a direct violation. As recently as Tuesday, it said that you couldn't share "home address or physical location information, including street addresses, GPS coordinates or other identifying information related to locations that are considered private." Now, it includes "live location information, including information shared on Twitter directly or links to 3rd-party URL(s) of travel routes, actual physical location, or other identifying information that would reveal a person's location, regardless if this information is publicly available." In other words: Twitter changed its doxxing policy precisely to target activity that seemed to bother Musk.
Twitter is a private company, so it can certainly ban users—including journalists—for any or no reason. But Musk's decision to do so is one more nail in the coffin of the "free speech absolutist" label he was posturing toward.
It's also yet more evidence that the new Twitter isn't going to be better than the old Twitter, which was given to silly-seeming reasons for bans and sometimes stretched its terms and conditions to justify action against inflammatory or controversial accounts. Arguably, Musk's Twitter is even worse, since these moves are not being made by misguided, low-level content moderators or a team of deliberating individuals but by the whims of one thin-skinned dude who also happens to be the head of the company.
Say what you will about Jack Dorsey, Twitter's founder and former head, but he stayed relatively removed from day-to-day suspension and banning decisions, and was never known to take action against anyone for criticizing, mocking, or reporting badly on him.
Meanwhile, Musk is establishing a pattern of lashing out against people who seem to anger or offend him personally—and changing Twitter's official policies in order to justify it. Last month, Musk suspended the accounts of people who were parodying him after changing Twitter's rules to say that parody must be clearly labeled as such in account handles.
Further showcasing a lack of principles, Musk ran a Twitter poll yesterday asking when he should allow the suspended tech journalists to return to Twitter—then balked at following through when people voted that they should be reinstated immediately. ("Sorry, too many options. Will redo poll," Musk tweeted.) When Musk polled people about whether to let former President Donald Trump back on and the answer was yes, Musk respected the results and reinstated Trump's account (tweeting "Vox Populi, Vox Dei," a.k.a. "the voice of the people is the voice of God").
It seems Musk's "power to the people" mantra only applies when the people agree with what Musk himself wants.
This is the song that never ends…
@SenMikeLee has introduced a bill that would remove porn's First Amendment protections, and effectively prohibit distribution of adult material in the US. FSC is monitoring the bill, and will continue to do so in the new Congress. https://t.co/ofx2kCsacA
— Free Speech Coalition (@FSCArmy) December 15, 2022
IRS releases private data about taxpayers … again. "Confidential data of about 112,000 taxpayers inadvertently published by the IRS over the summer was mistakenly republished in late November and remained online until early December," reports Bloomberg:
Form 990-T data that was supposed to stay private had been taken offline but made its way back to the IRS site when a contractor uploaded an old file that still included most of the private information, a letter sent Thursday to congressional leaders said. The agency is required to make Form 990-Ts filed by nonprofit groups available online but is supposed to keep the form filed by individuals private; in both cases, the agency made that information available too.
An internal programming error caused the September release of private forms along with the ones filed by nonprofit groups, the letter said. This time, the contractor tasked with managing the database reuploaded the older file with the original data instead of a new file that filtered out the forms that needed to be kept private.
As of December 1, the information that was supposed to be taken down was back online, and the IRS learned about it only after being alerted by a third-party researcher. The forms did not contain Social Security numbers, but some contained names and contact information.
• "The U.S. Senate approved a one-week extension of federal government funding, averting a partial government shutdown that was scheduled to begin Saturday," reports CNBC. "The measure, which passed 71 to 19, gives lawmakers an additional week to negotiate and pass a comprehensive bill to fund federal agencies through the fiscal year, which ends Sept. 30."
• Tomorrow is International Day To End Violence Against Sex Workers. More on the day's history, events, and mission here.
• Internal memos from the Food and Drug Administration show the agency's regulation of electronic cigarettes rests more on dubious value judgments than on science, suggests Reason Senior Editor Jacob Sullum. "The memos came to light thanks to a lawsuit that Logic Technology filed against the FDA after the agency approved the marketing of the company's tobacco-flavored e-cigarettes but rejected applications for menthol-flavored versions. The documents show that higher-ups in the FDA overrode staff scientists who initially recommended approval of the latter applications."
• Hmm…
OK, here's a First Amendment lawsuit for you:
A woman is suing the city of Tallahassee after she was removed from the Citizen Police Review Board for having a cup with an "abolish the police" sticker on it https://t.co/s1xmtMw53D pic.twitter.com/KqYt8O1xf2
— CJ Ciaramella (@cjciaramella) December 15, 2022
• Sen. Elizabeth Warren's (D–Mass.) crypto bill targets financial freedom, not fraud, writes J.D. Tuccille for Reason.
• Politics is getting in the way of what makes cities great.
• PSA:
This is your periodic reminder to never take a police polygraph.
Let's talk about why. /1
— Doug Gladden (@DougtheLawyer) December 14, 2022
The post Elon Musk Kicks Tech Journalists, Mastodon Off Twitter appeared first on Reason.com.
Independent journalist Matt Taibbi dropped the first version of the Twitter Files as a thread on the social media site on December 2.
1. Thread: THE TWITTER FILES
— Matt Taibbi (@mtaibbi) December 2, 2022
Since then, Taibbi and his collaborators Bari Weiss and Michael Shellenberger have published several more threads on topics ranging from Twitter's suspension of former President Donald Trump's account to the systematic, secret suppression of topics and accounts like those of Stanford University economist and medical school professor Jay Bhattacharya, conservative commentator Charlie Kirk, and Libs of TikTok.
Join Nick Gillespie and Zach Weissmueller at 1 p.m. Eastern time for a live discussion of the Twitter Files, Elon Musk's reboot of Twitter, the cozy relationship between social media companies and federal law enforcement, and what it all means for the future of online speech.
Ask questions or leave comments ahead of or during the stream on the YouTube video above or at Reason's Facebook page here.
The post Sifting Through the Twitter Files: Live With Nick Gillespie and Zach Weissmueller appeared first on Reason.com.
A California city will have to spend nearly $1 million remedying the harms of its allegedly illegal and unconstitutional "crime-free" rental housing program. On Wednesday, the U.S. Department of Justice (DOJ) announced a settlement agreement with Hesperia, California, and the San Bernardino County Sheriff's Department that unwinds the last vestiges of a local program that required landlords to evict tenants suspected of committing crimes—regardless of whether they'd been convicted or even charged with anything.
The settlement will require the city and sheriff's department to spend $670,000 compensating people harmed by the ordinance. The defendants will have to spend another $300,000 in fines and on community programs and marketing to promote fair housing.
"'Crime-free' ordinances may also constitute a discriminatory solution in search of a problem and run afoul of the core goals underlying the Fair Housing Act," said the DOJ's Assistant Attorney General for Civil Rights Kristen Clarke in a press release announcing the settlement.
The original crime-free program passed by the Hesperia City Council in 2016 required landlords to evict whole families upon notification by the sheriff's department that someone in the household had committed a crime.
Additionally, it required rental property owners to register their units with the city, submit the names of all adult tenant applicants to the sheriff's department for criminal background checks, and to include crime-free addendums to lease agreements. Property owners also had to submit to annual police inspections.
Noncompliant landlords were threatened with fines and, in at least one case, criminal charges.
At the time, city leaders said the ordinance would "improve our demographic" and keep out nonnatives who come "from somewhere else with their tainted history." One council member equated the law to pest control, saying "you would call an exterminator to kill roaches."
The Hesperia program was bad enough to unite some strange bedfellows in opposition. The American Civil Liberties Union (ACLU) and the Trump-era DOJ sued Hesperia in 2016 and 2019, respectively. The former alleged a number of violations of the equal protection guarantees of the California and U.S. constitutions. The DOJ lawsuit accused the city and sheriff's department of Fair Housing Act violations.
The DOJ lawsuit notes that one woman who called the police on her abusive husband was evicted under the ordinance. In another case, an elderly couple was evicted after their adult son—who didn't live with them—was arrested.
The California Apartment Association, a landlord trade association, also said the Hesperia law was unconstitutional.
The ACLU lawsuit forced major changes to Hesperia's rental housing program. In response to the suit, the city begrudgingly made landlord registration and evictions for alleged criminal activity voluntary in 2017.
Ideally, it would be up to property owners to decide who they want to rent to and on what conditions.
Some cities have restricted landlords' right to make those decisions with laws banning them from screening tenants' criminal backgrounds. Proponents of those policies argue that without these protections, former convicts won't be able to find safe and secure housing.
Hesperia had the opposite concern, believing that people convicted (or just suspected) of crimes were having too easy of a time securing housing on the private market. Its crime-free housing program attempted to put a stop to that. The city would have saved itself a lot of trouble, and legal fees, had it just let willing landlords and tenants make whatever kinds of arrangements both parties could agree to.
A Twitter account tracking Elon Musk's private jet has been suspended. In the latest dust-up at Musk-run Twitter, the company also axed the account of college student Jack Sweeney, who ran the ElonJet account that used public flight data to keep tabs on the billionaire's air travel.
In tweets on Wednesday night, Musk appeared to blame Sweeney's account for an incident in which a "crazy stalker" followed a car containing the billionaire's son X. Musk said the pursuer blocked the car and climbed on the hood. He's threatened legal action against Sweeney.
Last night, car carrying lil X in LA was followed by crazy stalker (thinking it was me), who later blocked car from moving & climbed onto hood.
Legal action is being taken against Sweeney & organizations who supported harm to my family.
— Elon Musk (@elonmusk) December 15, 2022
Axios reports that the unapologetic Sweeney is taking his operation over to Mastodon. Musk has said that sharing someone's location on a delayed basis will still be allowed, but real-time sharing of someone's location will be considered prohibited doxxing.
Real-time posting of someone else's location violates doxxing policy, but delayed posting of locations are ok
— Elon Musk (@elonmusk) December 14, 2022
It takes two years to get approval for a new apartment building in the nation's "not in my backyard" (NIMBY) capital. A new analysis from the San Francisco Chronicle shows that the typical applicant waits 627 days to get a building permit for a multifamily development. A single-family development takes a whopping 861 days.
"The biggest impact is cost," one architect told the Chronicle. "This costs money for everybody in the city. It affects the quality of life for everybody, because everybody lives somewhere."
Delays often kill new housing entirely. Developers will apply for permits for a project that pencils out under one set of economic conditions, and then not receive approval until after construction material and financing costs have made building anything infeasible.
State law requires San Francisco to reduce constraints on new housing. If it doesn't come up with a plan to do that within the next few months, the city could be effectively stripped of its power to enforce its local zoning code against new housing. The city's frustrated builders are positively salivating at the potential projects this would allow.
• The Federal Reserve has hiked interest rates by 0.5 percentage points in an effort to combat stubbornly high inflation. Markets retreated slightly following the announcement, reports The Wall Street Journal.
• The Biden administration announced that it will restart a program that provides free at-home COVID-19 tests to head off a winter surge of the virus. Federal public health officials are once again recommending people wear masks in cities where reported COVID-19 cases are increasing.
• They don't make monarchical power struggles like they used to.
• The Washington Post has confirmed that it will be laying off staff to cope with declining subscriptions.
• France beats Morocco 2–0 in their World Cup semifinal match.
• Russia warns the U.S. not to provide more missiles to Ukraine.
The post City That Forced Landlords To Evict Tenants Suspected of Crimes Will Pay $1 Million To Settle DOJ Lawsuit appeared first on Reason.com.
From the outside, Twitter's content moderation decisions look haphazard at best. From the inside, they look worse, especially because government officials play an unseemly and arguably unconstitutional role in shaping those decisions.
The internal communications that Elon Musk, Twitter's new owner, has been gradually revealing to a select few journalists show that the company's former executives arbitrarily applied the platform's vague rules and surreptitiously suppressed content from disfavored accounts. The "Twitter Files" also confirm that the company had a cozy relationship with federal agencies, allowing them to indirectly censor speech they deemed dangerous.
Musk, a self-described "free speech absolutist," is trying to signal that things will be different under his ownership. He faces a daunting challenge as he attempts to implement lighter moderation policies without abandoning all content restrictions, lest Twitter become a "free-for-all hellscape" that alienates users and advertisers.
One part of that mission should be relatively straightforward. Musk could make it clear that neither government bureaucrats nor elected officials have any business dictating what Twitter's rules should be or how they should be enforced.
Musk took a significant step in that direction last month by rescinding Twitter's ban on "COVID-19 misinformation," a nebulous category that ranged from verifiably false assertions of fact to demonstrably or arguably true statements that were deemed "misleading" or contrary to government advice. That policy invited censorship by proxy, giving the Biden administration an excuse to enforce obeisance to an ever-evolving "scientific consensus" by publicly and privately pressuring Twitter to crack down on speech that officials viewed as a threat to public health.
The Twitter Files show that the company also collaborated with the FBI, the Department of Homeland Security, and the Office of the Director of National Intelligence in identifying and suppressing "election misinformation," another ill-defined category open to wide interpretation. Executives enforcing that policy regularly conferred with those agencies, and they privately recognized that such coziness would be controversial if it were publicly acknowledged.
The reason for that reticence should be obvious. It is one thing for a platform to enforce its own content restrictions, even if it does so in a way that is widely viewed as unfair, inconsistent, or politically biased. But when that platform takes its cues from the government, private moderation decisions can easily become a cover for unconstitutional speech restrictions.
Because the government has the power to make life difficult for social media companies through castigation, regulation, litigation, and legislation, its "requests" always carry an implicit threat. It is therefore not surprising that Twitter and other major platforms have been eager to fall in line.
Musk himself seems confused about the issues at stake here. He tends to conflate "freedom of speech" with freedom from private content restrictions and misleadingly implies that Twitter's broad ban on "hateful conduct," which he is still avowedly committed to enforcing, applies only to speech that fits within judicially recognized exceptions to the First Amendment.
Musk's confusion was apparent last week, when Rep. Adam Schiff (D‒Calif.) said he was "demanding action" in response to an "unacceptable" rise in "hate speech" on Twitter since Musk took over the platform in late October. Musk responded by questioning the evidence that Schiff cited, saying "hate speech impressions are actually down by 1/3 for Twitter now vs prior to acquisition."
Instead of getting bogged down in a debate about whether Twitter has in fact been overrun by bigots on his watch, Musk should have asked why Schiff thinks he has the authority to demand the censorship of speech that offends him. The First Amendment, which bars Congress from "abridging the freedom of speech," is pretty clear on that point.
Independent journalist Glenn Greenwald laments that "dictating to social media companies what they can and can't platform, how they must censor, the role Democratic politicians play in all this, is just assumed as normal." Musk is well-positioned to challenge that assumption, and he could start by telling Schiff to mind his own business.
© Copyright 2022 by Creators Syndicate Inc.
The post Elon Musk Should Take a Clear Stand Against Censorship by Proxy appeared first on Reason.com.
In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Peter Suderman, and Nick Gillespie discuss the Brittney Griner prisoner swap and the Twitter Files 2.0.
0:27: Brittney Griner released
9:46: New Twitter Files drop
32:43: Weekly Listener Question:
Should Libertarian candidates throw the election one way or the other in exchange for a public pledge by one of the two large-party candidates? For example, adopting an official public policy and/or naming them to a Cabinet position? Obviously, the candidate couldn't force voters to cooperate, but the large number of libertarians in excess of the number needed suggests that enough would go along with the scheme to make it work… and of course, no one could prove that it didn't. Whaddya think?
42:55: Kyrsten Sinema leaves the Democratic Party.
48:58: This week's cultural recommendations
Mentioned in this podcast:
"Brittney Griner Has Finally Been Released," by Emma Camp
"The Brutality of Brittney Griner's New Home," by Emma Camp
"Twitter Files: FBI, DHS Reported Tweets for Election Misinformation," by Robby Soave
"Bari Weiss Twitter Files Reveal Systematic 'Blacklisting' of Disfavored Content," by Robby Soave
"Sinema's Defection from Democrats Sows Welcome Chaos Among the Political Class," by J.D. Tuccille
Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.
Audio production by Ian Keyser
Assistant production by Hunt Beaty
Music: "Angeline," by The Brothers Steve
The post Merchants of Death, Swaps, and Shake-ups appeared first on Reason.com.
Law enforcement officials in the Department of Homeland Security (DHS) and Federal Bureau of Investigation (FBI) met regularly with top content moderators at Twitter during the 2020 presidential election, independent journalist Matt Taibbi revealed in the latest dispatches from the Twitter Files.
Taibbi describes an "erosion of standards within the company" that took place between October 2020 and January 6, 2021, as Twitter's Trust & Safety team, headed by Yoel Roth and Vijaya Gadde, took a more active—and, according to Taibbi, arbitrary—role in moderating election-related content. Taibbi contrasts their moderation decisions with calls made by "Safety Operations," a broader team "whose staffers used a more rules-based process for addressing issues like porn, scams, and threats."
15. There was at least some tension between Safety Operations – a larger department whose staffers used a more rules-based process for addressing issues like porn, scams, and threats – and a smaller, more powerful cadre of senior policy execs like Roth and Gadde.
— Matt Taibbi (@mtaibbi) December 9, 2022
Notably, Twitter's moderation decisions during this time period increasingly relied on input from the FBI and DHS. In internal Slack conversations, Twitter policy director Nick Pickles floated the idea of publicly admitting that the company's misinformation policies were partly based on feedback from experts in law enforcement; he eventually decided just to call them "partnerships."
Other Slack messages suggest that Roth met regularly, even weekly, with the FBI and DHS. The FBI also reported tweets for spreading election misinformation, sometimes prompting Twitter to take action.
This is the aspect of content moderation that should provoke the greatest concern from the general public. While Twitter's opaque and inconsistent policies are undoubtedly enraging, they are a private company's terms of service; ultimately, users can (and should) complain, and encourage new leadership—i.e. Elon Musk—to change course, but there isn't a strong public policy connection.
Dictates from law enforcement, on the other hand, are absolutely matters of public policy. Is it proper for agents of the state to encourage private entities to suppress misinformation, even as national political figures excoriate these entities for not moderating more aggressively? The First Amendment might have something to say about that.
The post Twitter Files: FBI, DHS Reported Tweets for Election Misinformation appeared first on Reason.com.
Rep. Adam Schiff (D–Calif.) yesterday said he is "demanding action" in response to an "unacceptable" rise in bigoted slurs on Twitter since Elon Musk took over the platform in late October. Musk responded by taking issue with the evidence that Schiff cited, saying "hate speech impressions are actually down by 1/3 for Twitter now vs prior to acquisition." What he should have said is that government officials in a free society have no business demanding the suppression of speech they do not like.
"We are deeply concerned about the recent rise in hate speech on Twitter," Schiff and Rep. Mark Takano (D–Calif.) write in a letter to Musk. "Analysis by independent researchers indicates Twitter has become an increasingly toxic place for our constituents, and we are reaching out to you to understand the actions Twitter is taking to combat this increase in harmful content."
Schiff and Takano ostensibly are just asking questions and urging Musk to step up enforcement of Twitter's ban on "hateful conduct." But they are doing that in their official capacity as members of Congress, a job that gives them no authority to police speech or insist that anyone else do so. To the contrary, the First Amendment explicitly bars Congress from "abridging the freedom of speech." By publicly pressuring Musk to censor "hate speech," which is indisputably covered by the First Amendment, Schiff and Takano are trying to indirectly accomplish something that the Constitution forbids.
Because government officials have the power to make life difficult for social media companies through regulation, litigation, and legislation, their demands for "action" always carry an implicit threat. Schiff and Takano's letter is an example of the "jawboning against speech" that Cato Institute policy analyst Will Duffield describes in a recent report. "Government officials can use informal pressure—bullying, threatening, and cajoling—to sway the decisions of private platforms and limit the publication of disfavored speech," Duffield notes. "The use of this informal pressure, known as jawboning, is growing. Left unchecked, it threatens to become normalized as an extraconstitutional method of speech regulation."
Schiff is working hard to advance that process. "What steps is your company taking in response to the recent rise in hate speech on your platform and how do you plan to make these decisions available to the public?" he and Takano ask. "Additionally, what is your timeline for rolling out any of these changes?" They also want information about "Twitter's plan to increase safety for its users, and more specifically the LGBTQ+ community and the Jewish community"; "the current process for enforcing content moderation on your platform"; Twitter's "current capability and capacity to handle the risks arising from the extreme rise in hate speech"; and its "current risk-assessment process and response timeline for viral hate speech and disinformation."
Musk could reasonably respond that none of this is any of Schiff and Takano's business, a stance that would strike a blow against this "extraconstitutional method of speech regulation." That position would be consistent with Musk's decision to rescind Twitter's ban on "COVID-19 misinformation," a nebulous policy that invited censorship by proxy. Instead, Musk is getting bogged down in a debate that pits the evidence Schiff favors, which comes mainly from tallies of slur-containing tweets and retweets by the Center for Countering Digital Hate, against internal Twitter data on "hate speech impressions."
Musk himself seems confused about the issues at stake here. He tends to conflate "freedom of speech" with freedom from private content restrictions, which puts him in a bind as a self-declared "free speech absolutist" who is not willing to let people say whatever they want on Twitter, lest the platform become a "free-for-all hellscape." Here he is trying to reconcile that perceived contradiction: "New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max deboosted & demonetized, so no ads or other revenue to Twitter. You won't find the tweet unless you specifically seek it out, which is no different from rest of Internet."
While Musk is inclined toward lighter content moderation in some areas, he is still avowedly determined to enforce the "hateful conduct" ban. "Twitter's strong commitment to content moderation remains absolutely unchanged," he tweeted last month. "In fact, we have actually seen hateful speech at times this week decline *below* our prior norms, contrary to what you may read in the press."
The suspension of Ye, the music and fashion tycoon formerly known as Kanye West, for a series of antisemitic tweets is the most conspicuous example of Musk's "strong commitment" to eliminating "hateful speech." But Musk's explanation of that decision suggests he does not really understand how free speech works.
"At a certain point, you have to say what is incitement to violence," Musk said during a Twitter Spaces conversation on Saturday, "because that is against the law in the U.S. You can't just have a 'let's go murder someone' club. That's not actually legal."
That gloss implies that Twitter's content moderation is limited to speech that fits judicially recognized exceptions to the First Amendment, which is plainly not true. Ye's anti-Jewish remarks and imagery did not qualify as illegal "incitement" by any stretch of the imagination. Under the test that the Supreme Court established in the 1969 case Brandenburg v. Ohio, even advocating criminal conduct (which Ye did not do) is constitutionally protected unless it is both "directed" at inciting "imminent lawless action" and "likely" to do so.
Twitter's "hateful conduct" policy sweeps much more broadly. "You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease," it says. "We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories." Also forbidden: "hateful imagery and display names."
Ye's tweets, which included a Star of David with a swastika at its center and his announcement that it was time to go "death con 3 On JEWISH PEOPLE," pretty clearly ran afoul of that policy. But they did not even come close to meeting the Brandenburg test.
Musk's confusion only encourages critics like Schiff and Takano, who portray bigoted speech as a "human safety" issue and warn that "harmful rhetoric" fosters violence. "After the Colorado Springs Shooting, in which the LGBTQ+ community was specifically targeted, we saw anti-LGBTQ+ hate become viral on Twitter," they say. "We are concerned about the individual and community harm arising from Twitter, including how that could spill from online into real life." This is the same connection that Musk draws when he describes Ye's antisemitic remarks as "incitement to violence."
Since Twitter, as a private business, is not bound by the First Amendment, Musk is free to make any decisions he wants about content moderation. We, in turn, are free to criticize those decisions, free to debate the significance of the data cited by the Center for Countering Digital Hate, free even to wonder whether it is wise or practical to enforce a policy like Twitter's. We are also free to eschew Twitter because we don't like its moderation practices, whether because they are too strict, because they are not strict enough, or because they are too arbitrary and haphazard.
Schiff, by contrast, is bound by the First Amendment when he flexes the powers of his office, which he is clearly trying to do when he writes this sort of letter on official stationery in his official capacity. Likewise when legislators castigate the executives of social media companies at congressional hearings for failing to crack down hard enough on "hate speech" or when executive-branch officials publicly and privately pressure those companies to censor "misinformation" they view as a threat to public health.
Glenn Greenwald laments that "dictating to social media companies what they can and can't platform, how they must censor, the role Democratic politicians play in all this, is just assumed as normal." Musk is well-positioned to challenge that assumption, and he could start by telling Schiff and Takano to mind their own business.
The post Adam Schiff Attempts Censorship by Proxy, 'Demanding Action' To Suppress 'Hate Speech' on Twitter appeared first on Reason.com.
The second installment of the Twitter Files—disclosures about politically motivated content suppression on the platform—has been released, this time from independent journalist Bari Weiss.
"A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users," writes Weiss.
1. A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users.
— Bari Weiss (@bariweiss) December 9, 2022
The previous installment, released by independent journalist Matt Taibbi, focused on the confused and chaotic decision on the part of Twitter executives to offer a "hacked materials" rationale for suppressing the New York Post's Hunter Biden laptop story; as such, the files mostly provided more evidence of what was already fairly well-known.
The Weiss installment, on the other hand, offers significant evidence of something that many people merely suspected was taking place: wholesale blacklisting of Twitter accounts that were perceived to be causing harm.
Weiss provides several examples of ways in which the platform limited the reach of various high-profile users: Jay Bhattacharya, a Stanford University professor of medicine who opposed various COVID-19 mandates and lockdowns, was on a "trends blacklist," which meant that his tweets would not appear in the trending topics section; right-wing radio host Dan Bongino landed on a "search blacklist," which meant that he did not show up in searches; and conservative activist and media personality Charlie Kirk was slapped with a "do not amplify" label. At no point did anyone at Twitter communicate to these individuals that their content was being limited in such a manner.
These actions, of course, sound a lot like "shadow banning," which is the theory that Twitter surreptitiously restricts users' content, even in cases where the platform has not formally issued a ban or suspension. For years, various figures on the right and contrarian left have complained that the reach of their tweets had substantially and artificially diminished for nonobvious reasons, contrary to the stated claims of top-level Twitter staffers who steadfastly asserted: "We do not shadow ban."
This claim depends upon how the term is defined. To be clear, Twitter has publicly admitted that it suppresses tweets that "detract from the conversation," though the platform's plan was to eventually move toward a policy of informing users about suppression efforts—a move that never took place.
If shadow banning is defined as secretly making a user's content utterly undiscoverable, even by visiting his own page, then it's technically true that there is no shadow banning. (Twitter seems to have defined the term that way.) But when most people complain about shadow banning, they are objecting to a secretive process of restricting, hiding, and limiting content in general, without informing users about why these decisions have been made. Under this definition, it's crystal clear that Twitter engages in shadow banning.
As a private company, Twitter was obviously within its rights to do this, but users also have the right to be furious. The lack of transparency is startling, and the rationale behind the policy is wholly contrary to a culture of free speech. According to Weiss, previous Twitter Head of Trust and Safety Yoel Roth admitted in internal Slack conversations that "the hypothesis underlying much of what we've implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that."
That's a very fraught proposition that would appear to justify an extreme amount of censorship. As always, it's important to keep in mind that social media moderators, misinformation beat reporters, federal health advisers, and national intelligence officials have all failed to correctly distinguish actual misinformation from true information—the New York Post laptop story being just one prominent example.
The post Bari Weiss Twitter Files Reveal Systematic 'Blacklisting' of Disfavored Content appeared first on Reason.com.
]]>In an apparent effort to complete his bingo card of regulatory agencies to feud with, Elon Musk is now fighting with the San Francisco Department of Building Inspection (DBI).
The drama started Monday when Forbes reported that Musk-owned social media company Twitter had converted rooms in its San Francisco headquarters to sleeping quarters for use by weary employees. By Tuesday, DBI was confirming to reporters that it was investigating Twitter for building code violations.
"We need to make sure the building is being used as intended," Patrick Hannan, a spokesman for the department, said in an email to The Washington Post. "There are different building code requirements for residential buildings, including those being used for short-term stays. These codes make sure people are using spaces safely."
The inspection got some heated responses from Musk, who suggested the city spend less time policing Twitter's office space and more time preventing fentanyl overdoses.
So city of SF attacks companies providing beds for tired employees instead of making sure kids are safe from fentanyl. Where are your priorities @LondonBreed!? https://t.co/M7QJWP7u0N
— Elon Musk (@elonmusk) December 6, 2022
But San Francisco is hardly the only city that makes repurposing office space needlessly difficult.
Across the country, once-bustling central business districts are suffering a post-pandemic slump as downtown workers have switched to fully remote work or hybrid schedules. Office occupancy rates have fallen to half their pre-pandemic levels, according to data tracked by Kastle Systems.
At the same time, residential rents have returned to, or even exceeded, their pre-pandemic levels in most cities.
The dynamic is piquing developers' interest in repurposing empty offices for apartments.
A recent study from RentCafe found that there were 28,000 office-to-apartment conversions in 2020 and 2021. That's up from the years immediately before the pandemic. These conversions are also growing faster than new apartment construction.
But that still represents a small portion of the nearly 800,000 apartments built in the same period. Myriad regulations help prevent these conversions from being more common.
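To put "small portion" in numbers, here is a quick back-of-the-envelope check, a sketch using only the figures cited above:

```python
# Office-to-apartment conversions as a share of new apartment supply,
# using the 2020-21 figures from the RentCafe study cited above.
conversions = 28_000        # office-to-apartment conversions, 2020-21
apartments_built = 800_000  # "nearly 800,000 apartments built in the same period"

share = conversions / apartments_built
print(f"{share:.1%}")  # 3.5%
```

So conversions accounted for roughly 3.5 percent of new apartments over those two years.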
Building code provisions about light and air are among the most common hurdles, according to a new policy brief from Up for Growth. These codes will typically require that bedrooms come with windows that can open. That often can't be accommodated in office buildings with "deep" floor plans with lots of interior, windowless space. One solution is to install an atrium in the middle of the building. But those turn into dark tunnels after five stories and require giving up leasable square footage, notes the Up for Growth report.
Requirements about how close residential units have to be to a stairwell can also trip up the conversion of large, single-stairwell office buildings.
Offices often aren't zoned for residential use either. And getting a property rezoned is a long, expensive process with no guarantee that zoning officials will agree to the proposed use change. Even if they do, rezonings can trigger expensive additional requirements.
In Washington, D.C., for instance, rezoning a property from non-residential to residential use requires the developer to offer some units in the new building at below-market rates to lower-income residents.
These policy headwinds are in addition to practical and financial barriers to office-to-apartment conversions. Vacancy rates are up across offices, but few office buildings are totally empty. That means a developer would have to buy out existing tenants.
Office rents are also typically higher than residential rents, meaning demand for office space has to be really low and residential rents high to make the math work.
The aforementioned Up for Growth report found that in Denver, only 12 buildings, making up 6 percent of downtown office space, were suitable for apartment conversions. A report from Moody's Analytics in April dismissed office-to-apartment conversions as a passing fad.
On the flip side, major New York developer Silverstein Properties is trying to raise $1.5 billion from investors for office-to-apartment conversions.
Policy makers have also expressed an interest in scaling back regulations to ease these conversions.
New York Gov. Kathy Hochul included a preliminary proposal to give Manhattan developers more flexibility on light and air requirements in her budget proposal earlier this year. New York City Mayor Eric Adams has also endorsed the idea. A commission convened by Hochul and Adams is supposed to make recommendations on which rules need to change by the end of the year.
More heavy-handedly, U.S. Sen. Debbie Stabenow (D–Mich.) proposed subsidizing office-to-apartment conversions in exchange for making some of the units affordable.
None of this will help Musk or his employees looking to nap in comfort. The billionaire's fight with city regulators is a good reminder of how regulation has made downtowns so unadaptable.
The post Elon Musk Isn't the Only One Fighting Regulators for Turning Offices Into Bedrooms appeared first on Reason.com.
]]>In today's WSJ, Democratic Representative Ro Khanna explains why he laments Twitter's handling of the controversy, even though it may have benefitted his political party.
Twitter's suppression [of the Hunter Biden story] violated the First Amendment principles Brennan articulated in [New York Times v.] Sullivan. Twitter banned links to the story and suspended accounts that shared it, including President Trump's press secretary and the New York Post itself—arguing that the story violated company policy because it contained information obtained through illegal means. Under the same logic, they'd have to suspend any account that posted the Pentagon Papers, which is protected by New York Times Co. v. U.S. (1971), or the story of Mr. Trump's leaked tax returns.
As Silicon Valley's representative in Congress, I reached out to Twitter at the time to share these concerns. In an email meant to be private, but recently made public by Matt Taibbi's "Twitter Files" thread, I wrote to Twitter's general counsel that the company's actions "seemed to be a violation of First Amendment principles." Although Twitter is a private actor not legally bound by the First Amendment, Twitter has come to function as a modern public square. As such, Twitter has a responsibility to the public to allow the free exchange of ideas and open debate.
Unlike many who comment on such controversies, Rep. Khanna recognizes that whether a company like Twitter is legally obligated to respect free speech principles is a separate question from whether it is desirable or beneficial for it to do so. That Twitter is not required to provide a robust forum for divergent views and perspectives does not mean it should not do so. Put another way, pointing out that Twitter is not bound by the First Amendment is no answer to criticism of Twitter for selectively suppressing speech or information that is disagreeable or disfavored.
More from Rep. Khanna:
I agreed with Twitter's decision to take down explicit photos of Hunter Biden and to prevent algorithmic amplification of the Post story. But there's a difference between sharing and artificially amplifying. Social-media companies shouldn't have bots that amplify speech in the first place—they add chaos to the dialogue. They certainly shouldn't be abusing people's data by using it to target them with sensational content. We need to uphold the sovereign right to our data. Even so, the story itself shouldn't have been censored, and those who shared it shouldn't have been suspended. That went too far.
To the extent Twitter makes value decisions on the forum—types of conspiracy theories or hate-inciting content that gets removed or isn't promoted—the company should have clear and public criteria. Elon Musk has said Twitter will be "more aggressive than ever, using technology to reduce the reach of hateful Tweets and prevent their amplification." Transparency into how decisions are made, including some recourse to appeal, is crucial—with some independent committee or board that will thoughtfully consider a complaint of censorship.
A robust defense of First Amendment principles online is more important than ever. Citizens in our polarized country need to have conversations with each other based on mutual respect. Suppressing speech we don't like leaves us blind to alternative perspectives that help us see the whole, complex truth.
The post Rep. Khanna on Twitter, Free Speech, and the Hunter Biden Story appeared first on Reason.com.
]]>
In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Peter Suderman, and Nick Gillespie dig into the release of the Twitter Files by journalist Matt Taibbi and CEO Elon Musk.
1:32: The Twitter Files drop
33:48: Weekly Listener Question:
Here in my hometown of NYC (where apparently the carpetbagger Gillespie has chosen to make his home), just earlier today, the Adams administration stated its intention to increase involuntary hospitalizations for homeless individuals suffering from severe mental illness. Essentially, they are seeking to expand the interpretation of the legal standard from hospitalizing those who are likely to cause "serious harm" to themselves or others to those "whose mental illness prevents them from meeting their basic survival needs of food, clothing, shelter, or medical care." Notwithstanding the legitimate concerns over "the state" and "involuntary hospitalizations" appearing in the same sentence, or nightmare scenarios about who may be labeled "unable to meet their basic needs" (perhaps someone consuming a large soda?), would the Roundtable care to weigh in on where the line may be here? At what point, if ever, should the state involuntarily hospitalize and/or medicate someone to protect themselves or those around them?
52:41: This week's cultural recommendations
Mentioned in this podcast:
"Elon Musk and Matt Taibbi Reveal Why Twitter Censored the Hunter Biden Laptop Story," by Robby Soave
"Twitter Is More Like a Traveling Circus Than a Public Square," by Steven Greenhut
"Twitter Quits the Biden Administration's Ham-Handed Crusade Against COVID-19 'Misinformation'," by Jacob Sullum
"Eric Adams' Plan To Involuntarily Hospitalize Mentally Ill Homeless People Will Face Legal Challenges," by C.J. Ciaramella
Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.
Audio production by Ian Keyser
Assistant production by Hunt Beaty
Music: "Angeline," by The Brothers Steve
The post What Twitter's Suppression of the Hunter Biden Laptop Story Tells Us About the Media appeared first on Reason.com.
]]>On Friday, Elon Musk announced that he would release the Twitter Files: a behind-the-scenes account of why the social media site prevented users from sharing the New York Post's infamous Hunter Biden laptop story. That story, which was erroneously categorized by national intelligence experts as disinformation of dubious and possibly Russian origin, has become the archetypical example of social media moderation gone awry.
Musk gave the scoop to the independent journalist Matt Taibbi, whose report was published on Twitter itself (in a somewhat confusingly ordered thread).
1. Thread: THE TWITTER FILES
— Matt Taibbi (@mtaibbi) December 2, 2022
The thread contains fascinating screenshots of conversations between various content moderators and company executives as the laptop story debacle was unfolding. But given how massively Musk hyped the revelations, the results are a tad disappointing, and mostly confirm what the public already assumed: A (still unidentified) employee or process flagged the story as "unsafe" and suppressed its spread, and then Twitter moderators devised a retroactive justification—violation of a "hacked materials" policy—for having taken such an extraordinary step. Then-CEO Jack Dorsey was largely absent from these conversations; Vijaya Gadde, Twitter's former head of legal, policy, and trust, played "a key role." None of this material is groundbreaking; it's already well-known.
To be clear, it's useful to see some of these internal messages. They confirm that Twitter's various departments—communications, moderation, senior management—horrendously mismanaged the entire affair. They were not all on the same page: Vice President of Global Communications Brandon Borrman, for example, was immediately unconvinced by the "hacked materials" justification.
27. Former VP of Global Comms Brandon Borrman asks, "Can we truthfully claim that this is part of the policy?" pic.twitter.com/Rh5HL8prOZ
— Matt Taibbi (@mtaibbi) December 3, 2022
Another employee, Deputy General Counsel Jim Baker, essentially felt that erring on the safe side meant suppressing the story until evidence emerged that it wasn't hacked. This turned out to be an extremely bad judgment. But again, the informed public already knew that the mess had been made by some combination of incompetence and employees' anti-Republican biases.
The most interesting revelation in Taibbi's thread is that Twitter's top executives were warned, over and over again, that this decision was going to create a backlash like nothing they had ever seen before. Rep. Ro Khanna (D–Calif.), a progressive lawmaker, repeatedly emailed a Twitter communications staffer to complain that the firm was violating "1st Amendment principles." (He raised some very valid points in his communications with the company, though strictly speaking the First Amendment does not apply in this situation.) NetChoice, a tech industry trade association, explicitly told Twitter that this would be the company's "Access Hollywood moment." (Unlike Twitter, both Khanna and NetChoice come off looking pretty good in all this.)
Taibbi is a good journalist, and should be commended for adding to the public's understanding of what went wrong here. Progressive figures currently slamming him for having participated in Musk's reveal are behaving horribly. There's nothing "disgraceful" whatsoever about Taibbi's reporting.
And kudos to Musk for attempting to shed more light on what truly was a low moment for the culture of free speech on the platform. But we're essentially in the-butler-did-it territory: The mystery's explanation is exactly what everyone expected, and largely already knew.
The post Elon Musk and Matt Taibbi Reveal Why Twitter Censored the Hunter Biden Laptop Story appeared first on Reason.com.
]]>Twitter has seen an "unprecedented" increase in "hate speech" since Elon Musk took over the platform in late October, The New York Times reports. As the paper's fact checkers might say about someone else's scary claims, the Times story is misleading and lacks context. But it highlights the challenges that Musk faces as he tries to implement lighter moderation practices without abandoning all content restrictions.
Before Musk bought Twitter, the Times says, "slurs against Black Americans showed up on the social media service an average of 1,282 times a day. After the billionaire became Twitter's owner, they jumped to 3,876 times a day. Slurs against gay men appeared on Twitter 2,506 times a day on average before Mr. Musk took over. Afterward, their use rose to 3,964 times a day. And antisemitic posts referring to Jews or Judaism soared more than 61 percent in the two weeks after Mr. Musk acquired the site."
Those numbers, which the Times attributes to "the Center for Countering Digital Hate, the Anti-Defamation League and other groups that study online platforms," might seem alarming. But in the context of a platform whose users generate half a billion messages every day, they suggest that explicitly anti-black, anti-gay, and anti-Jewish tweets are pretty rare.
Assuming that "antisemitic posts" are about as common as the two other categories, we are talking about something like 0.0024 percent of daily tweets. The Times concedes that "the numbers are relatively small," which is like saying the risk of dying from a hornet, wasp, or bee sting is relatively small.
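That share can be checked with a quick calculation. Note that the ~4,000 figure for antisemitic posts is an assumption, since the Times reports only a percentage increase for that category:

```python
# Rough share of explicitly bigoted tweets, using the post-takeover daily
# counts the Times attributes to researchers. The antisemitic count is an
# assumed ~4,000/day, since the Times reports only a 61 percent increase.
anti_black = 3_876
anti_gay = 3_964
antisemitic = 4_000  # assumption, not a reported figure
daily_tweets = 500_000_000  # "half a billion messages every day"

share = (anti_black + anti_gay + antisemitic) / daily_tweets
print(f"{share:.4%}")  # roughly 0.0024 percent of daily tweets
```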
For readers who bother to do the math, the Times warns that the increases it cites are only the beginning: "Researchers said the increase in hate speech, antisemitic posts and other troubling content had begun before Mr. Musk loosened the service's content rules. That suggested that a further surge could be coming."
So far, however, Musk has retained Twitter's ban on "hateful conduct." Musk cited that rule yesterday, when he suspended music and fashion tycoon Kanye West's account. West, who recently has dismayed fans and business partners alike with a string of antisemitic remarks, had already run afoul of Twitter's rule by tweeting that he was about to go "death con 3 On JEWISH PEOPLE." His latest offense was tweeting a Star of David with a swastika at its center.
West's earlier comment was either a deliberate play on "DEFCON 3," a military term that denotes an "increase in force readiness above that required for normal readiness," or an accidental mangling of that phrase. Either way, the anti-Jewish hostility was hard to miss.
The image that prompted West's suspension likewise posed a bit of a puzzle. "I like Hitler," West told conspiracist Alex Jones on the latter's podcast yesterday. "Hitler has a lot of redeeming qualities." He added that "we got to stop dissing Nazis all the time." It nevertheless seems fair to assume that adding a swastika to a Jewish star was not meant as a compliment.
While Musk said West had "violated our rule against incitement to violence," the policy is broader than that description implies. "You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease," it says. "We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories." Also forbidden: "hateful imagery and display names."
The official rationale for that policy emphasizes that open expressions of bigotry have a chilling effect on Twitter participation:
Twitter's mission is to give everyone the power to create and share ideas and information, and to express their opinions and beliefs without barriers. Free expression is a human right—we believe that everyone has a voice, and the right to use it. Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.
We recognize that if people experience abuse on Twitter, it can jeopardize their ability to express themselves….
We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals or groups with abuse based on their perceived membership in a protected category.
There is obviously a tension between Twitter's commitment to "free expression" and its prohibition of hate speech. But while even the vilest expressions of bigotry are protected by the First Amendment, that does not mean a private company is obligated to allow them in a forum it owns. Twitter has made a business judgment that the cost of letting people talk about how awful Jews are, in terms of alienating users and advertisers, outweighs any benefit from allowing users to "express their opinions and beliefs" without restriction.
Other social media platforms strike a different balance and advertise lighter moderation as a virtue. But all of them have some sort of ground rules, because a completely unfiltered experience is appealing only in theory.
Parler, for example, describes itself as "the premier global free speech app," a refuge for people frustrated by heavy-handed moderation on other platforms. It nevertheless promises to remove "threatening or inciting content." Parler also offers the option of a "'trolling' filter" (mandatory in the Apple version of the app) that is designed to block "personal attacks based on immutable or otherwise irrelevant characteristics such as race, sex, sexual orientation, or religion." Such content, it explains, "often doesn't contribute to a productive conversation, and so we wanted to provide our users with a way to minimize it in their feeds, should they choose to do so."
Musk describes himself as a "free speech absolutist." That approach is mandatory for the government, except when it comes to judicially recognized exceptions such as fraud, defamation, and true threats. But it is not viable for a social media platform that wants to maintain some semblance of a welcoming environment, and it does not describe Musk's actual practices since he took over Twitter.
Musk has invited back inflammatory political figures such as Donald Trump and Marjorie Taylor Greene. He has rescinded Twitter's ban on "COVID-19 misinformation," a fuzzy category that ranged from demonstrably false assertions of fact to arguably or verifiably true statements that were deemed "misleading" or contrary to government advice. At the same time, however, Musk has interpreted Twitter's rule against impersonation as requiring that parody accounts be clearly labeled as such, and he evidently thinks enforcing the ban on "hateful conduct" is important enough to justify banishing a celebrity with a huge following.
As far as the critics quoted by the Times are concerned, that stance does not really matter, because loosening any rules is tantamount to abandoning all of them. "Elon Musk sent up the Bat Signal to every kind of racist, misogynist and homophobe that Twitter was open for business," said Imran Ahmed, chief executive of the Center for Countering Digital Hate. "They have reacted accordingly."
If Twitter is indeed flooded by bigots eager to test the platform's boundaries under Musk, they will compound the already daunting challenge of trying to police an enormous amount of content for rule violations. Even before Musk bought Twitter, critics complained that the platform was failing at that task.
Last July, the Anti-Defamation League (ADL) reported that it had "retrieved 1% of all content on Twitter for a 24-hour window twice a week over nine weeks from February 18 to April 21, 2022." After running that sample through its Online Hate Index algorithm, the ADL found 225 "blatantly antisemitic tweets accusing Jewish people of pedophilia, invoking Holocaust denial, and sharing oft-repeated conspiracy theories."
Given the huge size of the sample, that tally suggests either that a minuscule share of Twitter users is openly antisemitic or that the platform was doing a pretty impressive job of blocking or removing attacks on Jews. The ADL had a different take: "Of the reported tweets, Twitter only removed 11, or 5% of the content. Some additional posts have been taken down, presumably by the user, yet 166 tweets of the 225 we initially reported to Twitter remain active on the platform."
Extrapolating from that 1 percent sample of "all content on Twitter," the total number of antisemitic tweets might have been in the neighborhood of 22,000. Is that a lot? The ADL obviously thinks so. But when a platform is dealing with 238 million users generating nearly 200 billion tweets a year, even a highly effective system for detecting Jew hatred is apt to miss many examples. And that's just one application of one content rule.
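That extrapolation can be reproduced directly. This is a sketch that treats the ADL's sampled 24-hour windows as representative of the platform overall, which is an assumption:

```python
# Scale the ADL's 1 percent sample up to the full platform. Assumes the
# sampled 24-hour windows are representative of Twitter overall.
flagged_in_sample = 225  # antisemitic tweets found in the 1% sample
scale_factor = 100       # the sample covered 1 percent of all content

estimated_total = flagged_in_sample * scale_factor
print(estimated_total)  # 22500 -- "in the neighborhood of 22,000"
```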
"Content moderation at scale is impossible to do well," TechDirt's Mike Masnick argues. Given the amount of content, the subjectivity of moderation decisions, and the fact that they are bound to irritate the authors of targeted messages, he says, such systems "will always end up frustrating very large segments of the population."
That observation, Masnick emphasizes, "is not an argument that we should throw up our hands and do nothing" or "an argument that companies can't do better jobs within their own content moderation efforts." Still, he says, it's "a huge problem" that "many people—including many politicians and journalists—seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach."
The post Elon Musk Enforces Twitter's Ban on 'Hateful Conduct' As Critics Predict a Flood of Bigotry appeared first on Reason.com.
]]>Ye, formerly known as Kanye West, was suspended from Twitter yesterday after posting an image of a swastika inside the Star of David. Ye also defended Nazis and said he likes Hitler during a Thursday live chat with Infowars' Alex Jones.
Twitter CEO Elon Musk said the rapper was suspended because "he again violated our rule against incitement to violence."
That's…a stretch.
It's also another revealing moment in the debate about free speech on Twitter.
I think Ken "Popehat" White says it best: Musk is certainly free to kick off Ye—"just like [he] can boot ANYONE for any reason or none. He owns the platform. That's HIS free speech and free association." Any suggestion that he shouldn't be allowed to give Ye the boot or that this somehow infringes on the rapper's rights is silly. But the move makes even more laughable the idea that Musk is some sort of stalwart defender of free speech.
…his cultists will still eat it up and call him a free speech hero while tying themselves in knots saying "WELL ACTUALLY it's a normal free speech rule that parody has to be EXPLICITLY LABELED AS PARODY IN THE TITLE" and groveling nonsense like that. Gross…..
— Popehat (@Popehat) December 2, 2022
By no reasonable definition was Ye "inciting violence," even if what he shared was really gross. Musk is doing here what so many people complained about Twitter doing in the pre-Musk days: uncomfortably shoving speech that is offensive, ugly, or otherwise undesired into a terms-of-service violation that doesn't really fit.
It's understandable why Twitter might not want to allow antisemitism, pro-Nazi content, or speech reflecting other forms of bigotry. I think there's value in letting people show who they really are and in fighting bad speech with more speech rather than bans. But advertisers and a lot of other folks feel differently.
Musk's move here is more in line with what mainstream observers seem to want with regard to content moderation on Twitter than with the free speech absolutism that both his critics and his supporters attribute to him. It's also unsurprising, given the realities and pressures of running a popular, ad-dependent social platform.
In reality, Musk has made many moves contrary to free speech absolutism since taking over Twitter.
Yes, Musk did say that former President Donald Trump could return to Twitter. And he ended a policy pertaining to COVID-19 misinformation. But he's also cracked down on parody accounts and said anyone doing parody must label it as such not only in their bio but in their Twitter handle. He's kicked off people for parodying him, and for a number of other questionable reasons. Now he's suspended Ye on a very broad reading of incitement to violence claims.
("If anyone can get inside his head, I'd love to hear it," Corbin Barthold of TechFreedom told The New York Times recently. "He seems to shift from free-speech absolutism until he decides he doesn't like something.")
People can argue over whether Musk's various moderation moves are reasonable for a major, mainstream social media site to make. But they are definitely not the moves of an unwavering defender of free speech. Rather, they reflect trade-offs between a commitment to free speech and to other values and concerns.
For all the billionaire's bravado about doing things differently, it turns out Musk's Twitter is…a lot like old Twitter.
Freedom's Furies. A new book by Timothy Sandefur looks at the intellectual and political forces that shaped the thinking of Ayn Rand, Isabel Paterson, and Rose Wilder Lane and how these three women in turn shaped the U.S. liberty movement. Earlier this week, I participated in a panel discussion of the book at the Cato Institute, alongside Sandefur, Cato's Paul Meany and Kat Murti, and longtime Libertarian Party activist Carla Howell. You can watch the discussion here.
President Joe Biden's student loan debt forgiveness plan will face Supreme Court review. The court announced on Thursday that it would hear a case concerning the legality of the Biden administration's plan to waive a lot of student loan debt. The court said it would expedite its review, holding oral arguments in February.
So far, several lower courts have hit pause on the administration's plan. A U.S. district judge in Texas declared it unconstitutional, and the U.S. Court of Appeals for the 5th Circuit recently denied a request to halt that ruling while the administration's appeal plays out.
Reason's 2022 Webathon is still underway. Here's a thread from Senior Editor Stephanie Slade about reasons to support our work (and how appreciative we are of those who do!):
I'm grateful almost beyond belief to get to work for @reason, alongside amazing friends and colleagues doing excellent and important work like this: https://t.co/MqmYAGpnxU
— (Stephanie) Slade (@sladesr) December 1, 2022
Need more reasons? Nick Gillespie and Zach Weissmueller talked yesterday to Editor in Chief Katherine Mangu-Ward, as well as me, Robby Soave, Billy Binion, and Reason TV's Meredith and Austin Bragg about the work we've been doing at Reason that we couldn't do anywhere else. Check it out below. Go here to donate.
BREAKING: The 11th Circuit Court of Appeals has DISMISSED Donald Trump's lawsuit over the FBI's Mar-a-Lago search. https://t.co/VL0mA59DIQ
— Kyle Cheney (@kyledcheney) December 1, 2022
• James Brennand, the San Antonio cop who shot a teenager eating a burger in a McDonald's parking lot (leaving the teen in the hospital for two months), has been indicted on an attempted murder charge.
• "In a blistering 30-page opinion, a federal judge ordered sanctions against the attorneys of Kari Lake and Mark Finchem in their lawsuit against voting machines, hoping to deter 'similarly baseless suits in the future,'" reports the Arizona Republic.
• A majority of Republicans think the 2022 elections were "free and fair," according to a new poll.
• Amazon will not cave to activist pressure and remove the book Hebrews to Negroes: Wake Up Black America, which came to people's attention after Kyrie Irving of the Brooklyn Nets promoted it.
• "People who haven't met [Ron DeSantis] think he's a hot commodity. People who have met him aren't so sure," writes Mark Leibovich.
• Axios has a good overview of some of the anti-tech lawsuits raging right now.
• China faces a difficult road ahead as it eases COVID-19 restrictions. Years of lockdown and other COVID-19 control measures may mean fewer people got the virus, but they also mean less natural immunity in a country where the vaccines are also weaker than those used here and where many elderly people are not vaccinated.
• "The widow of a popular Kansas City-area musician has filed a lawsuit accusing Anthony Fauci and the National Institutes of Health of indirectly causing her husband's death from COVID-19 by funding coronavirus experimentation at a lab in China," reports The Kansas City Star.
• At The Daily Beast, a good op-ed from First Amendment lawyer Adam Sieff on the dangers of internet censorship.
• CNN has canceled all live programming on its sister network HLN. "HLN crime programming will move under the WBD Networks, led by Kathleen Finch, and will be merged with the ID cable channel," the New York Post reports.
The post Kanye's Suspension Shows Musk Twitter Might Look a Lot Like…Old Twitter appeared first on Reason.com.
]]>