The same Food and Drug Administration (FDA) that massively screwed up COVID-19 testing now wants to apply its vast bureaucratic acumen to all other laboratory developed tests (LDTs). By insisting on its recondite approval procedures, the FDA at the beginning of the pandemic stymied the rollout of COVID-19 tests developed by numerous academic and private laboratories. In contrast, public health authorities in South Korea greenlighted an effective COVID-19 test just one week after asking representatives from 20 private medical companies to produce such tests, and approved many more in the weeks that followed.
LDTs are diagnostic in vitro tests for clinical use that are designed, manufactured, and performed by individual laboratories. They can diagnose illnesses and guide treatments by detecting relevant biomarkers in saliva, blood, or tissues; the tests can identify small molecules, proteins, RNA, DNA, cells, and pathogens. For example, some assess the risks of developing Alzheimer's disease or guide the treatment of breast cancer.
The FDA now wants to regulate these tests as medical devices that must undergo premarket agency vetting before clinicians and patients are allowed to use them. The FDA estimates that between 600 and 2,400 laboratories currently offer as many as 40,000 to 160,000 tests. Overall, some 3.3 billion in vitro tests are administered to patients annually.
Out of billions of tests given, how often do laboratory developed tests appear to cause harm? In its submissions, the FDA justifies this burdensome oversight by citing problematic medical device reports and unconfirmed "allegations" for a grand total of nine and four different tests, respectively, between 2009 and 2023. The remaining examples cited by the FDA are tests that had actually been submitted to the agency for analysis and were subsequently rejected or revised as recommended.
A 2023 study in the American Journal of Clinical Pathology analyzed how frequently these tests were deployed in the University of Utah health system. The researchers reported that of the more than 3 million lab tests ordered in 2021, 94 percent had already been approved by the FDA. Only 4 percent of the lab tests were LDTs. They believe their figures are similar across the U.S. health care landscape.
The FDA has received strong and widespread pushback from laboratories and clinicians. In a letter to the agency, American Hospital Association Executive Vice President Stacey Hughes wrote of the proposed regulations that "the unfortunate outcome likely would be the decline in the rate of clinical innovation, which would negatively impact the U.S.' ability to keep our health care system at the forefront of discovery, provide quality care to patients, and respond quickly to emerging public health risks."
The Biden administration is aiming to have the regulations finalized in April. However, MedTech Dive reports, analysts at the investment bank TD Cowen suggested that it remains "unclear if and when FDA will finalize the rule as it has faced considerable opposition and would likely be challenged in court."
The post FDA Aims To Stifle Medical Innovation Again appeared first on Reason.com.
Earlier this month, the Biden White House issued a statement joining both the World Anti-Doping Agency (WADA) and the International Olympic Committee (IOC) in expressing "its deep concerns over…the planned 'Enhanced Games' without anti-doping requirements." Are such concerns really warranted? No.
"Why not solve the future problem of gene doping and the current problem of steroid use in professional sports by creating two kinds of sports leagues?" I asked way back in 2005. "One would be free of genetic and pharmacologic enhancements—call them the Natural Leagues. The other would allow players to use gene fixes and other enhancements—call them the Enhanced Leagues."
That is the salutary goal of the Enhanced Games, in which competing athletes will be able to use enhancements to improve their performances. "We embrace the inclusion of science in sports, and we fundamentally believe that the choice to use enhancements is a personal one," explain the organizers. "Sports can be safer without drug testing."
In their response to the White House, the organizers of the Enhanced Games point to a WADA-backed 2017 study that reported that nearly 44 percent of elite athletes had used performance enhancements in the past year. Their goal is to frankly acknowledge that reality and enable elite athletes to choose the venues (enhanced or natural) in which they would like to compete.
The organizers of the Enhanced Games additionally suggest that the data that they gather on performance enhancements will ultimately help the IOC to more effectively maintain "natural" Olympic competition.
In that response, Aron D'Souza, the president of the Enhanced Games, said:
We urge President Biden to engage with the Enhanced Games on what is an inevitable evolution in sports. As a champion of science, President Biden has the opportunity to help the 44% of athletes who have used performance enhancements to come out and safely compete. Only with a formal partnership between the IOC and Enhanced Games, can Olympics drug testing finally be effective and ensure the integrity of their games.
During a Zoom press conference earlier this week, D'Souza made it clear that evaluating the effectiveness of performance enhancements used in the competition will also advance transhumanist aims such as increasing healthy longevity. "The Enhanced Games, with our medical oversight and screening apparatus, will create this unique data which will benefit not only the tested sports community but also Longevity Science in general as many compounds can be repurposed to combat illnesses such as frailty which are endemic in modern society," said Michael Sagner, the scientific adviser for Enhanced Games.
Besides welcoming performance enhancements, salient differences between the Olympic Games and the Enhanced Games include the fact that the latter is a privately funded for-profit operation and that participating athletes are paid.
During the Zoom session, D'Souza said that many more details about the Enhanced Games, including a major broadcast deal, will be announced soon.
The post Enhanced Games Head Rebuts Biden: Performance Enhancements in Sports Are 'Inevitable' appeared first on Reason.com.
]]>"U.S. maternal deaths keep rising," reported NPR last year. PBS similarly observed, "U.S. maternal deaths more than doubled over 20 years." CNN also reported, "US maternal death rate rose sharply in 2021, [Centers for Disease Control and Prevention (CDC)] data shows, and experts worry the problem is getting worse."
The increase in maternal deaths is a statistical illusion, argues a new study just published in the American Journal of Obstetrics & Gynecology. "Our study, which identified maternal deaths using a definition-based methodology, shows stable rates of maternal mortality in the United States between the 1999–2002 and 2018–2021 periods," conclude the authors. That's wonderful news, but what accounts for the headlines that cited a steep rise in maternal deaths?
The researchers note that maternal deaths began to rise in 2003, when a pregnancy check box was added to U.S. death certificates. Consequently, if a woman who was pregnant died in a car crash, from heart disease, or from cancer, the box was marked and her death was counted as a maternal death in the CDC's National Vital Statistics System. This statistical misclassification is what has largely produced the reported steep rise in maternal deaths. As the press release accompanying the new study explains:
The CDC method showed maternal death rates of 9.65 per 100,000 live births in the 1999-2002 period and 23.6 in the 2018-2021 period, while the alternative method calculated death rates of 10.2 and 10.4 per 100,000 live births, respectively. These startling statistics discount the previously held belief that the United States maternal death rates have been increasing.
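A quick back-of-the-envelope comparison of those quoted rates shows why the two methods tell such different stories. This is a minimal illustrative sketch in Python using only the figures above, not a reanalysis of the study's data:

```python
# Maternal deaths per 100,000 live births, as quoted in the press release above
cdc_early, cdc_late = 9.65, 23.6   # CDC checkbox-based method, 1999-2002 vs. 2018-2021
alt_early, alt_late = 10.2, 10.4   # definition-based method, same periods

cdc_change = (cdc_late / cdc_early - 1) * 100   # roughly a 145 percent apparent increase
alt_change = (alt_late / alt_early - 1) * 100   # roughly a 2 percent increase, essentially flat

print(f"CDC method: +{cdc_change:.0f}%; definition-based method: +{alt_change:.0f}%")
```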
In even better news, the researchers found evidence that health care during pregnancy and after delivery, rather than worsening, has actually improved substantially in many areas. In addition, they found a slight narrowing of differences in maternal death rates by race.
The post CDC Vastly Overestimated U.S. Maternal Death Rates, Says New Study appeared first on Reason.com.
Our secular society has replaced the old arbiters of truth, priests and potentates, with science. One simple definition of the scientific method is that it is the self-correcting process of objectively establishing facts through testing and experimentation. Instead of appealing to the wisdom of divinely ordained scriptures or the pronouncements of princes, sophisticated moderns turn to the peer-reviewed scientific literature in search of reliable information on health, engineering, and, yes, public policy. Therefore, as we saw all too well during the late pandemic, politicians, public health practitioners, potion promoters, and pundits more or less all claim to "follow the science." (One notable recent exception is the ruling in a case involving in vitro fertilization by the chief judge of the Alabama Supreme Court.)
So is the recent retraction of three articles by the scientific journal Health Services Research and Managerial Epidemiology an example of the self-correcting processes of science or something less noble? After all, these articles were prominently cited as evidence in a federal court case that will now be heard by the U.S. Supreme Court later this month.
Sage Publications retracted two articles that suggested that the use of the abortion pill mifepristone significantly increased post-abortion emergency room use. Sage also retracted a third article from the same journal that reported that nearly half of Florida physicians the researchers identified as providing abortions had at least one malpractice claim, public complaint, disciplinary action, or criminal charge. (As background consider that a Florida law firm specializing in medical malpractice reports, "Physicians who provide care for women, particularly pregnant women, are the number one practice area for med mal lawsuits. Of OB-GYNs and related practitioners, 85 percent reported that they have been sued at some point during their career.")
In his April 7, 2023, decision, U.S. District Court for the Northern District of Texas Judge Matthew Kacsmaryk "followed the science" by citing the now-retracted articles when he overturned the U.S. Food and Drug Administration's (FDA) approval back in 2000 of the drug as safe and efficacious. Some of the studies have now been cited and their retraction decried in amicus briefs filed with the Supreme Court.
So why were the articles retracted? And why now? The Sage retraction notice states that "we made this decision with the journal's editor because of undeclared conflicts of interest and after expert reviewers found that the studies demonstrate a lack of scientific rigor that invalidates or renders unreliable the authors' conclusions."
With respect to conflicts of interest, the Sage note observes that all but one of the authors of the studies were affiliated with various "pro-life organizations that explicitly support judicial action to restrict access to mifepristone." This is entirely true. What is puzzling is that these affiliations are acknowledged in all of the articles as well as included in the fairly extensive professional biographies at the conclusion of each article. The authors did, however, declare for each article that there were "no potential conflicts of interest with respect to the research, authorship, and/or publication of this article."
On the other hand, the authors did disclose in two articles the "receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Charlotte Lozier Institute." According to the mission statement of the Lozier Institute, it "advises and leads the pro-life movement with groundbreaking scientific, statistical, and medical research. We leverage this research to educate policymakers, the media, and the public on the value of life from fertilization to natural death."
In its retraction note Sage argues that the authors should have explicitly declared a conflict of interest—not just mentioned the Lozier Institute's funding—had they properly followed the relevant guidelines issued by the International Committee of Medical Journal Editors (ICMJE). The ICMJE disclosure form does ask all authors "to disclose all relationships/activities/interests listed below that are related to the content of your manuscript. 'Related' means any relation with for-profit or not-for-profit third parties whose interests may be affected by the content of the manuscript."
Given that two of these articles challenged the safety of FDA-approved abortion pills, it's pretty clear that the interests of the anti-abortion Lozier Institute would be affected by their content. It's also pretty clear that the editor of the journal could not have been deceived about the institutional affiliations, financial support, and interests of the authors.
The Sage retraction note also reports that "all three articles were originally reviewed by a researcher who was also affiliated with the Charlotte Lozier Institute at the time of the review." It is worth noting that the Sage guidelines ask reviewers to "carefully consider whether you have any potential conflicts of interest [emphasis in original] relating to the paper before undertaking the review. As an example, you should not be reviewing the paper of anyone you have worked with, taught, and/or published work with in the past." Sage points to the Committee on Publication Ethics guidelines that advise reviewers to "declare all potential competing, or conflicting, interests" specifically noting that reviewers should not agree to review if they are employed by the same institution, or have been recent mentors, mentees, close collaborators or joint grant holders.
The authors' response over at the Lozier Institute counters that Sage uses "'double-anonymized' review, meaning neither the author nor the reviewer knows each other's identities." But just how credible is it that the Lozier Institute-affiliated peer reviewer would not have recognized the provenance of the three articles?
What about the "lack of scientific rigor" that post-publication peer review of the articles found? The Sage retraction reports that two independent subject matter experts "identified fundamental problems with the study design and methodology, unjustified or incorrect factual assumptions, material errors in the authors' analysis of the data, and misleading presentations of the data." The retraction does not detail the experts' findings and their reasoning.
Citing a private letter from the Sage journal to the authors, Science reports:
Sage specifies that they "artificially inflat[ed] the number of adverse events" by counting multiple visits by the same patient; that "conflating" ER visits with adverse events without examining diagnoses or treatments "may not be a valid or rigorous approach"; and that one paper's conclusion that the miscoding of incomplete abortions as miscarriages caused serious adverse events was "inaccurate and unsupported by the data."
Some insight into the specific concerns of the experts engaged by Sage can be gleaned from the authors' response to an earlier expression of concern issued last year. It is noteworthy that the expression of concern appeared only after the research was cited by Judge Kacsmaryk as evidence for overturning the FDA's approval of the abortifacient.
Will the retractions affect the Supreme Court case? "The whole basis of claims of danger from mifepristone to women sits on these papers. There's nothing else in the literature," says New York University bioethicist Arthur Caplan in Science. "If these papers fall, then the argument that upper courts are reviewing falls apart."
Is Caplan correct that there is nothing else in the literature supporting the epidemiological claims about emergency room visits resulting from the use of mifepristone made by the Lozier Institute-affiliated researchers? Basically, yes.
A quick check of Google Scholar finds that the retracted articles are thinly cited and mostly by other researchers who are associated with pro-life organizations. One exception citing their articles is a 2023 Canadian study evaluating the adverse events from using a combination of abortion medications. But even that study reported: "Although rare, short-term adverse events are more likely after mifepristone–misoprostol IA [induced abortion] than procedural IA [induced abortion], especially for less serious adverse outcomes."
When comparing women using abortion pills with those undergoing outpatient procedural abortions performed at nine weeks of pregnancy or earlier, the risk of serious adverse events was essentially the same (3.4 versus 3.3 per 1,000). In contrast, an earlier Canadian study published in the New England Journal of Medicine in 2022 evaluating the safety of mifepristone found that the incidence of abortion-related adverse events and complications remained stable as the proportion of abortions provided by medication increased rapidly.
Overall, most studies find that both medication and procedural abortions are safe for the women choosing to end their pregnancies. For example, a 2018 study in BMC Medicine estimating the major incident rate related to abortion care found: "The major incident rate for abortion (0.1%) is lower than the published rates for pregnancy (1.4%), as well as other common procedures such as colonoscopy (0.2%), wisdom tooth removal (1.0%), and tonsillectomy (1.4%). Abortion care is, thus, safer than many other unregulated outpatient procedures."
A 2015 study in Obstetrics & Gynecology reported: "The major complication rate was 0.23%: 0.31% for medication abortion, 0.16% for first-trimester aspiration abortion, and 0.41% for second-trimester or later procedures." And a 2024 study in Nature Medicine evaluating the safety and effectiveness of telehealth medication abortions found that "in total, 0.25% of patients experienced a serious abortion-related adverse event."
In its 2022 evidence-based evaluation of 99 medical abortion studies, the non-profit Cochrane Collaboration concluded, "Medical abortion is a safe and effective way to terminate pregnancy in the first three months." The Cochrane analysis added, "Mifepristone combined with misoprostol is more effective than using these medications on their own."
Toting up all of the cases reported to the FDA since 2000 finds that around 0.07 percent of the 5.9 million American women who have used medications to terminate their pregnancies have experienced any adverse events from taking them. Not surprisingly, researchers associated with various pro-life organizations, including the Lozier Institute, challenged the FDA figures in a 2021 article in (where else?) Health Services Research and Managerial Epidemiology.
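For a rough sense of scale, that reported rate works out to only a few thousand women with any reported adverse event. This is a simple illustrative calculation from the two figures above, not an independent tally of FDA reports:

```python
# Rough scale check using the figures cited above
users = 5_900_000       # American women who have used medication abortion since 2000
adverse_rate = 0.0007   # roughly 0.07 percent reporting any adverse event

print(f"~{users * adverse_rate:,.0f} women with any reported adverse event out of {users:,}")
# prints: ~4,130 women with any reported adverse event out of 5,900,000
```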
So how could the Lozier Institute researchers come to conclusions contrary to the results found by so many other researchers with respect to the safety of medication abortions? The broken science of epidemiology is perhaps to blame. Epidemiologists anxious to make a significant finding can unconsciously or consciously torture nearly any set of observational data into confessing to whatever correlations happen to confirm the researchers' hypotheses.
Stanford statistician John Ioannidis surveys the dire state of epidemiology in his seminal 2005 PLoS Medicine article, "Why Most Published Research Findings Are False." "Any claim coming from an observational study is most likely to be wrong," asserted National Institute of Statistical Sciences researchers Stanley Young and Alan Karr in their 2011 article in the journal Significance. Young has estimated that only 5 to 10 percent of observational studies can be replicated. Of course, all of the researchers seeking to analyze the side effects of mifepristone are parsing observational data with respect to their prevalence and severity. The upshot is that the epidemiological literature is so cluttered with flawed studies that anyone can find some that confirm what they already believe and so assert that they are just "following the science."
The disclosure justifications cited by Sage for retracting the articles are largely spurious since the researchers behind the three retracted studies clearly did not hide their pro-life institutional affiliations. More problematically, the conflicted peer reviewer should have declined to evaluate the studies. And whatever their methodological failings, the three outlier articles from an obscure journal would most likely never have attracted extra scrutiny except for being cited as "follow the science" evidence to challenge the FDA's approval of a widely used abortifacient.
At the center of the case before the Supreme Court later this month is the proposition that any odd federal judge who claims to be "following the science" can overrule the decisions of the FDA, which also claims to be "following the science." Will the Supreme Court now "follow the science" and ignore the retracted articles when it rules on that issue later this year?
The post Abortion Pill Studies Retracted: Politics or Science? appeared first on Reason.com.
The sorry history of anti-miscegenation and forced sterilization laws in the U.S. provides ample evidence that preemptive government interference in the reproductive decisions of its citizens should be strongly rejected. In a free society, the default should be that individuals are best situated for weighing the costs and benefits, moral and material, with respect to how, when, with whom, and whether they choose to become parents.
The now-infamous Alabama Supreme Court decision earlier this month essentially outlawing the use of in vitro fertilization (IVF) by would-be parents highlights all too well the consequences of unwarranted government meddling in reproductive decisions. At its most basic, IVF is a treatment for infertility involving the fertilization of eggs in a petri dish, with the goal of transferring them afterward to a woman's womb, where they have a chance to implant and hopefully develop into a healthy baby. Since the implantation of any specific embryo is far from guaranteed, IVF often involves creating several embryos, stored in liquid nitrogen, that can be made available for later attempts at achieving pregnancy.
Some 12 to 15 percent of couples in the U.S. experience infertility. Fortunately, since 1981 many infertile folks have been able to avail themselves of IVF and assisted reproduction techniques, with the result that more than 1.2 million Americans have been born this way. Currently, about 2 percent of all babies in the U.S. are born through assisted reproduction. A 2023 Pew Research poll reported that "four-in-ten adults (42%) say they have used fertility treatments or personally know someone who has." Given the wide public acceptance and ubiquity of IVF, it is no surprise that a new Axios/Ipsos poll finds that two-thirds of Americans oppose the Alabama court ruling that frozen IVF embryos are the equivalent of born children.
The moral intuition implied by these poll results, that embryos are not people, reflects what research has revealed about the fraught and complex biology of uterine implantation and pregnancy. In both IVF and natural conception, most embryos will not become babies. Research estimates that between 50 and 70 percent of naturally conceived embryos do not make it past the first trimester. In other words, one foreseen consequence of conception through sexual intercourse is the likely loss of numerous embryos.
In his 2012 Journal of Medical Ethics article, University of Illinois Chicago philosopher Timothy Murphy argued that the moral good of the birth of a child counterbalances the unwanted but nevertheless foreseen loss of other embryos in both natural and IVF conception. Again, polling suggests that most Americans endorse this moral reasoning.
In another 2012 article speculating on the metaphysical ramifications of endowing embryos with souls, Murphy basically recapitulates the line of reasoning in my 2004 article asking, "Is Heaven Populated Chiefly with the Souls of Embryos?" There I suggest that "perhaps 40 percent of all the residents of Heaven were never born, never developed brains, and never had thoughts, emotions, experiences, hopes, dreams, or desires."
Murphy similarly concludes, "Since more human zygotes and embryos are lost than survive to birth, conferral of personhood on them would mean—for those believing in personal immortality—that these persons constitute the majority of people living immortally despite having had only the shortest of earthly lives."
Metaphysical conjectures aside, former President Donald Trump clearly knows where most Americans stand on IVF. "We want to make it easier for mothers and fathers to have babies, not harder! That includes supporting the availability of fertility treatments like IVF in every State in America," he posted on Truth Social. He's right.
Now, the 124 denizens of the House of Representatives (all Republicans) who cosponsored the Life at Conception Act just over a month ago are scrambling to explain that, no, they did not really mean that every frozen IVF embryo is a "human person" entitled to the equal protection of the right to life. As a butt-covering move, Rep. Nancy Mace (R–S.C.) is circulating a House resolution "expressing support for continued access to fertility care and assisted reproduction technology, such as in vitro fertilization."
More substantially, Sen. Tammy Duckworth (D–Ill.) is pushing for the adoption of the Right to Build Families Act that states, "No State, or official or employee of a State acting in the scope of such appointment or employment, may prohibit or unreasonably limit…any individual from accessing assisted reproductive technology."
The post Parents, Not the Government, Should Make IVF Decisions appeared first on Reason.com.
Eight years after the Paris climate agreement, where do we stand on global emissions? The title of a new United Nations Environment Programme report sums the situation up: Broken Record: Temperatures hit new highs, yet world fails to cut emissions (again).
The Paris Agreement aims to hold the global average temperature increase to well below 2 degrees Celsius relative to preindustrial levels. Consequently, the agreement recognizes that "deep reductions in global emissions will be required in order to achieve the ultimate objective of the Convention." Each country issues a voluntary nationally determined contribution that outlines its plans and promises to address the problem of man-made climate change. The agreement noted that adding up the projected greenhouse gas emissions in all of the 2015 promises would result in 55 gigatons (billion metric tons) of emissions in 2030, far more than would be consistent with meeting the temperature goal.
Most countries underdelivered in setting their emissions goals. And why not? Their "commitments" are voluntary and there is no mechanism for enforcing them in any case.
The United Nations' 28th Climate Change Conference (COP28) convened in early December with representatives from nearly 200 countries meeting in Dubai, United Arab Emirates. Parties to the 2015 agreement were supposed to take stock of its implementation.
It's way off track. Global greenhouse gas emissions are continuing to rise. In 2015, the world emitted 50.1 gigatons of greenhouse gases. By 2022, that had risen to 53.8 gigatons. Emissions from the main greenhouse gas, carbon dioxide emitted from burning fossil fuels and deforestation, rose from 35.5 gigatons in 2015 to a record high of 40.9 gigatons in 2023. A study in Science published in December also reported that the last time atmospheric carbon dioxide levels were this high was 14 million years ago.
While global temperatures vary depending on natural phenomena (such as the El Niño-Southern Oscillation in the Pacific Ocean), the overall trend remains upward. Given a boost by a strong El Niño, 2023 will be the hottest year in the instrumental record at more than 1.4 degrees Celsius above the preindustrial baseline.
COP28's Global Stocktake asserts that "deep, rapid and sustained reductions in global greenhouse gas emissions of 43 per cent by 2030" will be required to achieve the Paris Agreement goals. This would mean cutting global emissions by about 7 percent per year for the next six years.
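As a rough check on that arithmetic: dividing the required cut evenly across the remaining years gives roughly the 7 percent figure, while compounding the cuts implies an even steeper annual pace. This is a simple illustrative calculation, not the stocktake's own methodology:

```python
# Rough check of the "about 7 percent per year" pace implied by a 43 percent cut by 2030
total_cut = 0.43   # reduction in global emissions called for by 2030
years = 6          # roughly six years remaining

linear_per_year = total_cut / years * 100                        # ~7.2% of today's emissions per year
compound_per_year = (1 - (1 - total_cut) ** (1 / years)) * 100   # ~8.9% per year if cuts compound

print(f"Linear: ~{linear_per_year:.1f}% per year; compounded: ~{compound_per_year:.1f}% per year")
```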
To get a sense of just how difficult pursuing this goal would be, consider that the economic disruptions of the COVID-19 pandemic in 2020 yielded a record drop of only 4.4 percent in global greenhouse gas emissions—along with a 3.4 percent fall in global gross domestic product.
Man-made climate change could become a significant problem for humanity over the course of this century, so it is reasonable to develop and deploy low- and no-emissions technologies. But with still-rising emissions, the COP28's Global Stocktake bottom line is correct: "Parties are not yet collectively on track towards achieving the purpose of the Paris Agreement and its long-term goals." And they are not likely to be any time soon.
The post No Progress on Global Emissions 8 Years After Paris Climate Agreement appeared first on Reason.com.
Japanese researchers announced last year that healthy fertile mice had been born using eggs created from male mice's tail-tip cells. The male-derived eggs were fertilized with regular sperm, thus producing pups with two fathers. Reproductive biologist Katsuhiko Hayashi, who led the work at Kyushu University, thinks that it will be technically possible to create a viable human egg from a male skin cell within a decade, according to The Guardian.
This achievement builds on earlier work: in 2017, another team of Japanese researchers created mouse eggs from tail-tip cells that resulted in the birth of healthy offspring. In 2021, yet another Japanese research group used mouse stem cells to create sperm that produced healthy, fertile offspring.
Now researchers at private biotech companies like Conception Bio and Gameto are racing to see if they can develop this in vitro gametogenesis (IVG) technology as a way to safely enable post-menopausal women, couples experiencing infertility, and same-sex couples to bear biologically related children. IVG might even one day allow solo reproduction, in which a single man could produce both sperm and eggs that, when combined, would give him biological children of his own.
Bear in mind that only seven healthy mouse pups emerged from the 630 two-dad embryos transferred by the Japanese researchers. So significant technical hurdles must be addressed before IVG can be safely used to give birth to human babies. But some folks object to pursuing human procreation using IVG even after it becomes as safe as conventional and in vitro fertilization (IVF) births.
IVG is "a perversion of the sanctity of procreation as a fundamental aspect of human life," said Ben Hurlbut, a bioethicist at Arizona State University, in USA Today. He added, "It makes it into an industrial project that responds to and also inspires and cultivates the desires of their future customers." Marcy Darnovsky, head of the left-wing Center for Genetics and Society, warned on NPRthat IVG "could take us into a kind of Gattaca world." (She was referencing the 1997 sci-fi movie in which a eugenicist state is ruled by people born with genetically enhanced abilities.) Over at the conservative Federalist, Jordan Boyd asserts that by developing IVG, "the global fertility industry seeks to erase women from procreation one manufactured egg at a time."
The National Academy of Sciences addresses many of these ethical concerns in its In Vitro-Derived Human Gametes conference proceedings report released in October 2023. That report summarizes the results of a conference on the topic convened by the NAS in April 2023.
Far from "erasing women," IVG will instead enable otherwise infertile women to produce as many eggs as they desire without having to endure treatments like ovarian stimulation in the hope of yielding sufficient eggs to succeed at conventional IVF.
Hurlbut is right that many people regard procreation as a "fundamental aspect of human life." This would be especially true for the 9 percent of men and 11 percent of women of reproductive age in the United States who have experienced fertility problems. Then there are people past conventional reproductive age and same-sex couples who would like to have biologically related children. Far from being an "industrial project," the rollout of safe IVG would fulfill the desires of these future customers to build their families.
What about Gattaca fears? The report does acknowledge that "combining IVG with polygenic risk screening could revolutionize the ability to select embryos." Polygenic risk screening (PRS) totes up the genetic variants that increase each embryo's chances of developing a particular disease or trait. This is similar to the already widely accepted practice of pre-implantation genetic diagnosis during IVF, in which parents test and select embryos in order to avoid deleterious heritable conditions. PRS would increase would-be parents' ability to select a preferred combination of traits from among many more embryos.
Rather than limit the use of PRS, Stanford University bioethicist Hank Greely suggested that, "in general, it is better to rely on parental choices to make decisions about how people wish to create families." This follows from the reasonable presumption that parents generally seek to provide the best lives for their potential progeny. The sorry history of eugenics laws in the U.S., where tens of thousands were forcibly sterilized during the 20th century, should make anyone cautious about government meddling in people's reproductive choices, including the use of safe IVG. As University of California, Irvine, law professor Michele Goodwin correctly noted, "Where law has intervened over time in matters of reproduction, it has served to undermine civil liberties and civil rights."
The post What if Men Could Produce Their Own Eggs? appeared first on Reason.com.
Your Face Belongs to Us: A Secretive Startup's Quest To End Privacy as We Know It, by Kashmir Hill, Random House, 352 pages, $28.99
"Do I want to live in a society where people can be identified secretly and at a distance by the government?" asks Alvaro Bedoya. "I do not, and I think I am not alone in that."
Bedoya, a member of the Federal Trade Commission, says those words in New York Times technology reporter Kashmir Hill's compelling new book, Your Face Belongs to Us. As Hill makes clear, we are headed toward the very world that Bedoya fears.
This book traces the longer history of attempts to deploy accurate and pervasive facial recognition technology, but it chiefly focuses on the quixotic rise of Clearview AI. Hill first learned of this company's existence in November 2019, when someone leaked a legal memo to her in which the mysterious company claimed it could identify nearly anyone on the planet based only on a snapshot of their face.
Hill spent several months trying to talk with Clearview AI's founders and investors, and they in turn tried to dodge her inquiries. Ultimately, she tracked down an early investor, David Scalzo, who reluctantly began to tell her about the company. After she suggested the app would bring an end to anonymity, Scalzo replied: "I've come to the conclusion that because information constantly increases, there's never going to be privacy. You can't ban technology. Sure, it might lead to a dystopian future or something, but you can't ban it." He pointed out that law enforcement loves Clearview AI's facial recognition app.
As Hill documents, the company was founded by a trio of rather sketchy characters. The chief technological brain is an Australian entrepreneur named Hoan Ton-That. His initial partners included the New York political fixer Richard Schwartz and the notorious right-wing edgelord and troll Charles Johnson.
Mesmerized by the tech ferment of Silicon Valley, the precocious coder Ton-That moved there at age 19 and quickly occupied himself creating various apps, including, in 2009, the hated video-sharing ViddyHo "worm," which hijacked Google Talk contact lists to spew out streams of instant messages urging recipients to click on its video offerings. After the uproar over ViddyHo, Ton-That kept a lower profile working on various other apps, eventually decamping to New York City in 2016.
In the meantime, Ton-That began toying online with alt-right and MAGA conceits, an interest that led him to Johnson. The two attended the 2016 Republican National Convention in Cleveland, where Donald Trump was nominated. At that convention, Johnson briefly introduced Ton-That to internet financier Peter Thiel, who would later be an angel investor in what became Clearview AI. (For what it's worth, Ton-That now says he regrets his earlier alt-right dalliances.)
Ton-That and Schwartz eventually cut Johnson out of the company. As revenge for his ouster, Johnson gave Hill access to tons of internal emails and other materials that illuminate the company's evolution into the biggest threat to our privacy yet developed.
"We have developed a revolutionary, web-based intelligence platform for law enforcement to use as a tool to help generate high-quality investigative leads," explains the company's website. "Our platform, powered by facial recognition technology, includes the largest known database of 40+ billion facial images sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources."
***
As Hill documents, the billions of photos in Clearview AI's ever-growing database were scraped without permission from Facebook, TikTok, Instagram, and other social media sites. The company argues that what it is doing is no different from the way Google catalogs links and data for its search engine, except that its catalog is of photographs. The legal memo leaked to Hill was part of the company's defense against numerous lawsuits filed by social media companies and privacy advocates who objected to the data scraping.
Scalzo is right that law enforcement loves the app. In March, Ton-That told the BBC that U.S. police agencies have run nearly a million searches using Clearview AI. Agencies used it to identify suspects in the January 6 Capitol riot, for example. Of course, it does not always finger the right people. Police in New Orleans misused faulty face IDs from Clearview AI's app to arrest and detain an innocent black man.
Some privacy activists argue that facial recognition technologies are racially biased and do not work as well on some groups. But as developers continued to train their algorithms, they mostly fixed that problem; the software's disparities with respect to race, gender, and age are now so negligible as to be statistically insignificant. In testing by the National Institute of Standards and Technology, Hill reports, Clearview AI ranked among the world's most accurate facial recognition companies.
***
To see the possible future of pervasive facial recognition, Hill looks at how China and Russia are already using the technology. As part of its "safe city" initiative, Russian authorities in Moscow have installed over 200,000 surveillance cameras. Hill recounts an experiment by the Russian civil liberties activist Anna Kuznetsova, who submitted her photos to a black market data seller with access to the camera network. Two weeks later, she received a 35-page report detailing each time the system identified her face on a surveillance camera. There were 300 sightings in all. The system also accurately predicted where she lived and where she worked. While the data seller was punished, the system remains in place; it is now being used to identify anti-government protesters.
"Every society needs to decide for itself what takes priority," said Kuznetsova, "whether it's security or human rights."
The Chinese government has deployed over 700 million surveillance cameras; in many cases, artificial intelligence analyzes their output in real time. Chinese authorities have used it to police behavior like jaywalking and to monitor racial and religious minorities and political dissidents. Hill reports there is a "red list" of VIPs who are invisible to facial recognition systems. "In China, being unseen is a privilege," she writes.
In August 2023, the Chinese government issued draft regulations that aim to limit the private use of facial recognition technologies; the rules impose no such restrictions on its use for "national security" concerns.
Meanwhile, Iranian authorities are using facial recognition tech to monitor women protesting for civil rights by refusing to wear hijabs in public.
Co-appearance analysis using artificial intelligence allows users to review either live or recorded video and identify all of the other people with whom a person of interest has come into contact. Basically, real-time facial recognition will not only keep track of you; it will identify your friends, family, coreligionists, political allies, business associates, and sexual partners and log when and where and for how long you hung out with them. The San Jose, California–based company Vintra asserts that its co-appearance technology "will rank these interactions, allowing the user to identify the most recurrent relationships and understand potential threats." Perhaps your boss will think your participation in an anti-vaccine rally or visit to a gay bar qualifies as a "potential threat."
What about private use of facial recognition technology? It certainly sounds like a product with potential: Personally, I am terrible at matching names to faces, so incorporating facial recognition into my glasses would be a huge social boon. Hill tests a Clearview AI prototype of augmented reality glasses that can do just that. Alas, the glasses provide viewers access to all the other photos of the person in Clearview AI's database, including snapshots at drunken college parties, at protest rallies, and with former lovers.
"Facial recognition is the perfect tool for oppression," argued Woodrow Hartzog, then a professor of law and computer science at Northeastern University, and Evan Selinger, a philosopher at the Rochester Institute of Technology, back in 2018. They also called it "the most uniquely dangerous surveillance mechanism ever invented." Unlike other biometric identifiers, such as fingerprints and DNA, your face is immediately visible wherever you roam. The upshot of the cheery slogan "your face is your passport" is that authorities don't even have to bother with demanding "your papers, please" to identify and track you.
In September 2023, Human Rights Watch and 119 other civil rights groups from around the world issued a statement calling "on police, other state authorities and private companies to immediately stop using facial recognition for the surveillance of publicly-accessible spaces and for the surveillance of people in migration or asylum contexts." Human Rights Watch added separately that the technology "is simply too dangerous and powerful to be used without negative consequences for human rights….As well as undermining privacy rights, the technology threatens our rights to equality and nondiscrimination, freedom of expression, and freedom of assembly."
The United States is cobbling together what Hill calls a "rickety surveillance state" built on this and other surveillance technologies. "We have only the rickety scaffolding of the Panopticon; it is not yet fully constructed," she observes. "We still have time to decide whether or not we actually want to build it."
The post Is Facial Recognition a Useful Public Safety Tool or Something Sinister? appeared first on Reason.com.
Flailing GOP presidential candidate Gov. Ron DeSantis will apparently repeat any lie that he thinks will endear him to the Trump wing of the Republican Party. One of the more egregious examples is his recent assertion: "Every booster you take, you're more likely to get COVID as a result of it." Total bullshit.
It's not clear whence the governor got his misinformed soundbite, but the internet is filled with similar bogus claims by anti-COVID vaccine YouTube grifters like retired British nurse educator John Campbell. So what is really the case?
The governor's assertion probably stems from a likely deliberate misinterpretation of an August 2023 Centers for Disease Control and Prevention (CDC) risk assessment summarizing the possible efficacy of current COVID vaccines against the then-new COVID BA.2.86 variant. COVID vaccines are formulated to target specific variants of the virus. The most recent booster targets the XBB.1.5 omicron variant. However, the virus is constantly mutating and throwing up new variants that might break through immune protection that has been primed by a booster to fight the XBB.1.5 variant. This is the question that the CDC's August assessment aimed to address with reference to the BA.2.86 variant.
The CDC summary stated:
Based on what CDC knows now, existing tests used to detect and medications used to treat COVID-19 appear to be effective with this variant. BA.2.86 may be more capable of causing infection in people who have previously had COVID-19 or who have received COVID-19 vaccines [emphasis added]. Scientists are evaluating the effectiveness of the forthcoming, updated COVID-19 vaccine. CDC's current assessment is that this updated vaccine will be effective at reducing severe disease and hospitalization. At this point, there is no evidence that this variant is causing more severe illness. That assessment may change as additional scientific data are developed. CDC will share more as we know more.
Anti-vaccine grifters twisted (deliberately?) the highlighted sentence into headlines like: "CDC Finally Admits That Vaccinated Are Now More Susceptible to COVID Than Unvaxxed." It did no such thing.
Read the sentence again: BA.2.86 may be more capable of causing infection in people who have previously had COVID-19 or who have received COVID-19 vaccines.
Clearly, all that it is saying is that the new variant may be capable of evading immune protection induced by either infection or vaccination. In other words, both previously infected and vaccinated people might be susceptible to the new BA.2.86 variant. It does not even come close to saying that vaccinated people are more likely to get COVID.
In response to this online bunkum, the CDC appended a clarification in September to its risk assessment (although why one was needed if people had practiced basic reading skills is beyond me) that notes:
The first risk assessment CDC released on BA.2.86 included the following sentence: "BA.2.86 may be more capable of causing infection in people who have previously had COVID-19 or who have received COVID-19 vaccines." The intent of this sentence was to raise the possibility that BA.2.86 might be more capable of causing infection compared with other variants currently circulating, but this sentence has been misinterpreted by some. Vaccination remains the best available protection against the most severe outcomes of COVID-19, including hospitalization and death. COVID-19 vaccines also reduce the chance of having Long COVID.
Of course, I agree that the CDC massively damaged its credibility during the pandemic, but only the conspiracy-addled will insist that whatever its researchers report now must be always false.
Whatever the CDC's prior sins, it is nevertheless the case that the virus never stops mutating. The newest and increasingly widespread variant has been dubbed JN.1. Virologists think the JN.1 variant may be the most infectious version yet. The good news, however, is that the current booster formulated for the XBB.1.5 variant continues to provide protection against the worst consequences—hospitalization and death—of JN.1 infection.
As a January 12 news article in the Journal of the American Medical Association (JAMA) reports, "Fortunately, laboratory research and rates of COVID-19 hospitalizations and deaths suggest that the XBB.1.5 vaccine still protects against severe illness in the JN.1 era."
"The purpose of vaccination is to decrease the severity of diseases," explained University of Tokyo virologist Kei Sato in JAMA. "Many people think that the purpose of vaccination is to prevent infection, but this is wrong."
It would have been fantastic if the COVID vaccines had offered permanent sterilizing immunity the way that vaccines for measles and polio largely do, but reams of evidence do show that current vaccines significantly protect people from the worst consequences of COVID infections. Let's hope that research on creating a universal COVID vaccine bears fruit sooner rather than later.
In any case, pandering presidential candidates should refrain from passing along anti-vaccine lies to voters.
Disclosure: I was boosted with the XBB.1.5 vaccine in September.
The post DeSantis Repeats Lie That Booster Shots Make You More Likely To Get COVID appeared first on Reason.com.
The European Union's Copernicus Climate Change Service (C3S) reports that 2023 was the hottest year in the instrumental temperature record. That's in part because global temperatures were boosted by the El Niño phenomenon in which the eastern Pacific Ocean surface temperature periodically surges higher.
So how hot was 2023? C3S notes:
- 2023 had a global average temperature of 14.98°C, 0.17°C higher than the previous highest annual value in 2016
- 2023 was 0.60°C warmer than the 1991-2020 average and 1.48°C warmer than the 1850-1900 pre-industrial level
- It is likely that a 12-month period ending in January or February 2024 will exceed 1.5°C above the pre-industrial level
- 2023 marks the first time on record that every day within a year has exceeded 1°C above the 1850-1900 pre-industrial level. Close to 50% of days were more then [sic] 1.5°C warmer than the 1850-1900 level, and two days in November were, for the first time, more than 2°C warmer.
The 2015 Paris Climate Agreement aims to hold "the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C above pre-industrial levels." The global average temperature in 2023 was just below the 1.5 degrees Celsius threshold.
"Not only is 2023 the warmest year on record, it is also the first year with all days over 1°C warmer than the pre-industrial period. Temperatures during 2023 likely exceed those of any period in at least the last 100,000 years," noted Samantha Burgess, deputy director of C3S, in a press release.
The satellite temperature series run by climatologists Roy Spencer and John Christy at the University of Alabama in Huntsville (UAH) also basically concurs, reporting that 2023 is the hottest year in its 45-year record.
"The 2023 annual average global [lower tropospheric temperature] anomaly was +0.51 deg. C above the 1991-2020 mean, easily making 2023 the warmest of the 45-year satellite record. The next-warmest year was +0.39 deg. C in 2016," notes Spencer.
The C3S report observed that atmospheric greenhouse gas concentrations reached the highest levels ever recorded, 419 parts per million for carbon dioxide and 1,902 parts per billion for methane. UAH's Christy cautiously concedes that the "background climate-trend is about +0.1 °C per decade and could represent the warming effect of the extra greenhouse gases that are being added to the atmosphere as human development progresses."
What comes next?
"We expect the strong El Nino in the Pacific to impact the global temperature through 2024. For this reason we are forecasting 2024 to be another record breaking year, with the possibility of temporarily exceeding 1.5C for the first time," observed U.K. Met Office climate scientist Nick Dunstone in the Bournemouth Daily Echo.
The post It's Official: 2023 Was the Hottest Year on Record appeared first on Reason.com.
]]>"America has a life expectancy crisis," asserted a recent headline in The Washington Post. Why a crisis? Because American average life expectancy has been flat and then declining for the past decade or so.
One bit of recent good news: The Centers for Disease Control and Prevention (CDC) reported in November that average life expectancy at birth in 2022 was 77.5 years. While that is down from its 2014 peak of 78.8 years, the CDC notes that this is a post-pandemic increase of 1.1 years from its nadir of 76.4 years in 2021. The increase from 2021 to 2022, according to the CDC, "primarily resulted from decreases in mortality due to COVID-19, heart disease, unintentional injuries, cancer, and homicide. Declines in COVID-19 mortality accounted for approximately 84% of the increase in life expectancy." While the big recent dip in American life expectancy was largely the result of the ravages of the COVID pandemic, the trend over the prior 10 years was basically flat.
The Post article correctly noted that "the United States [was] increasingly falling behind other nations well before the pandemic."
The Post asked numerous members of Congress, including all 100 Senators, what they thought about falling life expectancy. While many replied that it was a serious problem, the article concluded that it "is not a political priority." The Post did acknowledge that "there also is no single strategy to turn it around." Politics being the art of the possible, there is little that politicians can do at this point in biomedical history to significantly increase average life expectancy.
Public health efforts beginning in the late 19th century to provide access to clean water and improved sanitation, improve food safety, and champion widespread vaccination against infectious microbes were chiefly responsible for the increase in average American life expectancy from just 47 years in 1900 to the mid-70s by the late 20th century. "In 1900, one in 40 Americans died annually. By 2013, that rate was roughly one in 140, a cumulative improvement of more than two thirds," reported a 2016 analysis by University of Pennsylvania researchers.
Today the leading causes of the deaths that mainly afflict older Americans are cardiovascular diseases, cancers, unintentional injuries, lower respiratory illnesses, and diabetes. Nostrums prescribed by politicians are not likely to have much effect on them.
Among other policies, the Post reported that many of the public health officials and lawmakers with whom it spoke decried "a health-care payment system that does not reward preventive care." And why not? After all, an ounce of prevention is worth a pound of cure, right? Not necessarily, according to a comprehensive analysis of preventive care studies published in the Journal of the American Medical Association (JAMA) in 2021. "General health checks were not associated with reduced mortality or cardiovascular events," noted the researchers. This bolstered the findings of a similar analysis in 2019 by researchers associated with the non-profit medical evidence review collaborative Cochrane, which concluded that "health checks have little or no effect on total mortality."
The Post article also suggested that fighting between congressional Democrats and Republicans has stymied "legislation linked to gains in life expectancy, including efforts to expand access to health coverage and curb access to guns." As it turns out, various studies over the past two decades have calculated that lack of health insurance is associated with only a slightly higher risk of death.
A 2009 study in the American Journal of Public Health reported estimates that the lack of health insurance among Americans ages 25 to 65 may have been responsible for between 18,000 and 45,000 (0.8 to 1.8 percent) of deaths annually. At the time, 46 million Americans under the age of 65 were uninsured; by 2023 that had dropped to 23 million. As health insurance coverage increased, U.S. life expectancy stagnated and then fell.
What about guns? Unfortunately, the trends in both the rate and absolute number of firearm deaths—homicides, suicides, and accidents—have been upward over the past decade. The rate of firearm deaths hovered around 15 per 100,000 during the 1970s and 1980s and began to fall in the mid-1990s, reaching its lowest point at 10 per 100,000 in 2004.
The rate of firearm mortality in the U.S. remained slightly over 10 per 100,000 over the next decade, until it began to rise in 2014, hitting 14.6 per 100,000 in 2021, a rate last seen in the bad old days of the 1970s, 1980s, and 1990s.
Deaths from suicide have consistently been greater than those from homicide. In 2022, for example, the number of people who killed themselves using firearms reached 26,993 whereas those killed by others numbered 19,592. Most gun deaths occur at earlier ages, thus proportionately lowering the U.S. population's overall life expectancy. A 2018 study in BMJ Evidence-Based Medicine calculated that firearm deaths between 2000 and 2016 reduced U.S. average life expectancy by 2.48 years. The researchers argued that other health gains during that period masked this countervailing downward life expectancy trend. And it does coincide with the slow-down in life expectancy increase that began around 2010.
What could politicians do about this? Setting aside constitutional issues, a 2023 comprehensive analysis of various policies aiming to reduce gun violence by researchers at the RAND Corporation think tank found relatively weak evidence that any of them worked all that well. For example, with respect to reducing violent crime, the evidence for the efficacy of policies such as banning assault weapons, imposing firearm safety training requirements, and requiring licenses and permits was inconclusive. Supportive evidence did, however, suggest that child access prevention laws could reduce youth suicides, accidents, and some violent crime deaths; and limits on concealed carry and stand-your-ground laws might reduce violent crime deaths.
The Post reported that some politicians pointed to the rising death toll from "lethal drug overdoses" as a significant factor in declining U.S. life expectancy. The Post did, however, acknowledge that drug deaths "are not solely responsible for the decline in life expectancy." It is worth noting that opioid overdose deaths began truly soaring after 2010 when users turned to illicit heroin and fentanyl after the introduction of Food and Drug Administration–approved abuse-deterrent formulations.
So how much do drug overdose deaths contribute to the recent decline in U.S. life expectancy? A 2021 comprehensive review of factors affecting mortality trends in the U.S. between 1999 and 2018 found that average life expectancy would "have been 0.3 years greater were it not for increases in unintentional drug poisoning." In a 2023 preprint article, two Johns Hopkins University researchers calculated that opioid overdose deaths between 2019 and 2021 reduced U.S. life expectancy by 0.65 years. If politicians and policy makers really want to make increasing life expectancy a priority, one huge step would be to actually end the war on drugs. A cease-fire in the drug war would likely reduce gun deaths too.
The fact that Americans have been getting fatter has also contributed to the recent stalling and then decline of U.S. life expectancy. A 2022 preprint by researchers associated with Oxford University and the University of Texas at Austin calculates that, when mortality from obesity is properly accounted for, it may be cutting U.S. life expectancy by 1.7 years.
In a 2023 working paper, Socio-Behavioral Factors Contributing to Recent Mortality Trends in the United States, a team of demographers observed with considerable understatement that "hundreds of factors affect levels of mortality in every population." They nevertheless gamely sought to identify possible factors for the changes in U.S. adult mortality over the period 1997–2019, using data from the National Health Interview Surveys (NHIS) for years 1997–2018. The variables they examined included alcohol consumption, cigarette smoking, health insurance coverage, educational attainment, mental distress, obesity, and race/ethnicity.
Among other things, the authors, in line with earlier studies, concluded that "changes in health care coverage, as measured here, had a negligible effect" on U.S. life expectancy trends over the past two decades. The two biggest factors they identified as affecting U.S. life expectancy trends were that "mortality falls with rising educational attainment" while "increasing mental distress contributed to the stagnation of mortality improvement." Between 1997 and 2019, the percentage of college graduates rose from 24 percent to 36 percent of the U.S. population age 25 and above. Research consistently shows that college graduates tend to be less obese, smoke less, and eat better. Rising mental distress among NHIS participants as measured using the K-6 scale, especially after 2008, correlated with increasing mortality rates.
The nine-year difference in adult life expectancy between those Americans who are college graduates and those who are not is particularly striking.
However, the U.S. is not alone with respect to differential socioeconomic life expectancy outcomes. Even countries famed for their government-run universal health care systems, such as France, experience them. For example, the European Commission's 2019 country health profile of France reports that life expectancy for men and women in the top 5 percent of income is 84.4 and 88.3 years, compared to 71.7 years and 80 years, respectively, for those in the bottom 5 percent. That works out to male and female socioeconomic life expectancy gaps of roughly 13 years and 8 years. The report notes that the gap in longevity can be explained at least partly by differences in education and living standards.
In the Post article, Sen. Bernie Sanders (I–Vt.) says that achieving Norway's average life expectancy of 83 years should be our goal. It is worth noting that the life expectancy of adult American college graduates is 83.3 years, three years higher than the 80.3 years average for the relatively well-off countries that are members of the Organization for Economic Cooperation and Development.
A 2019 report from the Norwegian Institute of Public Health compared the average life expectancies of that country's richest 1 percent with its poorest 1 percent. The report noted that "the differences in life expectancy between the one per cent richest and one per cent poorest in Norway were 14 years for men and 8 years for women." A 2016 study in JAMA reported essentially the same gap between America's richest and poorest citizens. "The gap in life expectancy between the richest 1% and poorest 1% of individuals was 14.6 years for men and 10.1 years for women," observed the researchers in JAMA.
"It has surprised researchers and policy makers that even with a largely tax-funded public health care system and relatively evenly distributed income, there are substantial differences in life expectancy by income in Norway," said Dr. Jonas Minet Kinge, senior researcher at the Norwegian Institute of Public Health, in a press release about the report.
So why did U.S. life expectancy trends slow and then peak in 2014? And what, if anything, can policy makers and politicians realistically do to make increasing it a priority? As noted above, the big recent dip largely resulted from the COVID-19 pandemic. A 2023 Scientific Reports article "estimated that US life expectancy at birth dropped by 3.08 years due to the million COVID-19 deaths" between February 2020 and May 2022. But let's set aside that steep post-2020 downtick in life expectancy resulting from nearly 1.2 million Americans dying of COVID-19 infections.
A 2020 study in Health Affairs chiefly attributed the 3.3-year increase in U.S. life expectancy between 1990 and 2015 to public health, better pharmaceuticals, and improvements in medical care. By public health, the authors meant such things as campaigns to reduce smoking, increase cancer screenings and seat belt usage, improve auto and traffic safety, and increase awareness of the danger of stomach sleep for infants. With respect to pharmaceuticals, they cited the significant reduction in cardiovascular diseases that resulted from the introduction of effective drugs to lower cholesterol and blood pressure.
So a big part of what propelled increases in U.S. life expectancy is the fact that the percentage of Americans who smoke has fallen from 43 percent in the 1970s to 16 percent now. Smoking is associated with higher risks of cardiovascular diseases and cancers, rates of which have been dropping for decades. In addition, the rising percentage of Americans who are college graduates correlated with increasing life expectancy.
However, since U.S. life expectancy peaked in 2014, countervailing increases in the death rates from drug overdoses, firearms, traffic accidents, and diseases associated with obesity have contributed to the flattening of U.S. life expectancy trends.
A 2021 comprehensive analysis of the recent stagnation and decline in U.S. life expectancy in the Annual Review of Public Health (ARPH) largely concurs, finding that "the proximate causes of the decline are increases in opioid overdose deaths, suicide, homicide, and Alzheimer's disease." Interestingly, the U.S. trend in Alzheimer's disease prevalence has been downward since 2011. In addition, the ARPH review noted that "a slowdown in the long-term decline in mortality from cardiovascular diseases has also prevented life expectancy from improving further." So enabling and persuading more properly diagnosed Americans to take blood pressure and cholesterol-lowering medications would likely boost overall life expectancy.
Hectoring members of Congress to make increasing life expectancy a "political priority" does not change the fact that there simply are no "silver bullet" policies available for achieving that goal.
The post Guns, Germs, and Drugs Are Largely Responsible for the Decline in U.S. Life Expectancy appeared first on Reason.com.
This week's featured article is "Meet Florida's Python Bounty Hunters" by Ronald Bailey.
This audio was generated using AI trained on the voice of Katherine Mangu-Ward.
Music credits: "Deep in Thought" by CTRL and "Sunsettling" by Man with Roses
The post The Best of Reason: Meet Florida's Python Bounty Hunters appeared first on Reason.com.
Dubai, United Arab Emirates — "We have language on fossil fuel for the first time ever," declared COP28 President Sultan Ahmed Al Jaber as he gaveled through the deal reached on the Global Stocktake at the U.N.'s annual climate change meeting, which closed a day late. Al Jaber is right. After more than 30 years of U.N. climate change negotiations since the Earth Summit in 1992, the words "fossil fuels" do appear for the first time in an officially adopted U.N. decision document. Previously, climate negotiations had focused solely on emissions without mentioning whence those emissions came.
Overnight wrangling among the 196 countries negotiating the Global Stocktake (GST) at COP28 yielded the shiny new outcome document. The new GST avoids the unfortunate earlier situation in which the initial GST text triggered OPEC and other oil and gas-producing countries with the incredibly hurtful words "phase-out." Instead, the new GST more gently calls upon countries to engage in such global efforts as "transitioning away from fossil fuels in energy systems."
Since the new GST was approved by consensus at COP28, this new phraseology was apparently able to mollify those negotiators and hordes of climate activists who fiercely decried the second GST draft. That misbegotten text in their view merely suggested that countries could choose from a list of measures aiming to address the problem of man-made climate change that included "reducing both consumption and production of fossil fuel."
At the closing session, U.N. climate chief Simon Stiell declared that COP28 "needed to signal a hard stop to humanity's core climate problem—fossil fuels and their planet-burning pollution. Whilst we didn't turn the page on the fossil fuel era in Dubai, this outcome is the beginning of the end."
The GST maintains the fond hope that keeping the increase in global average temperature below 1.5 C above the preindustrial average (1850-1900) is still possible if humanity will just cut its greenhouse gas emissions by 43 percent by 2030 and 60 percent by 2035 relative to the 2019 level.
Besides calling for transitioning away from fossil fuels, the GST also highlights other pathways for cutting greenhouse gas emissions. These include tripling renewable energy capacity globally and doubling the global average annual rate of energy efficiency improvements by 2030; phasing down coal power generation; and, also for the first time in a U.N. climate decision document, accelerating zero- and low-emission technologies, including, wait for it, nuclear power. In another gesture toward energy and climate realism, the GST also "recognizes that transitional fuels can play a role in facilitating the energy transition." Translation: Reducing carbon dioxide emissions by switching from coal to natural gas is a big step in the right direction.
The GST also assumes that countries will take immediate action with respect to their greenhouse gas emissions but does further note that "this does not imply peaking in all countries" by 2030. Why? Because "peaking may be shaped by sustainable development, poverty eradication, needs and equity and be in line with different national circumstances." This simply recognizes the plain fact that lots of poor countries will need to use cheap fossil fuels to grow their economies and lift their citizens out of poverty.
The GST is meant to guide countries over the next two years as they, in accord with their obligations under the Paris Climate Change Agreement, revise and update their nationally determined contributions (NDCs) for handling climate change. NDCs detail what each country plans and promises to do during the next five-year period (2025-2030). The new NDCs are supposed to align with the conclusions of the GST and are to be submitted to the U.N. before COP30 convenes in Brazil in 2025.
The post No Fossil Fuel Phase-Out in COP28 Climate Deal appeared first on Reason.com.
Dubai, United Arab Emirates—"The North Star of the COP28 Presidency is to keep 1.5°C within reach," has been the frequent refrain of Sultan Ahmed Al Jaber, the COP28 president overseeing the current U.N. climate change negotiations in Dubai. Al Jaber is reflecting the oft-chanted activist slogan "Keep 1.5 Alive!" The idea is that humanity must reduce its emissions of globe-warming greenhouse gases from burning fossil fuels sufficiently to keep the planet from warming more than 1.5 degrees Celsius above the pre-industrial baseline (1850-1900).
It is worth tracing the history of where the 1.5 C "North Star" originated and what the consequences of breaching it would likely be. The 1.5 C threshold was officially enshrined as a goal under the United Nations Framework Convention on Climate Change with the adoption of the Paris Climate Change Agreement in 2015. Article 2 in the Paris Agreement commits signatories to strengthening the global response to the threat of climate change by "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change."
In his 2007 article in Energy Policy, University of Sussex economist Richard S.J. Tol traces the germination of the 2 C target back to a 1995 report on a workshop convened by the German Advisory Council on Global Change. It is notable that only one member of the eleven-member advisory council was a meteorologist, but, on the other hand, there were four economists.
The advisors adopted two principles to guide their work. The first was the "preservation of Creation in its present form," achieved chiefly by staying within their guess of what would be "a tolerable 'temperature window'." The second was the "prevention of excessive costs." Their analysis of what would constitute a tolerable temperature window occupies a single paragraph. There they reckoned that the mean maximum temperature during the last interglacial period was 16.1 C, to which they arbitrarily added a further 0.5 C to establish a tolerable maximum temperature of 16.6 C. They then assumed that the 1995 global mean temperature was around 15.3 C, which would be only 1.3 C below their tolerable maximum. Finally, they presupposed that the 1995 average was 0.7 C above the preindustrial average, which yields an overall 2.0 C threshold. (For what it's worth, the advisors just as sketchily calculated that if global average temperatures rose by 2.0 C above preindustrial levels, global GDP would be 5 percent lower than it would otherwise have been.)
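For readers who want to retrace that back-of-the-envelope reasoning, the arithmetic works out as follows. (The figures are simply those cited in the paragraph above; the variable names are mine, not the advisory council's.)

```python
# Reconstruction of the advisory council's 1995 "tolerable temperature window,"
# using only the figures cited above.
interglacial_max = 16.1      # assumed mean maximum temperature of the last interglacial (deg C)
arbitrary_margin = 0.5       # the extra margin the advisors added
tolerable_max = interglacial_max + arbitrary_margin      # 16.6 C

mean_1995 = 15.3             # assumed global mean temperature in 1995 (deg C)
headroom = tolerable_max - mean_1995                     # 1.3 C of "tolerable" warming left

warming_since_preindustrial = 0.7    # assumed 1995 warming above the preindustrial average
threshold = headroom + warming_since_preindustrial       # 2.0 C above preindustrial

print(round(tolerable_max, 1), round(headroom, 1), round(threshold, 1))  # 16.6 1.3 2.0
```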
Tol notes that the 2.0 C target was adopted by the Council of the European Union (CEU) just a year later, when it stated that it "believes that global average temperatures should not exceed 2°C above pre-industrial level." As Tol points out, the CEU reaffirmed that target in 2004. Ultimately, Tol persuasively argues that "the official documents that justify the 2°C warming target for long term climate policy have severe shortcomings. Methods are inadequate, reasoning sloppy, citations selective, and the overall argumentation rather thin." He adds, "This does not suffice for responsible governments, answerable to the people, when deciding on a major issue."
Nevertheless, other countries involved in climate negotiations could not ignore the European Union's push for a 2 C warming target. As it happens, the 2 C target was internationally recognized in the Copenhagen Accord, which was hastily cobbled together at the last minute in 2009 to keep COP15 from collapsing entirely. In the Accord, countries agreed that deep cuts in global emissions would be needed "so as to hold the increase in global temperature below 2 degrees Celsius." The last paragraph of the Copenhagen Accord is also where the lower 1.5 C figure was first officially incorporated.
In their 2023 article in WIREs Climate Change, two researchers with the French Center for Scientific Research, Béatrice Cointe and Hélène Guillemot, trace the history of the diplomatic jockeying that led to the adoption of the 1.5 C goal. The researchers note that the lower threshold was initially championed in climate negotiations by the Alliance of Small Island States (AOSIS), whose members are concerned about the effects of rising sea levels owing to glacier and ice sheet melt caused by increasing temperatures. AOSIS persuaded the Least Developed Countries (LDC) bloc to sign onto the goal, resulting in its inclusion in the Paris Agreement in 2015. Cointe and Guillemot do tellingly note that "before the Paris Agreement, the 1.5°C limit was outside the range of explored pathways. It was deemed unrealistic."
However, the Paris Agreement set up a process through which the IPCC would commission a report specifically probing the effects of a temperature increase of 1.5 C and exploring possible emissions pathways to get there. The result was the IPCC's 2018 special report Global Warming of 1.5 C. That report concluded that to hold temperature increases below that threshold, humanity would have to cut greenhouse gas emissions—chiefly carbon dioxide emitted by the burning of fossil fuels—in half by 2030, and reach net zero emissions by 2050. In fact, it is the special report that is largely responsible for embedding the concept of "net zero" into current climate negotiations. "The world is going to end in 12 years if we don't address climate change," is how Rep. Alexandria Ocasio-Cortez (D-NY) in 2019 infamously mischaracterized the report's findings. And the congresswoman is far from alone. Earlier this year, President Joe Biden asserted, "If we don't keep the temperature from going above 1.5 degrees Celsius raised, then we're in real trouble. That whole generation is damned. I mean, that's not hyperbole, really, truly in trouble."
Cambridge University climate researcher Michael Hulme forcefully rejected this kind of catastrophizing deadline-ism in his 2019 editorial in WIREs Climate Change. "The rhetoric of deadlines and 'it's too late' does not do justice to what we know scientifically about climate change," observed Hulme. "It is as false scientifically to say that the climate future will be catastrophic as it is to say with certainty that it will be merely lukewarm. Neither is there a cliff edge to fall over in 2030 or at 1.5°C of warming."
Pennsylvania State University climate researcher Michael Mann also rebuffs this kind of climate doomerism. In a post on X (formerly Twitter) Mann called out Biden's comment as "Unhelpful rhetoric, unsupported by the science. It's a continuum not a cliff." He further noted, "If we miss the 1.5C exit ramp, we still go for 1.6C exit rather than give up."
To keep the global average temperature increase below 1.5 C, a recent U.N. report calculated, the world must cut its greenhouse gas emissions by 2030 to 43 percent below their 2019 levels. The world is very unlikely to achieve such steep cuts during the next six years. So it is good, although not surprising, news that when the world passes through the 1.5 C target, it will not be plunging to its death over a climate cliff. The upshot is that Al Jaber, climate activists, and COP28 negotiators are pursuing a fake North Star.
The post There Is No 1.5°C Climate Cliff appeared first on Reason.com.
Dubai, United Arab Emirates—"There is no greater admission of policy failure" than Europe's carbon border adjustment mechanism (CBAM), said Rod Richardson, head of the pro-market Grace Richardson Fund.* "Our carbon pricing is failing, so our solution to shooting ourselves in our own foot is to shoot everyone else in theirs."
Richardson made this observation at COP28, the United Nations' latest climate summit, during a roundtable contrasting American and European climate change policies.
The CBAM, which went into effect on October 1, aims to "equalize the price of carbon between domestic products and imports." Among other things, the European Union (E.U.) sets the price of carbon dioxide emissions through its emissions trading scheme. Each year E.U. regulators reduce the overall number of greenhouse gas emissions permits allocated to regulated installations and aircraft operators. This declining cap ensures that E.U. emissions decrease over time. It has also meant higher emissions permit prices, which get passed to European consumers.
The CBAM is an attempt to prevent "carbon leakage," in which the E.U.'s climate policies are undercut by cheaper carbon-intensive imports and by European companies relocating to countries with less stringent climate change mitigation standards. It "marks the first time a group of nations have imposed their domestic climate policy on other nations," noted three Resources for the Future researchers in October. "The law requires importers to purchase EU Emissions Trading System allowances equal to the amount of carbon embedded in the products they wish to import into the European Union."
In other words, the E.U. is aiming its tariff ordnance at other countries by imposing charges on the carbon dioxide emissions from their use of fossil fuels. The CBAM currently covers imports of cement, iron and steel, aluminum, fertilizers, electricity, and hydrogen.
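As a rough illustration of how such a border charge works in practice, consider a hypothetical shipment. (The embedded-carbon figure and allowance price below are assumptions for the sake of the example, not actual CBAM benchmarks or EU ETS prices, and the variable names are mine.)

```python
# Hypothetical illustration of a carbon border adjustment charge on imported steel.
# Neither the embedded-carbon intensity nor the allowance price is an official
# CBAM or EU ETS figure; both are assumptions chosen for illustration only.
steel_imported_tonnes = 1_000      # tonnes of steel an importer brings into the EU
embedded_co2_per_tonne = 1.9       # assumed tonnes of CO2 embedded per tonne of steel
allowance_price_eur = 80.0         # assumed ETS allowance price, euros per tonne of CO2

embedded_co2 = steel_imported_tonnes * embedded_co2_per_tonne   # 1,900 tonnes of CO2
border_charge = embedded_co2 * allowance_price_eur              # 152,000 euros

print(f"Allowances to surrender: {embedded_co2:,.0f} tonnes of CO2")
print(f"Border charge at {allowance_price_eur:.0f} EUR/tonne: {border_charge:,.0f} EUR")
```

The point of the mechanism is that the importer pays roughly what a European producer would have paid under the ETS for the same embedded emissions.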
Other countries—including China, India, and many in Africa—consider the CBAM just another tariff barrier that limits their people's ability to prosper through mutually beneficial international trade. There is a risk that this disagreement could spiral into a global trade war. Worryingly, similar carbon border adjustment schemes have been introduced in Congress. One such is the Foreign Pollution Fee Act of 2023, submitted by Sens. Bill Cassidy (R–La.) and Lindsey Graham (R–S.C.) in November.
Richardson proposes a global carbon tariff ceasefire: a Climate & Freedom Accord, which could be incorporated into Article 6 of the Paris Climate Change Agreement.
Couched in recondite language, Article 6 basically authorizes the use of markets internationally to cut carbon emissions. The Climate & Freedom Accord proposal still needs further fleshing out, but its goal is to move away from the stick of trade barriers and toward the carrots of internationally available tax-free investment capital for developing and deploying clean tech. Participating countries would agree to open their markets while eliminating all conventional environmental, energy, and other subsidies targeted to already profitable enterprises. After all, if a company is profitable, why does it need subsidies?
As Philip Rossetti at the R Street Institute testified to the Senate Finance Committee this year, his research has found that of "the total clean electricity generation through 2030 that would be eligible to claim the IRA [Inflation Reduction Act] subsidies, 67 percent of that generation would have been built even without the subsidy." That means $120 billion of the roughly $180 billion targeted at clean electricity projects would be built anyway. The subsidies are a nice deal for the companies that receive them, but they're bad for the taxpayers who shell them out and they do nothing for the climate.
The Climate & Freedom Accord aims to "attract participation by unlocking vast, tax-free international private investment flows" for the countries who join it. Innovation Acceleration Bonds, Loans & Savings (InABLS), for example, would be internationally reciprocal, private, tax-exempt debt financing the deployment of efficient low-carbon property, plant and equipment. As Richardson and Jigar Shah* explained in Reason back in 2019, such tax-free debt would encourage the rapid deployment of the newest, most efficient low-greenhouse-emissions technologies without picking winners or losers. If a startup's technology reduces greenhouse emissions, then it qualifies for issuing InABLS to finance it.
One possible issue is that countries could regard tax-free InABLS as yet another form of subsidy. This problem could be adequately addressed if the Accord were incorporated into the Paris Agreement.
Unfortunately, free market climate solutions face considerable opposition from many governments and most activists at COP28. "The energy transition driven by markets is not fast enough," said Caroline Brouillette, head of Climate Action Network Canada, at a press conference today. "We need a transition of our economic systems as well." And she didn't mean opening up markets to competition, free trade, and simplified permitting.
*CORRECTIONS: This article has been corrected to eliminate an incorrect institutional affiliation for Jigar Shah. It has also been corrected to clarify the name of the Grace Richardson Fund, a supporter of the Reason Foundation, which publishes this website.
UPDATE: After publication, Rod Richardson sent this note:
Hat tip to Ron Bailey for his excellent coverage of COP 28. In the interest of accuracy, please allow me to clarify that Ron's description of the InABLS qualification mechanism no longer applies. He writes: "If a startup's technology reduces greenhouse emissions, then it qualifies for issuing InABLS to finance it." That was true for the Clean Asset Bonds, Loans and Savings (CABLS) proposal found in my 2019 Reason article that Ron cites. But that qualification mechanism was dropped, because it had been criticized by several free market thinkers "as yet another form of subsidy" as Ron puts it, for picking winners and losers, and potentially creating barriers for new innovations not on a qualified list of technologies.
In late 2022, Nick Loris proposed that a qualification mechanism might not be needed at all. He pointed to a Tax Foundation study by Alex Muresianu that showed that immediate capital expensing has strong decarbonizing benefits. Simply reducing the cost of capital for new property, plant and equipment (PP&E) creates a driver speeding up the natural competitive process of deploying new (therefore generally cleaner and more efficient) technology, and retiring older, dirtier technology.
In other words, a technology neutral supply side tax cut that lowers the cost of new investment—such as internationally reciprocal tax-exempt debt for PP&E—would have decarbonizing benefits, and accelerate innovation across all technologies, and across borders, without creating barriers to unforeseen innovations.
Such instruments—internationally reciprocal tax-exempt debt for all PP&E—are called InABLS in the May 2023 Climate & Freedom Accord straw proposal. But after a series of workshops in April, think tanks developing the Accord concept prefer the terms "CoVictory Bonds, Loans and Savings Funds." "CoVictory" conveys that these instruments provide multiple economic and environmental benefits for multiple cooperating nations. They also provide a carrot to encourage nations to open markets to trade, competition and innovation, all of which are fundamentally decarbonizing.
The most up-to-date description of the Accord's CoVictory Funds proposal can be found in this recent article in the European Conservative.
The post Promoting Free Market Solutions to Climate Change at COP28 appeared first on Reason.com.
Dubai, United Arab Emirates—Earlier today at COP28, the United Nations' climate summit, U.N. Secretary-General António Guterres declared that "a central aspect, in my opinion, of the success of the COP will be for the COP to reach a consensus on the need to phase out fossil fuels in line with a time framework that is in line with the 1.5 degree limits." In other words, Guterres wants COP28 to impose a global deadline for the elimination of fossil fuels from the world's energy supplies.
The new proposed text for the Global Stocktake (GST)—the principal document being composed at the conference—may disappoint Guterres. Though the options in the negotiating text all proposed some goal for the "phase out" of fossil fuels as of last Friday, that term is nowhere in the new GST text.
While expressing a desire for deep, rapid, and sustained reductions in greenhouse gas emissions, the GST draft text as it stands now "calls upon Parties to take actions that could include," among other things, "Reducing both consumption and production of fossil fuels, in a just, orderly and equitable manner so as to achieve net zero by, before, or around 2050 in keeping with the science."
This clearly avoids the trigger words "phase out," but doesn't it really amount to the same thing? Perhaps not. Because that formulation "could include" suggests that reducing the consumption and production of fossil fuels is just one option among many that signatories may choose to pursue over the next two years.
It is worth noting that this draft text does include the phrase "fossil fuels." If that remains in the final text, this will be the first time an official COP decision document actually deployed those words. Previously, such documents have coyly focused on cutting greenhouse gas emissions without mentioning from whence those pesky emissions might come.
This outcome is entirely unsatisfactory to the longtime climate activist (and former U.S. vice president) Al Gore. "COP28 is now on the verge of complete failure," he posted on X (formerly Twitter). "The world desperately needs to phase out fossil fuels as quickly as possible, but this obsequious draft reads as if OPEC dictated it word for word. It is even worse than many had feared."
Gore is not alone in his disappointment. Spain's environment minister, a co-leader of the European Union's delegation, said "there are elements in the text that are fully unacceptable."
Other options in the draft GST text include such measures as "Tripling renewable energy capacity globally and doubling the global average annual rate of energy efficiency improvements by 2030." This is in line with the global pledge issued on December 2 and signed by nearly 120 countries. Note that China, India, Saudi Arabia, and Russia did not commit to this goal.
Interestingly, the draft text mentions nuclear energy as one of the measures that countries could include in their efforts. So perhaps COP28 is the "nuclear COP" after all.
The post COP28: No 'Phase Out' of Fossil Fuels After All appeared first on Reason.com.
Dubai, United Arab Emirates—"It's been a very good COP for nuclear energy," said Jonathan Cobb of the World Nuclear Association. He was referring to COP28, this year's United Nations summit on climate change, which had given his industry several reasons for optimism. Most notably, 22 countries—including the United States, the U.K., France, Japan, and South Korea—had issued the ministerial Declaration to Triple Nuclear Capacity by 2050.
"COP28 will be known as the nuclear COP," Australia's shadow climate minister, Ted O'Brien, declared on one panel. And America's climate envoy, former Secretary of State John Kerry, proclaimed when the declaration was announced that "you can't get to net zero in 2050 without nuclear power."
"Net zero" is the condition where the anthropogenic emissions of greenhouse gases are balanced by removal from the atmosphere—one of the targets set by the Paris Climate Change Agreement. Kerry noted that nuclear energy currently supplies a third of the world's low-carbon electricity.
This increased recognition of nuclear energy as a climate-friendly power source builds on the progress I reported at last year's climate summit in Egypt. The declaration recognizes that "nuclear energy is already the second-largest source of clean dispatchable baseload power, with benefits for energy security." (Baseload means power generation that generally runs continuously throughout the year and operates at stable output levels. This contrasts with variable power sources, such as solar and wind: It isn't always sunny or windy.)
The countries issuing the declaration also "commit to supporting the development and construction of nuclear reactors, such as small modular and other advanced reactors for power generation as well as wider industrial applications for decarbonization." An example of those "wider industrial applications" would be the X-Energy small modular gas-cooled reactors Dow Chemical is using at its manufacturing plant on the Gulf Coast of Texas.
Another advantage of nuclear power plants is their relatively small size compared to the extensive and often remote areas needed to deploy wind and solar power. As Cobb pointed out, nuclear reactors can be slotted into the sites of decommissioned coal and natural gas power plants. The new nuclear power plants can run the already installed turbines and transmit the electricity they generate through already installed transmission lines. And the local communities are already set up to operate power plants.
As it happens, TerraPower announced earlier this year that it will build its advanced Natrium nuclear reactor near a retiring coal plant in Kemmerer, Wyoming.
All too predictably, various environmental groups at COP28 denounced the declaration. "Promoting a nuclear expansion at COP 28 is only a plan for climate failure," asserted Tim Judson, head of the U.S.-based Nuclear Information and Resource Service, in a press release. Lise Masson of Friends of the Earth International added: "We have no time to waste on such false solutions that only delay and distract real and adequate action to address the climate crisis."
These activists point out that it takes a long time to build a new reactor. Of course, these delays are largely the result of their own decades of support for crippling overregulation.
In a 2017 study in Energies, the Australian economist Peter Lang calculated that if the heavy regulation championed by anti-nuclear activists had not prevailed during the 1970s and '80s, nuclear power "could have replaced up to 100% of coal-generated and 76% of gas-generated electricity" globally by 2015. Had the earlier learning-curve trajectory been allowed to continue, nuclear power plant construction costs would be 10 percent of what they are now. This would have cut cumulative carbon dioxide emissions by 174 gigatons, and annual carbon dioxide emissions would now be one-third less.
About 440 nuclear reactors are operating now, with 60 more under construction and 110 more on the drawing boards. The goal of tripling nuclear energy production implies the construction of 880 new power plants by 2050. That would mean building an average of 34 new reactors every year from now until then.
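The arithmetic behind that build rate is simple enough to sketch. (It uses the round numbers above, treats tripling capacity as roughly tripling the number of reactors, and ignores retirements of existing plants, so it is an approximation rather than an industry forecast.)

```python
# Rough arithmetic behind the "34 reactors per year" figure, using the round
# numbers cited above. Assumes tripling capacity means roughly tripling the
# fleet and ignores retirements of today's reactors.
operating_now = 440
target_2050 = operating_now * 3                     # 1,320 reactors
new_reactors_needed = target_2050 - operating_now   # 880 new builds
years_remaining = 2050 - 2024                       # roughly 26 years
per_year = new_reactors_needed / years_remaining

print(round(per_year))  # about 34 new reactors per year
```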
Would that actually be enough? While nuclear power generates a significant amount of the world's electricity, it supplies only about 5 percent of the world's primary energy. Fossil fuels still account for around 80 percent of primary energy consumption. If you really want to cut greenhouse emissions over the course of this century, you'd need to increase the world's nuclear capacity much more—tripling won't be enough.
If climate activists were serious about addressing what they call the climate crisis, they would be thronging the streets demanding a more streamlined approval process, enabling a much faster nuclear-power rollout.
The post Is COP28 the 'Nuclear COP'? appeared first on Reason.com.
Python Cowboy Mike Kimmel and Python Huntress Amy Siewe are just two of the legendary characters trying to keep the rising population of Burmese pythons in check in Florida's Everglades. Both were once snake killers for hire for the South Florida Water Management District (SFWMD) but now work as professional snake hunting guides.
"I was a state contractor for four years," says Siewe. "I couldn't make enough money to pay my bills. So I decided to become a full-time python hunting guide in January." She averages three hunts per week.
Siewe's biggest catch so far was a 17-foot-3-inch monster that she killed in July 2021. Two years later, Kimmel slew a 16-foot-long female and was surprised to find over 60 eggs inside her. Of the 7,330 pythons killed by SFWMD contractors since March 2017, only 651 (9 percent) have been 10 feet or longer. (In July, 22-year-old amateur python hunter Jake Waleri captured a world record 19-foot Burmese python at Big Cypress National Preserve.)
The cadre of around 100 contracted snake hunters earn between $13 and $18 hourly for up to 10 hours a day, plus an incentive payment of $50 for each python measuring up to 4 feet and another $25 for each foot measured above 4 feet. Hunters also get paid $200 for each verified active python nest they remove.
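For readers who want to see how that fee schedule adds up for a single catch, here is a small illustrative calculator. (This is my own sketch of the incentive payments described above, not an official SFWMD or FWC tool; hourly pay and nest payments are separate, and how the programs handle partial feet is an assumption here.)

```python
# Illustrative calculator for the per-snake incentive payment described above:
# $50 for a python up to 4 feet, plus $25 for each foot beyond 4 feet.
# (Hypothetical sketch, not an official SFWMD/FWC tool; partial feet are
# treated as pro rata here, which may not match the programs' actual rules.)
def python_bounty(length_feet: float) -> float:
    base = 50.0
    extra_feet = max(0.0, length_feet - 4.0)
    return base + 25.0 * extra_feet

print(python_bounty(4))   # 50.0  -- a snake right at the 4-foot threshold
print(python_bounty(5))   # 75.0  -- relevant to the "cobra effect" math discussed below
print(python_bounty(10))  # 200.0 -- a 10-footer, the size only 9 percent of catches reach
```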
Both Kimmel and Siewe not only earn cash as python hunt guides but also from selling items—Apple Watch bands, bi-fold wallets—made of python leather. "The thing I like as a guide is that I get to take people out and teach them about the problems that the pythons are causing," says Siewe. "They get a chance to help save Florida's ecosystems. It's a really cool thing that I get to do." Private guides like Siewe are clearly an important supplement to state-contracted hunting.
At the Florida Fish and Wildlife Conservation Commission (FWC), the Python Action Team Removing Invasive Constrictors (PATRIC) program pays the same fees to its own contracted snake hunters. Since both programs were established in 2017, over 18,000 pythons have been caught and killed; contracted snake hunters have been responsible for the majority of the catches.
But this has barely made a dent in Florida's python population.
Besides contracting with professional snake hunters, the FWC launched its now annual Florida Python Challenge in 2013. During that first four-week challenge, the snake hunters competed to capture and kill as many pythons as possible. Then as now, contestants paid a $25 fee and took an online training course on how to safely capture and humanely kill the serpents.
In that first contest, the grand prize was $1,500 for the most snakes killed and $1,000 for the longest one. That year, the roughly 1,600 contestants managed to bag just 68 pythons. In the 2022 challenge, by contrast, 19-year-old Matthew Concepcion beat nearly 1,000 competitors and won a $10,000 grand prize by capturing 28 pythons; the contest as a whole killed 231 of the snakes. The prize money is now supplied by private foundations and companies.
The 2023 challenge, which ran August 4–13 this year, had 1,050 participants. This batch of contestants didn't do quite as well as 2022's—they nabbed 209 of the reptiles, with champion Paul Hobbs netting 20 of them—but they landed in the same general vicinity.
Anyone may kill a Burmese python at any time on private land and on certain listed FWC-managed lands. There is no need for a license, nor is there a bag limit. But the FWC does not offer any compensation for pythons except to contracted hunters or during the Florida Python Challenge.
Could bounties produce a "cobra effect"? The British colonial government in India once offered sufficiently high bounties for dead cobras that it perversely incentivized enterprising residents to breed cobras at home. Any such tales about illicit python ranching in Florida are unlikely to be true. For one thing, it costs $100–$200 to feed a hatchling python enough to grow it to four or five feet in a year. The python hunters would get only $50 to $75 for such a snake.
Why kill these beautiful reptiles? "Burmese pythons in southern Florida represent one of the most intractable invasive-species management issues across the globe," according to a January 2023 analysis by U.S. Geological Survey population ecologist Jacquelyn Guzy and her team of researchers. The number of Burmese pythons now living in the Greater Everglades Ecosystem could be anywhere from 150,000 to a million very hungry snakes. (Ironically, the Burmese python is assessed as "vulnerable" on the International Union for Conservation of Nature's Red List of Threatened Species—populations in its Asian home ranges had declined during the prior 10 years by 30 percent when it was last evaluated in 2011.)
Burmese pythons were likely established in southern Florida through accidental and intentional releases by pet owners who became overwhelmed with taking care of their 8- to 12-foot-long reptiles. (The animal's owners are generally advised to "always have a second person present when handling or feeding pythons longer than 8 feet. It doesn't take long for a full-grown Burmese python to overpower a person.") While the first Burmese python identified in the Everglades was roadkill way back in 1979, wildlife officials became aware they were breeding in the swamps of South Florida in the late 1990s and early 2000s.
As apex predators, Burmese pythons are ambush hunters; they have ravaged mid-sized mammal populations in the Everglades. A 2012 Proceedings of the National Academy of Sciences study found that populations of raccoons had dropped 99.3 percent, opossums 98.9 percent, and bobcats 87.5 percent since 1997. Marsh rabbits, cottontail rabbits, and foxes had effectively disappeared.
In a 2022 article for the journal Biological Conservation, biologists Alexander Pyron and Arne Mooers pointed out that the snakes' prey are "among the most widespread and abundant and so secure species in North America." So why, they ask, should we worry so much about their declines in the relatively small Everglades region? Pyron and Mooers acknowledge that pythons may also be having an impact on rare Everglades denizens that are difficult to observe. The constrictors also eat amphibians, alligators, white-tailed deer, wild pigs, birds, and other snakes.
The news for native species from the Everglades is not all dire. In a 2023 report for the U.S. Geological Survey, biologist Andrea Currylow and her colleagues cite evidence that some "natives bite back." Specifically, alligators, cottonmouth and indigo snakes, bobcats, and bears have been detected preying on juvenile Burmese pythons. "Although much more work is needed," they write, "our observations contribute to limited but growing indications of native species' resilience in southern Florida's Greater Everglades Ecosystem."
Guzy's team notes that the "unique combination of inaccessible habitat with the cryptic and resilient nature of pythons that do very well in the subtropical environment of southern Florida" renders them "extremely difficult to detect." In other words, it's hard to find well-camouflaged snakes in a roadless swamp. "The detection capacity for pythons is very low. I've seen estimates of 100 to 1,000 other pythons for every one python we see—1,000 being the extreme high end," Everglades Foundation Chief Science Officer Steve Davis told Newsweek in 2022.
Besides contracting with snake killers and holding the annual python roundup, researchers have tried using dogs to sniff them out, using "scout" snakes carrying radio transmitters to betray their fellows during breeding season, and testing various traps. All these methods are expensive and labor-intensive, and they have resulted in few, if any, captures. Guzy and company conclude that eradicating the snake from the area "is not possible with any existing tools, whether applied singly or in combination."
So what might work to eradicate Burmese pythons from Florida? Guzy's team suggests that genetic biocontrol using gene drives could be deployed in the future. Genes normally have a 50–50 chance of being inherited, but using gene drive systems increases the chances of inheriting targeted bioengineered genes to nearly 100 percent. Specifically, the researchers suggest targeting and destroying the female-determining X chromosome during spermatogenesis. As bioengineered male snakes interbreed with wild females, only male pythons would be born, leading ultimately to population collapse. In 2019, a team of biologists at the University of Georgia validated this process for reptiles by installing gene drives into brown anole lizards.
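To see why a drive that causes carrier males to sire only carrier sons pushes a population toward collapse, consider the following toy simulation. (Every parameter and the simulate function itself are invented for illustration; this assumes non-overlapping generations and is not a model of actual python demography or of the specific construct the researchers propose.)

```python
import random

random.seed(1)  # reproducible illustrative run

# Toy model of a gene drive in which drive-carrying males sire only
# drive-carrying sons. All numbers are invented for illustration.
def simulate(generations=12, females=500, wild_males=450, drive_males=50, offspring_per_female=2):
    for gen in range(1, generations + 1):
        males = wild_males + drive_males
        if males == 0 or females == 0:
            print(f"generation {gen}: breeding population gone")
            break
        new_females = new_wild_males = new_drive_males = 0
        for _ in range(females):
            sire_carries_drive = random.random() < drive_males / males
            for _ in range(offspring_per_female):
                if sire_carries_drive:
                    new_drive_males += 1      # drive-carrying sires produce only drive-carrying sons
                elif random.random() < 0.5:
                    new_females += 1          # wild matings yield a roughly 50/50 sex ratio
                else:
                    new_wild_males += 1
        females, wild_males, drive_males = new_females, new_wild_males, new_drive_males
        print(f"generation {gen}: females={females}, wild males={wild_males}, drive males={drive_males}")

simulate()
```

Because every female that mates with a carrier produces no daughters, the drive spreads through the male population while the number of females shrinks generation after generation, which is the whole point of the proposed biocontrol.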
Until then, the Python Cowboy and Python Huntress and their clients will have plenty of snakes to pursue.
The post Meet Florida's Python Bounty Hunters appeared first on Reason.com.
Dubai, United Arab Emirates—"A fast, fair, fully funded, and forever phase-out of fossil fuels," demanded climate activist Brandon Wu from ActionAid USA at a press conference today at the United Nations' 28th Climate Change Conference (COP28).
How fast? According to a new report endorsed by ActionAid and 200 other activist groups, rich countries like the U.K. and the U.S. should forever cease all extraction of coal, oil, and natural gas by 2031.
What exactly does fair and fully funded mean? At an earlier press conference convened by the activist group 350.org, Greenpeace Regional Campaigns Manager Ahmed El Droubi, citing a June 2023 study, suggested that the wealthy developed countries owe "$170 trillion in climate debt" to poor countries. For reference, world gross domestic product (GDP) in 2022 was just over $100 trillion. And it is worth considering how low world GDP would now stand if humanity had forsworn the vast improvements in living standards, health, longevity, and education made possible by the energy supplied by fossil fuels.
Despite the expansion of wind and solar power, fossil fuels still account for 82 percent of the world's primary energy. In the U.S., fossil fuels supply 79 percent of the primary energy that Americans consume, with nuclear accounting for 8 percent. It is not at all plausible that the U.S. would be able to completely and forever phase out fossil fuels over the next eight years. In fact, such a demand sounds borderline insane.
As it stands, the current negotiating text of the Global Stocktake document, which would be the principal output of COP28, retains several options calling for the phase-out of fossil fuels.
Option 1: A phase out of fossil fuels in line with best available science;
Option 2: Phasing out of fossil fuels in line with best available science, the IPCC's 1.5 pathways and the principles and provisions of the Paris Agreement;
Option 3: A phase-out of unabated fossil fuels recognizing the need for a peak in their consumption in this decade and underlining the importance for the energy sector to be predominantly free of fossil fuels well ahead of 2050;
Option 4: Phasing out unabated fossil fuels and to rapidly reducing their use so as to achieve net-zero CO2 in energy systems by or around mid-century;
Option 4: no text
Of course, each option raises its own questions. What is the best available science? For example, a 2022 study in Nature calculated that if all of the parties kept all of their current promises to cut their greenhouse gas emissions, global average temperatures would peak below the 2-degrees-Celsius threshold agreed to in the Paris Climate Change Agreement. If this result holds, that would mean the "climate crisis" being relentlessly flogged by activists and U.N. bureaucrats at COP28 would fade back into the still significant, but not potentially catastrophic, problem of climate change.
The second option seeks to bind the signatories to the already lost cause of trying to meet the Paris Agreement's aspirational goal of keeping future warming below 1.5 degrees Celsius. Why is it a lost cause? Because in order to achieve that temperature objective, the world must cut greenhouse gas emissions 43 percent by 2030, compared to 2019 levels, according to U.N. calculations. Keep in mind that the only year since 1990 that global greenhouse emissions fell significantly is the pandemic year 2020 when they dropped by 4.6 percent.
What about phasing out unabated fossil fuels? Unabated basically means capturing and sequestering the carbon dioxide emitted through burning coal, oil, and natural gas by burying it underground or planting trees to absorb it. Happily, a study published in November found that trees and other plants will likely absorb 20 percent more of the carbon dioxide emitted from burning fossil fuels than previously expected.
Of course, there remains the redundant Option 4: No text. In other words, COP28 negotiators may decide to remain silent on the issue of phasing out fossil fuels.
The post Fossil Fuel Phase-Out Frenzy at COP28 appeared first on Reason.com.
The World Meteorological Organization (WMO) issued its latest state of the global climate report as part of the kickoff for the opening in Dubai of the 28th Conference of the Parties (COP28) to the United Nations' Framework Convention on Climate Change. Let's just say that according to the WMO, 2023 was really hot compared to the average global temperatures in the late 19th century. How hot?
The global mean near-surface temperature in 2023 (to October) was around 1.40 ± 0.12 °C above the 1850–1900 average. Based on the data to October, it is virtually certain that 2023 will be the warmest year in the 174-year observational record, surpassing the previous joint warmest years, 2016 at 1.29 ± 0.12 °C above the 1850–1900 average and 2020 at 1.27±0.13 °C. The past nine years, 2015–2023, will be the nine warmest years on record.
Correlatively, the WMO reports that the accumulation in the atmosphere of globe-warming greenhouse gases—carbon dioxide, methane, and nitrous oxide—also reached record levels this year. For example, carbon dioxide, released chiefly through the burning of coal, oil, and natural gas, reached 424 parts per million in May, up more than 50 percent from the preindustrial level of about 280 ppm. As temperatures rise, the extent of sea ice declines, land glaciers and ice sheets lose mass, and sea levels rise.
The WMO report lists several extreme weather disasters from the current year, including floods, droughts, cyclones, and wildfires. And while certainly people are impacted by them, it is worth noting that the long-term trend is toward ever fewer deaths from natural disasters. This is largely because a wealthier world has been able to adapt to the impacts caused by natural hazards.
A 2023 preprint by European researchers analyzing global flood mortalities between 1975 and 2022 confirms this trend by finding that floods have become less deadly. "Despite population growth and increasing flood hazards, the average number of fatalities per event has declined over time," they report. In other words, people are, in general, adapting faster to whatever climate change is occurring than it can cause them harm.
Under the terms of the Paris Climate Change Agreement, signatories are supposed to undertake collective efforts to hold "the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels."
The United Arab Emirates' Sultan Al Jaber, who is the president-designate of COP28, said he's aiming for an "unprecedented outcome" at the conference, one which would keep alive the hope of achieving the 1.5 degrees Celsius goal of the Paris Agreement.
A new analysis, however, calculates that in order to have a 50 percent chance of holding global average temperature below 1.5 degrees Celsius above pre-industrial levels, global greenhouse gas emissions would have to be cut by 43 percent below their 2019 levels by 2030. Given current greenhouse gas emissions and concomitant global temperature trends, it seems unlikely that activists' demands to "keep 1.5 alive" will be met.
The post U.N. Climate Change Conference Opens During Hottest Year on Record appeared first on Reason.com.
The United Nations' 28th climate change conference will open on November 30 and run through December 12 in Dubai, where some 70,000 government officials, journalists, and activists are expected to participate. Expect vicious fights over climate reparations, hand-wringing about global temperature trends, and furious activist demands to ban fossil fuels. In other words, a now almost routine political exercise in climate drama.
At this Conference of the Parties (COP28) to the United Nations Framework Convention on Climate Change, delegates from nearly 200 countries will engage in the first-ever "global stocktake" that will supposedly "assess the collective progress" towards meeting the goals of the Paris Agreement on Climate Change. Specifically, the Paris Agreement calls for mitigating man-made global warming by "holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels."
So where do current global temperatures stand with respect to those goals? The U.S. National Oceanic and Atmospheric Administration reported on November 15 that October ranked as the warmest on record at 2.41 degrees Fahrenheit (1.34 degrees Celsius) above the 20th-century average. The agency calculated that "there is a greater than 99% chance that 2023 will rank as the warmest year on record for the world." On November 10, the European Union's Copernicus Climate Change Service noted that so far 2023 is "currently the warmest calendar year on record, and 1.43°C warmer than the pre-industrial reference period."
Global Stocktake
The global stocktake will almost certainly conclude that the past eight years have not seen much in the way of "collective progress" with respect to reining in climate change. Earlier this month, the World Meteorological Organization (WMO) reported that "heat-trapping greenhouse gases in the atmosphere once again reached a new record last year and there is no end in sight to the rising trend." In particular, the burning of fossil fuels has boosted the atmospheric concentrations of carbon dioxide a full 50 percent above the pre-industrial levels.
The Paris Agreement is structured such that each country makes pledges called nationally determined contributions (NDC) that outline their goals for addressing climate change. For example, in its NDC, the U.S. sets an economy-wide target of reducing its net greenhouse gas emissions by 50-52 percent below 2005 levels in 2030. As of 2022, U.S. emissions have fallen only 15.5 percent below 2005 levels. In September, the U.S. Environmental Protection Agency estimated that implementing the climate and energy provisions of the Inflation Reduction Act would result in cutting U.S. emissions 35 to 43 percent below 2005 levels.
Earlier this month, the U.N. issued a report that calculated that the world would have to cut its greenhouse gas emissions by 43 percent below 2019 levels by 2030 in order to have a good chance of keeping global average temperatures from rising 1.5 degrees Celsius above pre-industrial levels. To keep warming below 2 degrees Celsius global emissions would have to be cut around 27 percent below 2019 levels by 2030. Making the heroic assumption that all countries faithfully honored their current NDC promises, the world is instead on track by 2030 to cut emissions by 2 to 9 percent below their 2019 levels. This projection does, however, suggest that global emissions will peak before 2030.
Given these temperature and emissions trends, the Paris Agreement's aim of limiting the global average temperature increase to 1.5 degrees Celsius above pre-industrial levels is already out of reach.
The COP28 global stocktake is supposed to motivate countries to greatly strengthen the new NDC emissions-cutting pledges that they are expected to issue and confirm before 2025.
Loss and Damage
Wrangling over money is always a central concern at U.N. climate change conferences. Specifically, poor developing countries annually demand that rich developed countries provide them with financing to enable them to cut their emissions and adapt to climate change. Back in 2009, at COP15, the poor countries extracted a promise from rich countries to "mobilize" $100 billion per year in climate funding by 2020. The latest report from the Organization for Economic Cooperation and Development finds that that financing amounted to almost $90 billion in 2021. In its 2019 report Financing a Global Green New Deal, the U.N. Conference on Trade and Development (UNCTAD) called for rich countries to supply $2.5 trillion in annual climate and development financing to poor countries.
The main money fight at COP28 will be over how to "operationalize" the new Loss and Damage Fund that was launched at COP27 in Egypt last year. Loss and damage generally refers to covering the costs related to climate change that countries cannot avoid or adapt to. Basically, poor countries are demanding the moral equivalent of climate reparations from the wealthy countries whose cumulative greenhouse gas emissions are causing them losses from rising sea levels and extreme weather events attributed to climate change. In its Taking Responsibility report earlier this year, UNCTAD argued that the new Loss and Damage Fund be capitalized initially at $150 billion rising to $300 billion annually by 2030. In negotiations prior to COP28, developed countries made it clear that all contributions to new funds would be voluntary. In particular, the U.S. delegation wants to make it plain that loss and damage do not involve any basis for liability or compensation.
Fossil Fuel Phase-Out?
In 2021, at COP26 in Glasgow, Scotland, an initial call for parties to agree to a "rapid phase-out of coal" power was watered down in the Glasgow Climate Pact. At the insistence of China and other parties, the pact merely called upon parties to accelerate "efforts towards the phasedown of unabated coal power and phase-out of inefficient fossil fuel subsidies." "Unabated" here means coal power generation whose carbon dioxide emissions are not captured and sequestered underground or offset through forest growth.
Last year at COP27 in Sharm el-Sheikh, Egypt, a proposal backed by 80 developed and developing countries (including the U.S. and the European Union) calling for the phasing down of all fossil fuels was not adopted. In his welcoming letter to the delegates, COP28 president Dr. Sultan Ahmed Al Jaber (who is also the head of the UAE's national oil company) declared, "Phasing down demand for, and supply of, all fossil fuels is inevitable and essential."
At COP28 the European Union plans to encourage all parties to agree on phasing out all unabated fossil fuels. However, back in September China's climate envoy Xie Zhenhua asserted, "It is unrealistic to completely phase out fossil fuel energy." Russia also is against any global agreement to phase out fossil fuels. Major oil and gas producer Saudi Arabia more artfully argues for phasing out emissions by capturing and sequestering them while maintaining the production and use of fossil fuels. A global pact to phase out fossil fuels at COP28 seems unlikely since agreements reached at U.N. climate change conferences must be achieved via consensus of all of the parties.
The post U.N.'s 28th Climate Change Conference Opens Next Week in Dubai appeared first on Reason.com.
]]>For years U.S.-China climate negotiations have been much like the running gag in the Rocky and Bullwinkle cartoon where Bullwinkle says "Hey Rocky, watch me pull a rabbit out of my hat." To which Rocky responds, "But that trick never works." Bullwinkle gamely replies, "This time for sure!"
The just-released Sunnylands Statement on Enhancing Cooperation to Address the Climate Crisis is the latest effort by the Biden administration's equivalent of Bullwinkle, Climate Envoy John Kerry, to pull off the trick of getting China to collaborate with the U.S. on climate change. The Sunnylands Statement follows earlier efforts by Kerry to obtain Chinese commitments on climate change in 2014 and 2015 when he was secretary of state during the Obama administration.
The new statement declares, "The United States and China recognize that the climate crisis has increasingly affected countries around the world." Consequently, both countries pledge that they "intend to sufficiently accelerate renewable energy deployment in their respective economies through 2030 from 2020 levels so as to accelerate the substitution for coal, oil and gas generation, and thereby anticipate post-peaking meaningful absolute power sector emission reduction."
"Anticipate post-peaking meaningful absolute power sector emission reduction"? In fact, China has not yet peaked in its power sector carbon dioxide emissions, nor has it done so with respect to its economy's overall emissions.
On the other hand, U.S. power sector carbon dioxide emissions have been "post-peak" for 15 years. U.S. power sector emissions peaked in 2007 at 2,422 million metric tons and are now down to 1,539 million metric tons. As of 2022, the switch from coal to less-carbon-intensive natural gas for electricity generation accounts for two-thirds of the drop in U.S. power sector emissions, whereas one-third results from increased generation from wind and solar power.
In contrast, Chinese power sector emissions have more than doubled from 2,230 million metric tons in 2007 to 4,695 million metric tons now. A significant proportion of this increase is the result of rising coal-powered generation since 2016, which is expected to reach a record high this year. In other words, China's power sector is certainly not yet "post-peak."
This divergence between American and Chinese emissions trends should not be a surprise. After all, the same John Kerry who negotiated the new Sunnylands statement also negotiated for the Obama administration the 2014 U.S.-China Joint Announcement on Climate Change. Nine years ago, China frankly stated that, for its economy as a whole, it "intends to achieve the peaking of CO2 emissions around 2030." Since 2014, China's overall carbon dioxide emissions have continued to increase, rising from about 10,000 million metric tons to a projected 11,470 million metric tons this year. No peaking yet. In contrast, the U.S. economy's overall carbon dioxide emissions peaked in 2007 at around 6,000 million metric tons and are forecast to be 4,971 million metric tons this year, a 17 percent drop.
A year later, the two countries issued the 2015 U.S.-China Joint Presidential Statement on Climate Change. In that statement, the U.S. pledged to "reduce CO2 emissions from the power sector to 32% below 2005 levels by 2030." As of 2022, those emissions are 36 percent below their 2005 levels, eight years ahead of the U.S.'s promised schedule.
In the 2015 statement, China merely committed that it "will lower carbon dioxide emissions per unit of GDP by 60% to 65% from the 2005 level by 2030." As of 2020, China's carbon dioxide emissions per unit of gross domestic product had actually fallen from 0.9 kg to 0.5 kg per unit of GDP, a 45 percent reduction since 2005. In comparison, U.S. emissions have fallen from 0.4 kg to 0.2 kg of carbon dioxide per unit of GDP, a 50 percent drop. It is worth noting that the U.S. produces more than twice as much value per unit of carbon dioxide emissions as China does.
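To see how those intensity figures cash out, here is a minimal arithmetic sketch using only the rounded kilograms-per-unit-of-GDP numbers quoted above; it is an illustration of the calculation, not a reanalysis of the underlying data.

```python
# Back-of-the-envelope check of the emissions-intensity comparison above,
# using the rounded figures quoted in the text (kg of CO2 per unit of GDP).

china_2005, china_2020 = 0.9, 0.5
us_2005, us_2020 = 0.4, 0.2

def pct_drop(before, after):
    """Percentage reduction from `before` to `after`."""
    return 100 * (before - after) / before

print(f"China intensity drop since 2005: {pct_drop(china_2005, china_2020):.0f}%")  # ~44%, the roughly 45 percent cited above
print(f"U.S. intensity drop since 2005:  {pct_drop(us_2005, us_2020):.0f}%")         # 50%

# Value produced per kg of CO2 is the reciprocal of intensity, so the U.S./China
# comparison works out to about 2.5x, consistent with "more than twice as much."
print(f"U.S. vs. China value per kg CO2: {(1 / us_2020) / (1 / china_2020):.1f}x")
```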
In the Sunnylands statement, both China and the U.S. vow to "accelerate" numerous "concrete actions, including practical and tangible collaborative programs." Maybe this time for sure, but past experience suggests that the gag is on us.
The post New U.S.-China Climate Deal: Same as the Old Deals appeared first on Reason.com.
]]>For the first time in 20 years, nine cases of locally acquired malaria have occurred this summer in the United States: seven in Florida, one in Texas, and one in Maryland. The feverish illness is caused by infection through the bites of mosquitoes carrying protozoan parasites.
The parasites were introduced to the Americas via colonization and the transatlantic slave trade and adapted to local mosquito species. "By 1850 malaria had become established in practically every settlement from New England westward to the Columbia River valley and from the southernmost part of Florida to the inland valleys of California," wrote the Tulane University parasitologist Ernest C. Faust in 1951. The U.S. Census Bureau reported that in 1850, malaria was responsible for 45.7 out of every 1,000 deaths nationally and 7.8 percent of deaths in the South.
The Communicable Disease Center declared malaria eradicated in this country in 1951. Today that entity, now known as the Centers for Disease Control and Prevention, typically reports around 2,000 cases of malaria annually contracted by travelers returning to the U.S. from abroad. The World Health Organization estimates that there were nearly 250 million cases and 620,000 deaths from malaria in 2021, concentrated in sub-Saharan Africa.
Back in 2014, in my article "Let's Play God," I asked, "Wouldn't it be great if scientists could genetically engineer mosquitoes to be immune to the malaria parasite, thus protecting people from that disease?" Nearly 10 years later, a team of biotechnologists associated with the University of California, Irvine; the University of California, Berkeley; and Johns Hopkins University report that they have achieved just that.
According to the researchers' July article in the Proceedings of the National Academy of Sciences, their malaria-carrying mosquito species received genes coding for anti-malaria proteins combined with a gene drive to spread the code quickly in wild mosquito populations that mate with the engineered insects. Normally, genes have a 50–50 chance of being inherited, but in this case, the gene drive systems increased the chance of inheriting the anti-malaria genes to upward of 99 percent. Applying an epidemiological model, the researchers calculated that releasing the bioengineered mosquitoes to interbreed with wild ones would reduce the incidence of human malaria infections by more than 90 percent within three months.
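To illustrate why biasing inheritance from 50 percent toward 99 percent matters so much, here is a toy sketch of how quickly a driven allele could spread under random mating. It is not the researchers' model; the starting frequency, generation count, and transmission probabilities are all illustrative assumptions.

```python
# Toy illustration of gene drive spread under random mating. All parameters
# are illustrative assumptions, not figures from the PNAS study.

def allele_frequency_over_generations(p0, transmission_bias, generations):
    """Track the frequency of a driven allele across generations.

    transmission_bias is the chance that a heterozygote passes on the driven
    allele: 0.5 corresponds to ordinary Mendelian inheritance, ~0.99 to a drive.
    """
    p = p0
    history = [p]
    for _ in range(generations):
        q = 1 - p
        # Homozygous carriers (frequency p^2) always transmit the allele;
        # heterozygotes (frequency 2pq) transmit it with the biased probability.
        p = p * p + 2 * p * q * transmission_bias
        history.append(p)
    return history

seed = 0.05  # assume engineered mosquitoes start at 5 percent of the population
print("Mendelian :", [round(x, 2) for x in allele_frequency_over_generations(seed, 0.50, 10)])
print("Gene drive:", [round(x, 2) for x in allele_frequency_over_generations(seed, 0.99, 10)])
# With 50-50 inheritance the frequency stays stuck at 0.05; with a 99 percent
# drive it sweeps to near fixation within about ten generations.
```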
Unfortunately, various Luddite activist groups seeking a global moratorium on gene drives have managed to tie up research and deployment with red tape, using United Nations Convention on Biological Diversity (CBD) procedures. A CBD technical committee is supposed to issue a risk assessment report on the technology in 2026, while hundreds of millions already know the real and present risks of life without it.
The post New Mosquitos Can Help Beat Malaria appeared first on Reason.com.
]]>Earth just recorded its hottest 12-month streak (from November 2022 to October 2023) with respect to the preindustrial (1850–1900) baseline, according to researchers at Climate Central. "Most of this warming, about 1.28ºC, results from human-induced climate change, with natural variation in the climate caused by processes such as the ongoing ocean-warming event El Niño contributing much less," says climate researcher Friederike Otto at Imperial College London, according to Nature.
The Copernicus Climate Change Service basically concurs. Its researchers report that October 2023 was the warmest October in its global temperature record, which goes back to 1940. The average surface air temperature was 0.85°C above the 1991–2020 average for October and 1.7°C above the average for the pre-industrial reference period. October 2023 "marked the fifth consecutive month of record temperatures globally," per Copernicus.
From January to October, 2023 ran 0.1°C warmer than the same 10-month average for 2016, making 2023 "currently the warmest calendar year on record, and 1.43°C warmer than the pre-industrial reference period," Copernicus reported.
The satellite temperature record maintained by University of Alabama in Huntsville researchers also shows that "the global atmospheric temperature anomaly increased slightly in October from the record value observed in September to +0.93°C (+1.67°F) above the 30-year average, setting a new anomaly record for the 45-year satellite era."
"October 2023 has seen exceptional temperature anomalies, following on from four months of global temperature records being obliterated," observed Samantha Burgess, deputy director of the Copernicus Climate Change Service. "We can say with near certainty that 2023 will be the warmest year on record, and is currently 1.43ºC above the preindustrial average."
The post The Last 12 Months Have Been the Hottest on Record appeared first on Reason.com.
]]>President Joe Biden issued a sweeping executive order last month aimed at imposing federal regulations on artificial intelligence (AI)—what Carl Szabo of the tech lobbying group NetChoice called an "AI red tape wishlist." Many observers fear that Biden's requirements could evolve into a centralized, innovation-stifling licensing scheme for new AI systems. As the R Street Institute's Adam Thierer notes, the executive order would "empower agencies to gradually convert [current] voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime."
That would be just peachy with Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.). Their "Bipartisan Framework for U.S. AI Act," introduced earlier this year, explicitly calls for a "licensing regime administered by an independent oversight body." This A.I. bureaucracy "would have the authority to audit companies seeking licenses and cooperating with other enforcers such as state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of AI."
The senators assert that their framework is necessary to hold AI companies liable when their models and systems breach privacy, violate civil rights, or cause other harms. But is it really?
Senate Majority Leader Chuck Schumer (D–N.Y.) hinted earlier this week at an alternative to top-down federal AI licensing. "Duty of care has worked in other areas, and it seems to fit decently well here in the AI model," he said at the AI Insight Forum on Wednesday.
Under product liability tort law, duty of care is defined as your responsibility to take all reasonable measures necessary to prevent your products or activities from harming other individuals or their property.
As Thierer observes, "What really matters is that AI and robotic technologies perform as they are supposed to and do so in a generally safe manner. A governance regime focused on outcomes and performance treats algorithmic innovations as innocent until proven guilty and relies on actual evidence of harm and tailored, context-specific solutions to it."
Common-law torts have a long history of tailoring just such context-specific solutions to the harms caused by new products and services.
In a 2019 report for the Brookings Institution, the UCLA legal scholar John Villasenor outlined how courts applying products liability law could foster the safe development of AI. For example, the makers of AI systems could be held liable if automated post-sale changes in their self-learning algorithms—algorithms aimed at improving performance—evolve in a manner that actually renders the product harmful, when it is reasonably foreseeable that a system might be supplied with "bad data" such that it evolves in harmful ways, and when users are engaging with an AI system in reasonably foreseeable ways. Basically, the idea is that the threat of lawsuits will encourage AI companies to make sure that their products are reasonably safe to use and that they carry warnings about potential dangers.
Of course, America's tort law system is notoriously costly and inefficient. The U.S. Chamber of Commerce Institute for Legal Reform's 2022 report calculated that costs and compensation in the tort system amounted to $443 billion in 2020, equivalent to 2.1 percent of U.S. GDP. But it's better than the top-down licensing alternative. The free market Competitive Enterprise Institute estimated in 2022 that federal regulations cost $1.927 trillion, amounting to 8 percent of GDP.
Thierer finds that common law can more flexibly address and solve any problems that may arise from the adoption of new AI tools. "Various court-enforced common law remedies exist that can address AI risks," he notes in an April study. "These include product liability; negligence; design defects law; failure to warn; breach of warranty; property law and contract law; and other torts. Common law evolves to meet new technological concerns and incentivizes innovators to make their products safer over time to avoid lawsuits and negative publicity."
Here's hoping that Schumer's observation means that he is eschewing calls for top-down AI licensing in favor of more flexible and innovation-friendly common-law governance.
The post Did Chuck Schumer Just Come Out Against Top-Down AI Licensing? appeared first on Reason.com.
]]>President Joe Biden yesterday issued a sweeping executive order aiming to impose federal regulation on the development of artificial intelligence technologies, such as large language models like ChatGPT. The executive order cites the emergency powers of the Korean War-era Defense Production Act as the justification for imposing federal regulation on AI technologies. As my Reason colleague Eric Boehm has pointed out, "the Defense Production Act has become a license for central planning." Taken as a whole, the new order amounts to federal central planning for artificial intelligence.
Among other things, the order will "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government," according to the White House. Specifically, the new federal AI regulators are supposed to oversee any "foundation model" that purportedly "poses a serious risk to national security, national economic security, or national public health and safety" by requiring that developers report to the secretary of commerce the results of extensive "red-team safety tests." Roughly speaking, foundation models are large language models like OpenAI's GPT-4, Google's PaLM 2, and Meta's Llama 2. Red-teaming is the practice of creating adversarial squads of hackers to attack AI systems with the goal of uncovering weaknesses, biases, and security flaws. As it happens, the leading AI tech companies—OpenAI, Google, Meta—have been red-teaming their models all along.
The National Institute of Standards and Technology is charged with setting up the additional safety standards with which the AI developers are supposed to comply. Complying with such reporting requirements will likely slow down the process of safety and security testing undertaken by Big Tech developers while at the same time driving out smaller competitors who cannot afford the costs of dotting regulatory i's and crossing bureaucratic t's. An even bigger worry is that the new AI safety testing orders will quickly evolve into the digital equivalent of the deadly slow hyper-precautionary FDA drug safety approval scheme.
It's hard to see how U.S. national defense can be enhanced by slowing down domestic AI innovation. After all, U.S. regulations will not apply to foreign competitors who will be able to catch up and surpass U.S. artificial intelligence developers hampered by bureaucratic fetters.
In addition, the executive order directs the Department of Commerce to develop techniques for watermarking the outputs of AI technologies. This means embedding information into photos, videos, audio clips, or text to let users know that they were generated by AI. As it happens, AI companies like OpenAI and Google are already doing that. Of course, scammers and propagandists will simply ignore watermarking when they create their misleading deepfakes.
Biden's order also directs various federal agencies to address the problem of AI "job displacement" and "job disruption." And doubtlessly, such a powerful suite of technologies will affect nearly everyone's work activities and prospects. But keep in mind the dire prediction back in 2014 that robots would steal one in three human jobs by 2025. Only just over a year to go, folks, and the U.S. unemployment rate is the lowest it's been since 1969.
On the plus side, Biden's executive order does instruct the Department of Homeland Security to "modernize immigration pathways for experts in AI and other critical and emerging technologies." This is always a good idea since such immigrants significantly boost U.S. technological progress, employment, and economic growth.
"White House executive order threatens to put AI in a regulatory cage," is how the free market R Street Institute characterized the Biden administration's regulatory proposals. In a statement, Carl Szabo, vice president and general counsel for the technology lobbying group NetChoice, warned that Biden's new executive amounts to an "AI red tape wishlist" that "will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation." He added that the executive order "puts any investment in AI at risk of being shut down at the whims of government bureaucrats."
Over at Forbes, Competitive Enterprise Institute Senior Fellow James Broughel glumly warns, "Biden's AI safety order could well be the biggest policy mistake of my lifetime."
The post Biden Issues 'A.I. Red Tape Wishlist' appeared first on Reason.com.
]]>This week's featured article is "Did Evolution Give Us Free Will?" by Ronald Bailey.
This audio was generated using AI trained on the voice of Katherine Mangu-Ward.
Music Credits: "Deep in Thought" by CTRL S and "Sunsettling" by Man with Roses
The post The Best of Reason: Did Evolution Give Us Free Will? appeared first on Reason.com.
]]>Free Agents: How Evolution Gave Us Free Will, by Kevin J. Mitchell, Princeton University Press, 352 pages, $29.95
What is free will? Can a being whose brain is made up of physical stuff actually make undetermined choices?
In Free Agents: How Evolution Gave Us Free Will, the Trinity College Dublin neuroscientist Kevin J. Mitchell argues that evolution has shaped living creatures such that we can push back when the physical world impinges upon us. The motions of nonliving things—air, rocks, planets, stars—are entirely governed by physical forces; they move where they are pushed. Our ability to push back, Mitchell argues, allows increasingly complex creatures to function as agents that can make real choices, not "choices" that are predetermined by the flux of atoms.
How can that be? After all, just like air and rocks, bacteria and sharks and aardvarks and people are made of physical stuff. Determinism holds that, per the causal laws of nature, the unfolding of the universe is inexorable and unbranching, such that it can have only one past and one future. Human beings do not escape the laws of nature, so any and all of our "choices" have been predetermined from the beginning of the universe.
This view poses a moral problem: How can people be held accountable for their actions if they had no choice but to behave the way they did?
Some determinist philosophers, known as compatibilists, hold that causal determinism is compatible with free will. Daniel Dennett, for example, argues that you are exercising your free will if, in the absence of external coercion, you are acting in accordance with your desires. As the 19th century philosopher Arthur Schopenhauer put it, "Man can do what he wills but he cannot will what he wills."
Mitchell is unconvinced. "I cannot escape feeling that some sleight of hand is part of this argument," he writes. "The primary problem has been circumvented or even denied, rather than confronted. We start on the terrain of particle physics but shift to arguments at the level of human psychology, all aimed not at whether organisms can choose their actions but at the different question of whether we can ascribe moral responsibility."
***
Determinists argue that all causes come from the bottom up. They say the interactions of particles and forces are "causally comprehensive," ultimately accounting for everything that happens in the realm of weather, rocks, amoebas, planets, and brains.
Against that view, physicist Roger Penrose and anesthesiologist Stuart Hameroff suggest the locus of free will can be situated in the randomness of quantum mechanics. Many philosophers say this argument does not work, since random quantum wave fluctuations decohere into particles that then grind deterministically on in accordance with the laws of classical physics.
Physicists Nicolas Gisin and Flavio Del Santo have offered a controversial challenge to fully causal determinism, arguing against the assumption of infinite precision in the measurement of classical physical systems. They contend it is impossible to cram an infinite amount of information into a finite space, that the state of any physical system is therefore indefinite, and that indeterminacy is thus not confined to the quantum realm.
Barbara Drossel, a theoretical physicist at the Technical University of Darmstadt, takes a similar position. "While physics underlies everything that happens in nature, it does not determine everything," she writes. "Physics is not causally closed and does not encompass everything that happens in our world."
Mitchell builds on those arguments, suggesting that indeterminacy is resolved as physical systems interact over time; the past becomes fixed while the future remains open. The future, he writes, is characterized "by indefiniteness at both the quantum and classical levels," and the present is "the time during which this indefiniteness becomes definite." Mitchell's aim is to trace how living organisms, from microbes to humans, evolved into agents able to make choices that resolve the universe's inherent indeterminacy.
Organisms, Mitchell says, are patterns of processes that detect and react to internal and external stimuli as they seek to persist. As a microbe actively explores its environment, impinging molecules are processed internally as information about sustenance or danger, thereby providing the organism as a whole with "reasons" to initiate approach or avoidance. "In reality," he explains, "these organisms integrate multiple signals at once, along with information about their current state and its recent history, to produce a genuinely holistic response that cannot be deconstructed into isolated parts."
If microorganisms can act as agents, then much more complex organisms, such as humans, must have an even greater scope for agency in the world. Mitchell describes the structure of human brains in great detail, showing how patterns of neurons instantiate meaning as they respond to sensory stimuli and to each other. Thus emerges top-down causation—what happens when patterns at a higher hierarchical level exert causal influence over a lower level by changing the context within which the lower-level actions take place.
"The choices the organism makes based on parameters set at high level filter down," Mitchell writes, "to change the criteria at lower levels, thereby allowing the organism to adapt to current circumstances, execute current plans, and achieve current goals. In this way, abstract entities like thoughts and beliefs and desires can have causal influence in a physical system."
***
To be clear, Mitchell does not believe our choices are absolutely free from any prior causes. We are all constrained by our genes, our histories, our psychological traits, and our developed characters. Instead of radical metaphysical freedom, Mitchell persuasively develops a more modest conception of free will that entails the evolved ability to make real choices in the service of our goals—that is, to act for our own reasons.
This carefully argued, information-dense book will put a dent in any intellectual predilection toward determinism that some readers may have. It certainly did mine. But perhaps it was inevitable that it would do so.
The post Did Evolution Give Us Free Will? appeared first on Reason.com.
]]>This week's featured article is "Take Nutrition Studies With a Grain of Salt" by Ronald Bailey.
This audio was generated using AI trained on the voice of Katherine Mangu-Ward.
Music Credits: "Deep in Thought" by CTRL S and "Sunsettling" by Man with Roses
The post The Best of Reason: Take Nutrition Studies With a Grain of Salt appeared first on Reason.com.
]]>The controversy over polyunsaturated seed oils is in some respects the mirror image of the fight over saturated fats in meat, milk, and eggs. Omega-3 and omega-6 fatty acids are essential fatty acids: they are necessary for health yet cannot be synthesized in the body, so they must be supplied by food. Both act as structural components in cellular membranes and modulate inflammatory responses.
The three main omega-3 fatty acids are alpha-linolenic acid (ALA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA). The principal sources of omega-3 fatty acids are oily fish, flaxseed oil, and nuts like walnuts. The chief omega-6 fatty acid is linoleic acid. The prime sources of linoleic acid in modern diets are seed oils including soybean, corn, cottonseed, sunflower, canola, safflower, rice bran, and grapeseed oils. The use of these oils has increased in modern diets, and some self-proclaimed health and wellness gurus have dubbed them the "hateful eight."
The main contention by these health gurus is that the modern dietary "balance" between omega-3 and omega-6 essential fatty acids is out of whack, resulting in a host of alleged bad effects on health. For decades, American physician and endocrinologist Artemis Simopoulos has been one of the chief proponents of the idea that a present-day imbalance between omega-3 and omega-6 fatty acids in the contemporary diet is the source of many modern ills. Simopoulos appears to have first encountered this hypothesis while attending the 1988 NATO Advanced Research Workshop on Dietary Omega-3 and Omega-6 Fatty Acids in Italy.
Simopoulos became the co-editor of the proceedings volume for that conference, publishing a paper that suggested that the optimal ratio for intakes of omega-3 and omega-6 fatty acids was 1 to 5-6, based on a 1985 study published in Lipids in which a team of French researchers fed 24 nuns different proportions of the two nutrients for five months. The proceedings volume also noted that "the current estimate of this ratio in the western diet is 10-11/1. Evidence based on estimates from paleolithic nutrition and from terrestrial animals (mammals) in the wild indicate a ratio of omega-6 to omega-3 to be 1 to 1 in the diet."
An earlier 1988 summary of that same conference in the Journal of the American Oil Chemists' Society noted that "some conference participants felt that the dietary ratio of omega-6 to omega-3 fatty acids of about four-to-one might be desirable." In her 1989 summary article in the Journal of Nutrition, Simopoulos cited the paleolithic ratio estimate but also reported that conference "participants could not agree on either a recommendation for omega-3 fatty acid intake as a percent of dietary calories or on the ratio of omega-6 to omega-3 in the diet."
However, Simopoulos had made up her mind. In her 1991 review article in the American Journal of Clinical Nutrition, she again asserted that the recent big increase in the availability of seed oils had created evolutionarily suspect "imbalances between omega-6 and omega-3" fatty acids. Simopoulos suggested that this "imbalance" significantly contributed to coronary artery disease, inflammatory disorders, and cancer. She later popularized her theories about the alleged ill effects of increased seed oil consumption in her co-authored 1997 diet book, The Omega Plan: The Medically Proven Diet That Restores Your Body's Essential Nutritional Balance. In the book, she asserted that the "hidden imbalance" between omega-6 and omega-3 fatty acids "makes you more vulnerable to heart disease, cancer, obesity, inflammations, autoimmune diseases, allergies, diabetes and depression—all of the so-called diseases of civilization."
It is worth noting that the exercise physiologist Loren Cordain, chief originator of the paleo diet, participated in a number of nutrition and fitness conferences overseen by Simopoulos in the 1990s. For example, he was one of the promulgators of the 1996 Declaration of Olympia on Nutrition and Fitness that among other things recommended that "nutrient intakes should more closely match human evolutionary heritage," specifically mentioning the role of "essential fatty acids."
In 1997, Cordain was co-author of an article on evolutionary aspects of diet in World Review of Nutrition and Dietetics edited by Simopoulos in which they observed, "The ratio of omega-6 to omega-3 PUFA (polyunsaturated fatty acids) is estimated to have been far lower for preagricultural humans than for Americans." He and his co-author then speculated that this higher ratio of omega-6 to omega-3 fatty acids "may have important physiological consequences." The authors modestly concluded, "Paleonutrition is an intellectually appealing, but unproved, dietary paradigm."
In 1998, Cordain was a co-author of an article focused on fatty acids during the Paleolithic that was published in the World Review of Nutrition and Dietetics, again edited by Simopoulos. While noting that "evolutionary considerations are not (yet) a basis upon which to make nutritional recommendations," Cordain and his co-authors nevertheless went on to observe that with respect to essential fatty acids, "current intake clearly differs from that of our ancestors: preagricultural humans generally consumed omega-6 and omega-3 PUFA [polyunsaturated fatty acids] in roughly equal amounts." Cordain and his co-authors then added, "This pattern fueled the emergence and development of our genus; evolutionary considerations commend its restoration."
And "commend its restoration" Cordain certainly did in his 2002 blockbuster The Paleo Diet. There Cordain endorsed Simopoulos' claims about the health harms of "imbalanced" omega fatty acids. He asserted that "the ratio of omega-6 to omega-3 fats in Paleo diets was about 2 to 1; for the average American, the ratio is much too high—about 10 to 1. Eating too many omega-6 fats instead of omega-3 fats increases your risk of heart disease and certain forms of cancer; it also aggravates inflammatory and autoimmune diseases."
For what it's worth, a 2018 study in Lipids in Health and Disease confirmed the fatty acid ratio of the modern American diet when it reported that the omega-6 to omega-3 ratio for U.S. children and older adults averages 9 to 1 and 8 to 1, respectively.
Keep firmly in mind that with respect to the problematic subject of nutritional epidemiology, no prior claims about the harms or benefits of any nutrient ever fully disappear. For example, alternative medicine proponent Joseph Mercola is a co-author of the 2023 narrative review in Nutrients that outlines research that purports to demonstrate the deleterious health effects of consuming linoleic acid.
However, as you will see in the main article, the bulk of recent research has not been kind to Simopoulos' assertion that the supposedly imbalanced consumption of linoleic acid found in seed oils "makes you more vulnerable to heart disease, cancer, obesity, inflammations, autoimmune diseases, allergies, diabetes and depression." On the contrary, most research finds that consuming seed oils reduces the risks of these maladies.
The post How Seed Oils Were Demonized appeared first on Reason.com.
]]>Comb through enough nutrition research, and you can find a study confirming or rebutting nearly every belief you may hold about how specific nutrients affect your health. "Meat Increases Heart Risks, Latest Study Concludes," reported The New York Times in 2020. A year earlier, the Times ran this headline: "Eat Less Red Meat, Scientists Said. Now Some Believe That Was Bad Advice."
Pick a different food group and find a similar contradiction. "Moderate Drinking Has No Health Benefits, Analysis of Decades of Research Finds," reported the Times in April 2023. Two months later, Forbes declared: "Light And Moderate Drinking Could Improve Long-Term Heart Health Study Finds—Here's Why."
These headlines were not misrepresentations. Nutritional epidemiology is, by and large, what Stanford University biostatistician John Ioannidis calls a "null field": one where there is nothing genuine to be discovered and no genuinely effective treatments exist.
"I think almost all nutrition studies that pertain to the effects of single nutrients on mortality, cancer, and other major health outcomes are null or almost null," says Ioannidis. "Even the genuine effects seem to have very small magnitude in the best [and] least biased studies."
When it comes to public policy, most nutritional epidemiologists are unclothed emperors ordering the rest of us around or lobbying lawmakers to do it for them.
This doesn't mean you can eat an entire pizza, a quart of ice cream, and six beers tonight without some negative health effects. (Sorry.) It means nutritional epidemiology is a very uncertain guide for how to live your life, and it certainly isn't fit for setting public policy.
In short, take nutrition research with a grain of salt. And don't worry: Even though the World Health Organization (WHO) says "too much salt can kill you," the Daily Mail noted in 2021 that "it's not as bad for health as you think."
Back in 2019, Ioannidis called nutritional epidemiology "a field that's grown old and died. At some point, we need to bury the corpse and move on to a more open, transparent sharing and controlled experimental way." He expressed particular concern that nutritional research findings are largely derived from observational studies, which are essentially surveys. In other fields of health science, hypotheses are tested with strictly supervised randomized controlled trials that are designed to filter out the inherent noise in observational data.
Drawing firm conclusions from weak data is the original sin of nutritional epidemiology. During the 1950s, the legendary American physiologist Ancel Keys more or less launched the suspicion that eating steaks and hamburgers caused heart disease. Keys and his colleagues hypothesized that cardiovascular diseases were becoming more common because the saturated fats found in red meat and dairy products were boosting levels of serum cholesterol. In his 1957 article "Diet and the Epidemiology of Heart Disease," Keys recommended the "exclusion of saturated fats (in butterfats and meat fats)" as a way to lower serum cholesterol levels. He conversely noted vegetable fats such as corn oil and cottonseed oil had the beneficial effect of reducing serum cholesterol.
Keys based his conclusions on observational data including a positive correlation (reported in his 1953 article, "Atherosclerosis: A Problem In Newer Public Health") between estimates of the amount of fats consumed per capita in six countries and their rates of diagnoses of "degenerative heart disease." He also pointed to studies that reported a correlation between high levels of serum cholesterol and the presence of atherosclerosis. In addition, Keys cited the results of randomized controlled trials he and his colleagues conducted using cohorts of schizophrenic men in Hastings State Hospital. The subjects were, during periods lasting between three and six months, fed diets of varying levels and types of fats. They reported in 1957 that saturated fats in meat and milk boosted their subjects' overall cholesterol levels whereas vegetable oils, corn oil in particular, tended to lower them.
As with many examples of "bad" science, Keys' claims had some basis in fact. There are different types of fats. Some fats are "saturated" with hydrogen atoms; others have one double bond of carbon atoms (monounsaturated) or more (polyunsaturated) in their structure. Generally, saturated fats are solid (lard, cheese, and butter) at room temperature, whereas unsaturated fats (canola, olive, soy, and corn oils) are liquid.
But Keys' campaign—and those it inspired—treated poorly tested hypotheses as settled science. In 1979, the surgeon general recommended Americans eat "less saturated fat and cholesterol, less salt, less sugar," and "less red meat." As recently as July 2023, the WHO issued guidelines warning against consuming saturated fatty acids "because high levels of intake have been correlated with increased risk of CVDs [cardiovascular diseases]."
Today's datasets are possibly even noisier than those of the 1950s. A quick series of searches in Google Scholar combining the terms "red meat," "dairy," and "eggs" with "cardiovascular" finds more than 68,200, 308,000, and 154,000 studies, respectively, and they don't all say the same thing. You can easily turn up numerous studies on either side of the question for each of those foods.
Is the picture clearer with meta-analysis? Yes and no.
A meta-analysis is a study of past studies. By aggregating studies, the ambitious epidemiologist hopes to tease out a real effect. Often meta-analyses clarify what the data say, and sometimes they simply tell us we can't trust the data.
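Because so much of what follows turns on meta-analyses, it may help to see the simplest version of that aggregation step. The sketch below pools a handful of hypothetical study estimates with fixed-effect inverse-variance weighting; the numbers are invented for illustration and do not come from any study cited here.

```python
import math

# Fixed-effect, inverse-variance pooling: the simplest way a meta-analysis
# combines studies. Each study supplies an effect estimate (here, a log relative
# risk) and a standard error; more precise studies get more weight.
# The numbers below are invented purely for illustration.

studies = [
    # (log relative risk, standard error)
    (math.log(1.10), 0.08),
    (math.log(0.97), 0.05),
    (math.log(1.04), 0.12),
]

weights = [1 / se**2 for _, se in studies]
pooled_log_rr = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

# 95% confidence interval back on the relative-risk scale
low, high = (math.exp(pooled_log_rr - 1.96 * pooled_se),
             math.exp(pooled_log_rr + 1.96 * pooled_se))
print(f"Pooled RR: {math.exp(pooled_log_rr):.2f} (95% CI {low:.2f}-{high:.2f})")
```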
For example, a controversial 2019 meta-analysis published in Annals of Internal Medicine "found low- to very-low-certainty evidence that reducing unprocessed red meat intake by 3 servings per week is associated with a very small reduction in risk for cardiovascular mortality, stroke, myocardial infarction (MI), and type 2 diabetes." It concluded that reduced consumption of processed meats had similarly equivocal effects on cardiovascular health. A companion meta-analysis of just randomized controlled trials by some of the same researchers "found only low- to very-low-certainty evidence that diets lower in red meat compared with those higher in red meat have minimal or no influence on all-cause mortality, cancer mortality, cardiovascular mortality, myocardial infarction, stroke, diabetes, and incidence of gastrointestinal and gynecologic cancer."
The Annals of Internal Medicine meta-analysis concluded that "findings from our review raise questions regarding whether—on the basis of possible adverse effects on cardiometabolic outcomes—the evidence is sufficient to recommend decreasing consumption of red and processed meat."
Naturally, the contrarian Annals study was immediately challenged. "It's the most egregious abuse of data I've ever seen," Harvard nutritional epidemiologist Walter Willett told Medical Daily.
But in 2020, the Cochrane Library issued a systematic review of studies assessing the health effects of reducing saturated fats—that is, replacing animal fats and hard vegetable fats with plant oils, unsaturated spreads, or starchy foods. This review reported that "reducing saturated fat intake probably makes little or no difference" to all-cause mortality, cardiovascular mortality, nonfatal myocardial infarction, and coronary heart disease mortality. The authors nevertheless concluded that the studies they analyzed "provide moderate-quality evidence that reducing saturated fat reduces our risk of cardiovascular disease."
Other researchers have gone further. In 2022, scientists associated with the University of Washington's Institute for Health Metrics and Evaluation unveiled some techniques they had developed to correct for the uncertainties and biases in the studies being evaluated. Their study, published in Nature Medicine, reported "weak evidence of associations between unprocessed red meat consumption and colorectal cancer, breast cancer, IHD [ischemic heart disease] and type 2 diabetes." No association with strokes was identified. The evidence was so uncertain that they concluded, "While there is some evidence that eating unprocessed red meat is associated with increased risk of disease incidence and mortality, it is weak and insufficient to make stronger or more conclusive recommendations."
If nothing else, those researchers agree on one thing: The available evidence is insufficient to recommend reducing meat consumption. But not everyone even agrees about that: In 2023, Critical Reviews in Food Science and Nutrition published a meta-analysis concluding that "unprocessed red and processed meat might be risk factors for IHD [ischemic heart disease]. This supports public health recommendations to reduce the consumption of unprocessed red and processed meat intake for the prevention of IHD."
Note that word might, followed by a much more confident assertion that public health Cassandras should continue to warn people away from meat. A charitable interpretation of the study would be the authors recommend a cautious approach to meat not entirely supported by the evidence because meat might be bad, even if they can't prove it. It's not an immoral decision per se, but it's also not science—and it certainly doesn't justify anti-meat public policy.
Some of the authors of the Annals of Internal Medicine meta-analysis of red meat and cardiovascular mortality have also examined potential links between red and processed meats and cancer. Unlike most nutritional epidemiological studies, this one helpfully translates the relative risks reported into absolute risks.
They calculate that a weekly reduction of three servings of unprocessed meat will reduce a person's overall lifetime population risk of cancer from 105 per 1,000 to 98 per 1,000. Parsing three breast cancer studies, they calculate that a person's overall lifetime population risk will fall from 46 per 1,000 to 40 per 1,000. For prostate cancer (drawing on two studies), the absolute risk falls from 38 per 1,000 to 37 per 1,000. For colorectal cancer (five studies), they find that there is no absolute risk reduction. They also estimate that cutting the consumption of processed meats by three servings per week will reduce the absolute lifetime risk of cancer by roughly the same amount. These findings track those reported in the 2022 Nature Medicine study cited above.
The team concludes: "Our systematic review and meta-analysis of cohort studies supports the association between red and processed meat intake and increased risk for cancer. The magnitude of red meat's effect on cancer over a lifetime of exposure was, however, very small, and the overall certainty of evidence was low or very low."
But nutritional epidemiologists are nothing if not dogged in the pursuit of uncovering tiny effects. A 2021 meta-analysis in the European Journal of Epidemiology found that eating red meat and processed meats was positively associated with risk of breast, colorectal, colon, rectal, and lung cancers. But the relative risks for each were not much different than those reported in the Annals of Internal Medicine meta-analysis.
When a 2021 meta-analysis in the journal Nutrients looked at cancer risks, it found that "while relative effects for red and processed meat may be positive and statistically significant, absolute effects are small (less than 1%)." It concluded that "the recommendation to reduce the consumption of processed meat and meat products in the general population seems to be based on evidence that is not methodologically strong."
With meat, the concessions have been gradual and reluctant. With dairy, the about-face has been far more dramatic.
For years, nutritional epidemiologists condemned dairy foods and eggs for their high saturated fat contents. For example, the doyen of nutritional epidemiology, Walter Willett, wrote in Science in 1994 that butter and other dairy fats boosted cholesterol, thus probably increasing the risk of coronary heart disease. Therefore, he argued, "saturated fats, particularly those from dairy sources, should be minimized."
Just 20 years later, based on an extensive meta-analysis of saturated fat studies in Annals of Internal Medicine, food writer Mark Bittman famously declared "butter is back." The researchers found "a possible inverse association" between consuming dairy products and coronary disease. In other words, drinking milk and eating butter actually tended to reduce the risk of heart disease.
Since 2014, the majority of nutritional epidemiological studies have found that consuming dairy products is at worst neutral and more likely slightly protective. For example, a March 2022 meta-analysis in Advances in Nutrition reported, "Total dairy consumption was associated with a modestly lower risk of hypertension, CHD [coronary heart disease], and stroke." A 2023 conference summary in the Proceedings of the Nutrition Society concluded: "The association between dairy foods and CVD [cardiovascular diseases] is generally neutral despite many of the dairy foods being the major source of SFA [saturated fatty acids] in many diets. This leads to substantial doubt concerning the validity of the traditional diet-heart hypothesis." Of course, one can turn up more recent studies, such as a 2022 meta-analysis in Critical Reviews in Food Science and Nutrition, that still say consuming high-fat dairy products is associated with cardiovascular disease risk.
The role of eggs with respect to cardiovascular disease is contested. A 2019 Journal of the American Medical Association meta-analysis concluded that "each additional half an egg consumed per day was significantly associated with higher risk of incident CVD and all-cause mortality." A 2021 cohort study in PLOS Medicine similarly found that "intakes of eggs and cholesterol were associated with higher all-cause, CVD, and cancer mortality." Contrariwise, a 2021 cohort analysis in BMJ reported that "no association was found between egg consumption and cardiovascular disease risk among US cohorts, or European cohorts, but an inverse association was seen in Asian cohorts." A May 2023 evaluation of recent evidence in Current Atherosclerosis Reports said that "most studies assessing egg consumption and CVD risk factors found a reduced risk or no association."
The cacophony of murky findings coupled with strong recommendations is not limited to solid foods. Earlier this year, the WHO declared "no level of alcohol consumption is safe for our health." It based the claim on studies that suggest drinking in any amount is associated with higher risks of various cancers.
A team of Italian statisticians contradicted the organization's proclamation in a July 2023 working paper. Their dive into the literature on alcohol's health effects found the field rife with methodological problems, including a huge bias toward positive results and a probably enormous underreporting of actual consumption in surveys of drinkers. They conclude that "given the methodological limitations in detecting the effects of modest alcohol quantities, from a scientific point of view it is incorrect to claim that 'there is no safe level.' We should rather say that 'we are unable to determine if there is a safe amount' and, likely, we will never be."
Nevertheless, since the 1980s, numerous epidemiological studies identified a U- or J-shaped curve—a graphical representation showing the risks for heart disease and overall mortality were lower for light to moderate drinkers than for nondrinkers and heavy drinkers. A June 2023 BMC Medicine study comparing nondrinkers and drinkers reconfirmed the existence of the J-shaped curve. "Compared with lifetime abstainers, current infrequent, light, or moderate drinkers were at a lower risk of mortality from all causes, CVD, chronic lower respiratory tract diseases, Alzheimer's disease, and influenza and pneumonia," it reported. But heavy and binge drinkers had a "higher risk of mortality from all causes, cancer, and accidents."
In a 2022 editorial in European Heart Journal Supplements, Andrea Poli, president of the Nutrition Foundation of Italy, highlighted the health tradeoffs between alcohol's cardiovascular benefits and cancer risks. The association of moderate consumption "with a reduced cardiovascular risk," Poli wrote, "seems to prevail over the increase in [cancer] risk, with the consequence that all-cause mortality is reduced as compared to abstainers." A 2015 study in Drug and Alcohol Review investigated the question of whether industry funding has biased studies of the protective effects of alcohol on cardiovascular disease. The researchers found "no evidence of funding effects for cardiovascular disease mortality, incident coronary heart disease, coronary heart disease mortality and all-cause mortality."
Tradeoffs are an underdiscussed concept in the public health literature, which generally fails to recognize that we are all entitled to balance a desire for a long life with a desire to enjoy living. So how does one weigh the cancer risks of drinking? The lifetime population risk of colorectal cancer is 22.5 per 1,000 people. (Of course, this includes people who drink alcohol, but let's use it as a baseline anyway.) A 2014 British Journal of Cancer article reported a relative risk of colorectal cancer of 1.17 for moderate to heavy drinkers, a 17 percent increase in risk over nondrinkers. That suggests that moderate to heavy drinking increases the lifetime risk of colorectal cancer from 22.5 to 26.3 per 1,000 people.
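For readers who want to see the arithmetic behind that translation from relative to absolute risk, here is a minimal sketch using only the figures quoted above:

```python
# Converting a relative risk into an absolute lifetime risk, using the
# baseline and relative-risk figures quoted in the text.

baseline_per_1000 = 22.5   # lifetime colorectal cancer risk, per 1,000 people
relative_risk = 1.17       # moderate-to-heavy drinkers vs. nondrinkers (2014 BJC figure)

risk_with_drinking = baseline_per_1000 * relative_risk
print(f"Lifetime risk for moderate-to-heavy drinkers: {risk_with_drinking:.1f} per 1,000")  # ~26.3
print(f"Absolute increase: {risk_with_drinking - baseline_per_1000:.1f} per 1,000 people")  # ~3.8
```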
Interestingly, a 2020 meta-analysis in the International Journal of Cancer identified a J-shaped relationship in which light, moderate, and even heavy drinking was actually associated with a lower risk of colorectal cancer compared to nondrinkers and very heavy drinkers.
"Salt," an unknown wit once said, "is what makes things taste bad when it isn't in them." The Centers for Disease Control and Prevention advises that "most Americans should consume less sodium" because "excess sodium can increase your blood pressure and your risk for a heart disease and stroke." Most of the sodium Americans consume comes in the form of sodium chloride, otherwise known as table salt. The Dietary Guidelines for Americans recommends that adults limit sodium intake to less than 2,300 mg per day—about 1 teaspoon of table salt. The American Heart Association's "ideal limit" of sodium intake for most adults is "less than 1,500 mg a day." Instead, Americans consume an average of 3,400 mg of sodium per day.
In other words, the official nutrition scolds want your food to taste bad, or at least bland.
Remember that the recommendation to cut back on salt is intended to apply populationwide. But more recent research shows individuals exhibit a range of responses to various doses of salt. By some estimates, about 25 percent of people are salt-sensitive, meaning that higher salt intakes tend to increase their blood pressure. Another 15 percent of the population is inverse salt-sensitive, meaning that low intakes of salt conversely increase their blood pressure. A 2023 study in the Journal of Hypertension tested the effects of 7-day low- and high-sodium diets on subjects with normal blood pressure. It found that about 13 percent were salt-sensitive, 11 percent were inverse salt-sensitive, and 76 percent were salt-resistant—that is, consuming salt did not significantly increase or decrease their blood pressures.
Unfortunately, no widely accessible clinical tests have been devised for establishing a "personal salt index" for individuals to let them know if they are salt-resistant, salt-sensitive, or inverse salt-sensitive.
Epidemiological studies focused on the health effects of salt consumption come to different conclusions. For example, a 2020 Cochrane Library review of the effects of low-sodium versus high-sodium diets on blood pressure analyzed 195 randomized controlled trials. It found that "a low- versus high-sodium diet in white people with normal blood pressure (BP) decreases BP less than 1%." Meanwhile, lower sodium intakes led to "a significant increase in plasma cholesterol and plasma triglyceride," which are associated with higher cardiovascular disease risk.
The upshot: The results did not support the idea "that sodium reduction may have net beneficial effects in a population of white people with normal BP." On the other hand, if you're a white person with elevated blood pressure, "sodium reduction decreases BP by about 3.5%, indicating that sodium reduction may be used as a supplementary treatment for hypertension." Lower-sodium diets did tend to reduce blood pressure a bit more in Asian and black subjects, though there hadn't been enough studies to reach separate conclusions for those groups.
In 2020, a comprehensive review in the European Heart Journal pointed to the growing evidence that the relation of sodium intake with cardiovascular events is, like that of alcohol, J-shaped. That is, both deficient and high sodium intakes are associated with greater mortality and cardiovascular disease risks. The authors conclude that at the population level, moderate sodium consumption—the equivalent of about 1 to 2 teaspoons of salt daily—has been "consistently associated with lower cardiovascular risk, compared to both high and low sodium intake." A 2021 commentary in the Journal of Hypertension noted that at this point, "the 'J-shape hypothesis' cannot yet be either neglected or verified."
A 2021 study in the European Heart Journal tested the hypothesis that high salt consumption was a risk factor for cardiovascular disease and premature death. The authors found that "daily sodium intake correlates positively with healthy life expectancy at birth and healthy life expectancy after age 60 and inversely with all-cause mortality in 181 countries worldwide." They concluded that consuming a moderate range of salt (1 to 2 teaspoons daily) is not associated with increased cardiovascular risk. The American average of 3.4 grams of sodium a day is within that range.
The researchers add that their results are population averages, and that individuals will want to tailor their salt consumption to their specific health circumstances. The best evidence is that people with hypertension should cut back on salt, but whether people with normal blood pressure should is not a settled issue.
If we can't trust the epidemiological establishment, it might stand to reason that we can trust dissenters. Unfortunately, heterodox researchers also have biases.
The controversy over polyunsaturated seed oils is the mirror image of the fight over saturated fats in meat, milk, and eggs. Omega-3 and omega-6 fatty acids are essential fatty acids. These molecules are necessary for health but can't be synthesized by the body, so we must get them from food. Both fatty acids act as structural components in cellular membranes and modulate inflammatory responses. The principal sources of omega-3 fatty acids are oily fish, flaxseeds, and some nuts. The chief omega-6 fatty acid is linoleic acid. The prime sources of linoleic acid in modern diets are oils derived from soybean, corn, cottonseed, sunflower, canola, safflower, rice bran, and grapeseed.
These oils have increased in modern diets, and some health and wellness gurus have dubbed them the "hateful eight." Their main contention is that the modern dietary "balance" between omega-3 and omega-6 essential fatty acids is out of whack, resulting in a host of alleged bad effects on health.
Most recent research has not been kind to these claims. A 2020 meta-analysis in The American Journal of Clinical Nutrition reported that higher linoleic acid intake is "associated with a modestly lower risk of mortality from all causes, CVD, and cancer." A 2020 narrative review in Atherosclerosis found it likely that "both dietary intake and circulating concentrations of [linoleic acid] inversely correlate with cardiovascular disease risk." A 2020 review article in The Lancet Diabetes & Endocrinology concluded that plant oils with lots of linoleic acid "seem to be moderately protective" against coronary heart disease, especially myocardial infarction. That same review reported that a several-fold higher omega-6 to omega-3 ratio "has no adverse effects on either multiple markers of inflammation or oxidative stress." Nor was there any "evidence to suggest an important role of the omega-6 to omega-3 ratio on glucose metabolism." (The latter is relevant to the risk of developing diabetes.)
As for inflammation, a 2012 review of randomized controlled trials in the Journal of the Academy of Nutrition and Dietetics reported that "virtually no evidence is available from randomized, controlled intervention studies among healthy, noninfant human beings to show that addition of LA [linoleic acid] to the diet increases the concentration of inflammatory markers." A 2017 meta-analysis of randomized controlled trials in Food & Function concluded that consuming more linoleic acid "does not have a significant effect on the blood concentrations of inflammatory markers."
And last year, a systematic review in Food Science and Biotechnology concluded that omega-6 fatty acids "have beneficial effects on cancers, blood lipoprotein profiles, diabetes, renal disease, muscle function, and glaucoma without inflammation response."
Nutritional epidemiology as practiced currently is mostly bunk.
"Nutritional epidemiologists valiantly work in an important, challenging frontier of science and health," Ioannidis generously observes in his 2019 article titled "Unreformed nutritional epidemiology: a lamp post in a dark forest" in the European Journal of Epidemiology. "However, methods used to-date (even by the best scientists with best intentions) have yielded little reliable, useful information." As an example, Ioannidis specifically cites the prevailing recommendation to eat less red meat as one of the many "'classics' of nutritional guidelines" that are based on "mostly weak evidence and small (or null) effects." As Ioannidis argued in BMJ in 2013, "almost every single nutrient imaginable has peer reviewed publications associating it with almost any outcome."
In the meantime, the recommendations to "eat this; not that" derived from nutritional epidemiology fervently promoted by nutrition "experts" and the media confuse and frustrate regular folks. They also encourage policy makers and regulators to meddle with what people want to eat. The researchers in the 2019 Annals of Internal Medicine meat study pointed out, "For the majority of individuals, the desirable effects (a potential lowered risk for cancer and cardiometabolic outcomes) associated with reducing meat consumption probably do not outweigh the undesirable effects (impact on quality of life, burden of modifying cultural and personal meal preparation and eating habits)." In other words, the very weak evidence that eating meat might harm their health is most likely counterbalanced by most omnivores' preferences to continue eating steaks and hot dogs.
Ioannidis concludes that nutritional epidemiology as currently practiced is rife with "fervent allegiance beliefs and group-think." Consequently, many, if not most, of the observed effects reported by nutritional epidemiologists largely reflect the magnitude of the biases prevailing among the field's researchers.
So enjoy the pleasures of drink and of the table in moderation, while keeping in mind English poet Alexander Pope's astute observation: "What some call health, if purchased by perpetual anxiety about diet, isn't much better than tedious disease."
The post Take Nutrition Studies With a Grain of Salt appeared first on Reason.com.
Globally, this year brought the hottest June, July, August, and now September since modern instrumental surface temperature records began in the 19th century. Zeke Hausfather, a climate researcher associated with the ecomodernist Breakthrough Institute, noted that the global average temperature for September reported by the Japanese Reanalysis (JRA-55) of global temperature trends "beat the prior monthly record by over 0.5C, and was around 1.8C warmer than preindustrial levels."
Similarly, Europe's Copernicus Climate Change Service reports that its ERA5 reanalysis calculates that the average surface air temperature for last month was "0.93°C above the 1991-2020 average for September and 0.5°C above the temperature of the previous warmest September, in 2020." Combined with earlier surface temperature warming, the result is that September "as a whole was around 1.75°C warmer than the September average for 1850-1900, the preindustrial reference period." In addition, Copernicus researchers note that "the global mean temperature for 2023 to date [January through September] is 1.40°C higher than the preindustrial average (1850-1900)."
Climate reanalyses, like the JRA-55 and the ERA5, combine weather computer models with vast compilations of historical weather data derived from surface thermometers, weather balloons, aircraft, ships, buoys, and satellites. The goal of assimilating and analyzing these data is to reconstruct past weather patterns in order to detect changes in climate over time. Since climate reanalyses incorporate data from a wide variety of sources, they must be adjusted when biases are identified in those data.
Satellite temperature data trends basically mirror those of the JRA-55 and ERA5 datasets. University of Alabama in Huntsville climatologist Roy Spencer reports that the "global average lower tropospheric temperature (LT) anomaly for September, 2023 was +0.90 deg. C departure from the 1991-2020 mean." Spencer adds that this "establishes a new monthly high temperature record since satellite temperature monitoring began in December, 1978."
Consider that 6,500 years ago, during the earlier warmest post–ice age period, which climatologists call the "Holocene thermal maximum," average global temperatures are estimated to have been around +0.7C above the 19th-century average. A 2023 review article in Nature concluded, "Proxy evidence reported in several studies indicates that GMST [global mean surface temperature] was roughly 0.5 °C higher during this millennial-scale period [6,500 years ago] compared with 1850–1900, with most of the warming occurring at middle to high latitudes in the Northern Hemisphere."
Citing other global surface temperature datasets, Hausfather estimates that September 2023 temperatures will fall "somewhere between 1.7C (HadCRUT5) and 1.8C (Berkeley Earth) above the 1850-1900 average."
Some researchers have suggested that the current big boost in global average temperature is related to the massive amount of vaporized seawater that the January 2022 explosion of the undersea Hunga Tonga–Hunga Ha'apai volcano injected into the atmosphere. After all, water vapor is the principal greenhouse gas that warms the Earth's atmosphere. A January study in Nature Climate Change calculated that the extra water vapor would boost average global temperatures by up to 0.035 degrees Celsius over the next five years. But another study in the September Geophysical Research Letters estimated that in 2022 the sulfur dioxide plume from the volcano cut the amount of sunlight reaching the surface and thus cooled down average surface temperatures in the southern hemisphere by around 0.037 degrees Celsius. In other words, the 2022 volcanic eruption is having a relatively minor effect on current global temperatures.
It's worth comparing the recent global average temperatures to the Paris Climate Agreement's goal of "holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels." However, global weather fluctuates in response to various natural climatological phases such as the El Niño Southern Oscillation (ENSO) in the eastern Pacific Ocean. As it happens, a developing El Niño is warming the Pacific Ocean, thus contributing to the increase in current global surface temperatures. When the current El Niño wanes, average global temperatures are likely to fall back below the Paris Agreement's aspirational long-term 1.5-degrees-Celsius global average temperature threshold later this decade.
The upshot is that 2023 is highly likely to be the hottest year in the modern instrumental record.
The post This Might Be the Hottest Year You've Ever Experienced appeared first on Reason.com.
Former New Jersey Gov. Chris Christie had a good pro-progress answer at Wednesday's Republican presidential debate. Asked what he would do about the "22 percent of American workers [who] fear their jobs will be lost to a robot…and to artificial intelligence," Christie gave a reply that avoided anti-tech fear-mongering.
"What I think artificial intelligence offers us is an extraordinary opportunity to expand well beyond the productivity that we have now," he said. He rightly added: "We can't be afraid of innovation. America has been the great innovator of this world over the last 250 years, a technological innovator, a manufacturing innovator, and a freedom and governmental innovator. And that's why America has to continue to stand strong in the world, pro-innovation [and] pro-progress."
Christie argued that new technologies such as artificial intelligence have expanded "all kinds of new, even unthought-of opportunities for folks." He acknowledged that technological unemployment does occur, but rather than taking that as an argument against innovation he saw it as a reason to retrain workers for the new jobs that progress produces.
Importantly, Christie also declared that he would reduce the regulatory burdens that slow down new technologies and the economic growth they produce. "What I will do," Christie stated, "is to make sure that every innovator in this country gets the government the hell off its back and out of its pocket so that it can innovate and bring great new inventions to our country that will make everybody's lives better."
The post Chris Christie for Robots and Tech Progress appeared first on Reason.com.
President Joe Biden is making a "big bet on place-based industrial policy," writes Brookings Institution senior fellow Mark Muro. Muro and his colleagues argue that the initiative aims to address the fact that "many of the nation's towns and regions struggle under the weight of economic stagnation and social decline."
The size of the bet is around $80 billion in various industrial subsidies. It is unlikely to pay off as advertised.
These direct subsidies contrast with earlier federal place-based economic development programs, which chiefly used tax credits to encourage investment in poor urban neighborhoods and rural regions. Most research on those programs—which include New Markets Tax Credits (created by President Bill Clinton), Empowerment Zones (George W. Bush), and Opportunity Zones (Donald Trump)—indicates that they have had a negligible impact.
In a 2019 Regional Science and Urban Economics study, for example, University of California, Irvine economists David Neumark and Timothy Young found that so-called enterprise zones "have for the most part been ineffective at reducing urban poverty or improving labor market outcomes in the United States." That conclusion, they said, jibed with "the more widely prevailing view."
The conclusions of a 2023 working paper by University of Iowa finance professor Jiajie Xu were even less promising. Xu found that the Opportunity Zone program actually "led to a decrease in new business formation" and "negatively affected local employment" while having "little impact on attracting population inflows or reducing income inequality." Why? Likely because "the policy drove more private investments to existing firms, deterring potential entrepreneurs from entering and competing with the better-financed incumbents."
Ineffective as they were, the earlier place-based programs were at least directly aimed at locations with few jobs and high levels of poverty. The median annual household income for Opportunity Zones was around $33,000 initially. The subsidies that Biden has championed are less carefully targeted.
"Every American willing to work hard" should be able "to raise their kids on a good paycheck and keep their roots where they grew up," Biden declared in June. "That's Bidenomics." As an example, he cited new semiconductor fabs where workers without college degrees could make six figures.
But those fabs are not being built in the poorest parts of America. Nearly half of the $80 billion in place-based funding is targeted at semiconductor plants as authorized by the CHIPS and Science Act. Many of the companies that will receive the money announced the construction of new plants months before Biden signed that law in August 2022, and they are locating their facilities in places that make sense for their businesses.
In September 2021, for example, Intel said it was building two new fabs in Chandler, Arizona. The following January, the company unveiled plans for another two fabs in New Albany, Ohio. The median household income is $91,000 in Chandler and $206,000 in New Albany. The median household income in the U.S. stands just shy of $71,000, while the poverty threshold is just under $28,000 for a family of four.
Some new fabs are being built in towns with median household incomes below the national average. But the poorest of these is Sherman, Texas, future home of a new Texas Instruments fab, where the median household income is $54,000.
Biden's place-based programs, in short, are not really designed for helping Americans "keep their roots" in places that still "struggle under the weight of economic stagnation and social decline."
The post Subsidies Won't Stop Stagnation appeared first on Reason.com.
Government-imposed price controls on goods and services always lead to shortages. For example, economic research has consistently shown that rent control results in less new housing construction. The Biden administration's imposition of price caps on prescription drugs under the provisions of the Inflation Reduction Act (IRA) will result in much the same thing: fewer new cures developed.
The IRA gives the Department of Health and Human Services (HHS) the authority to negotiate prices on select prescription drugs covered under Part B and Part D of Medicare. The government's "negotiated" prices actually amount to little more than extortion. If a pharmaceutical manufacturer does not comply with the government's negotiated price, it faces a choice between an excise tax that eventually rises to 95 percent of its product's sales in the U.S. or the withdrawal of all of its drugs from Medicare coverage.
The IRA directs HHS to negotiate prices for 10 drugs in 2026, 15 drugs in 2027, 15 more in 2028, and 20 drugs in 2029 and every year afterward. HHS will publish the list of maximum fair prices for these drugs two years before the prices go into effect.
On Tuesday, the Biden administration revealed the first 10 drugs that would be subject to government negotiation. They include the blood thinner Eliquis, the diabetes drug Jardiance, and the rheumatoid arthritis treatment Enbrel. In his statement, President Joe Biden cited an estimate by the Congressional Budget Office that drug price negotiations and inflation rebates will save taxpayers $160 billion by 2031. What he did not mention are the costs of more disease and death the price controls will cause as they slow the development of new pharmaceuticals.
A 2016 econometric study in Forum for Health Economics and Policy evaluated the long-term impact of price controls in Medicare Part D. Applying Veterans Health Administration drug pricing policies to Medicare, a team of researchers from the private health consultancy firm Precision Health Economics calculated that it would "save between $0.1 trillion and $0.3 trillion (US$2015) in lifetime drug spending for people born in 1949–2005." On the other hand, lower revenues for drug manufacturers would mean that they invest less in new pharmaceutical discovery and development. New drug introductions would fall by as much as 25 percent relative to the status quo, according to the researchers. As a consequence, they report, "life expectancy for the cohort born in 1991–1995 is reduced by almost 2 years relative to the status quo. Overall, we find that price controls would reduce lifetime welfare by $5.7 to $13.3 trillion (US$2015) for the US population born in 1949–2005." Allegedly "saving" $160 billion over the next 10 years doesn't look so good now, does it?
In their 2021 University of Chicago working paper, economists Tomas J. Philipson and Troy Durie analyzed how price controls proposed in the earlier and somewhat more stringent Lower Drug Costs Now Act would impact medical innovation. They calculated that the proposed drug price controls would lead to a 44.6 percent decline in pharmaceutical company research and development and 254 fewer new drug approvals between 2021 and 2039. They note that an earlier Council of Economic Advisers' analysis calculated that, owing to the reduction of new drug introductions stemming from the proposed price controls, 100 fewer new drugs would be developed, resulting in a loss of between 37.5 million and 100 million life-years for Americans by 2029. For comparison, a team of researchers estimated in 2022 that the COVID-19 pandemic had by then resulted in the loss of 9.7 million life-years in the U.S.
An obviously self-interested survey by the Pharmaceutical Research and Manufacturers of America found that IRA drug price controls were already impacting its members' research and development decisions. Specifically, 78 percent expected to cancel early-stage pipeline projects; 63 percent planned to shift research and development away from small molecule medicines; and 95 percent expected to develop fewer new uses for medicines because of the limited time available before being subject to government price setting.
The shift away from research and development on small molecule drugs (basically pills) occurs in part because the IRA grants only nine years before price controls can be imposed on them, whereas biologics (basically injectables) get 13 years. Also, drug companies have heretofore often introduced new drugs first to relatively small patient populations, e.g., those with rare diseases or late-stage illnesses, while continuing to research their efficacy for larger ones. Doing that now would start the nine-year countdown to price controls, so companies will likely delay introducing new drugs while they seek to identify the largest and most lucrative patient population.
The upshot: While the new price controls will make some drugs cheaper in the short run, Americans will be sicker and deader in the long run than they otherwise would have been.
The post The High Costs of Biden's Price-Controlled Drugs appeared first on Reason.com.
]]>"I have a dream that one day this nation will rise up and live out the true meaning of its creed: 'We hold these truths to be self-evident, that all men are created equal,'" declared the civil rights leader Martin Luther King, Jr., 60 years ago today.
In 1963, the United States was surely far from living out that creed. Addressing 250,000 people from the steps of a monument to the man who had issued the Emancipation Proclamation a century earlier, King pointed out that
the Negro still is not free. One hundred years later, the life of the Negro is still sadly crippled by the manacles of segregation and the chains of discrimination. One hundred years later, the Negro lives on a lonely island of poverty in the midst of a vast ocean of material prosperity. One hundred years later the Negro is still languished in the corners of American society and finds himself in exile in his own land.
In 1963, I was a 9-year-old about to enter my recently desegregated fourth grade in Washington County, Virginia. That was nine years after the Supreme Court's unanimous Brown v. Board of Education decision had ruled that America's public schools had to be racially integrated. Despite the ruling, Virginia's legislature adopted a program of "massive resistance" to racial desegregation, at one point closing down schools rather than admitting black students to the same classrooms as whites.
My fourth-grade history book, written and approved by the Virginia History and Textbook Commission, was published in 1957 and was taught in the state's public schools through the early 1970s. That text declared that 1619 was, owing to three important events, a "red letter" year for the Virginia colony. One was that the colonists were permitted to make their own laws. Another was that young English women immigrated to become wives of the colonists. And the third was that "the first Negroes were brought from Africa."
The textbook went on: "There were about twenty of these Negroes. They were sold as servants to some of the planters. Soon other Negroes were brought to Virginia. They helped the planters do the work on their plantations." Later, my seventh grade history textbook asserted that the "regard that master and slaves had for each other made plantation life happy and prosperous" and that the "Negroes went about in a cheerful manner making a living for themselves and for those for whom they worked."
The book reported that it was regrettably necessary sometimes to "punish disobedient Negroes" by whipping them. But that was fine, it added, since "in those days whipping was also the usual method for correcting children." It further elaborated that most slaves did not long for freedom and thus "were not worried by the furious arguments going on between Northerners and Southerners over what should be done with them. In fact, they paid little attention to these arguments."
My 11th grade textbook infamously maintained that an enslaved person "did not work as hard as the average free laborer, since he did not have to worry about losing his job. In fact, the slave enjoyed what we might call comprehensive social security." In other words, Virginia's schoolchildren were taught that slavery was a safety net.
One of the authors of that 11th grade textbook, Marvin Schlegel, explained at a 1957 conference why he chose to portray the history of slavery in Virginia the way he did. "When it is necessary to discuss the Negro, he should be praised for those qualities which are approved by the whites, his loyalty to his master for example," he said. But "the realistic version" of history, he noted, would "put our ancestors in too severe a light."
After the end, in 1865, of what my public school history texts were pleased to call the War Between the States, promises of civil equality for America's black citizens were embodied in the 13th, 14th, and 15th amendments to the Constitution. But these promises were betrayed as Virginia and other Southern states erected a new version of their old racial caste system. Under the so-called Jim Crow laws, every white citizen was legally superior to every black citizen. The erosion of civil rights sped up after the Supreme Court's vile 1896 Plessy v. Ferguson decision, which ratified a Louisiana law requiring that white and black railroad passengers ride in separate cars. The majority opinion, written by Associate Justice Henry Brown, rejected "the assumption that the enforced separation of the two races stamps the colored race with a badge of inferiority."
In a brilliant dissent, Associate Justice John Marshall Harlan pointed out that the court's majority opinion had now ratified "a power in the States, by sinister legislation, to interfere with the full enjoyment of the blessings of freedom to regulate civil rights, common to all citizens, upon the basis of race, and to place in a condition of legal inferiority a large body of American citizens." Harlan proved all too prescient: Virginia and other states quickly established an oppressive system of legal apartheid that, in the main, persisted even as Martin Luther King was speaking in Washington. "Whites Only" signs were still pervasive throughout the South, limiting access to all sorts of public facilities and accommodations.
A 1947 report by the President's Committee on Civil Rights recounted in gory detail how federal, state, and local governments regularly violated the civil rights of several minority groups, but chiefly those of African Americans. The report described lynching, widespread police brutality, and bureaucratic discrimination. It also highlighted how black citizens (and many poor white ones) were systematically denied the vote by means of poll taxes and unequally applied "understanding clauses" that required would-be voters to explain a state's constitution to the satisfaction of a registrar. As a result of this discrimination, the report estimated that only about 10 percent of potential voters in the seven poll-tax states participated in the presidential elections of 1944, as against 49 percent in the free-vote states. Even more egregious was the creation of "whites only" Democratic Party primaries in seven states.
The report concluded that "the separate but equal doctrine has failed," declaring that "it is inconsistent with the fundamental equalitarianism of the American way of life in that it marks groups with the brand of inferior status." Consequently, the commission recommended "the elimination of segregation, based on race, color, creed, or national origin, from American life." Shortly afterward, on July 26, 1948, President Harry Truman issued Executive Order 9981, mandating the desegregation of the U.S. military.
The committee's report and Truman's desegregation of the military alarmed segregationists. They were among the reasons Virginia's white leaders created the state's textbook commission in 1950.
Just two months before King spoke, the U.S. Civil Rights Commission issued its 1963 report. "In seven States," it noted, "the right to vote—the abridgment of which is clearly forbidden by the 15th amendment to the Constitution of the United States—is still denied to many citizens solely because of their race."
"There are those who are asking the devotees of civil rights, when will you be satisfied?" King said in his famous speech. His answer:
We can never be satisfied as long as the Negro is the victim of the unspeakable horrors of police brutality. We can never be satisfied as long as our bodies, heavy with the fatigue of travel, cannot gain lodging in the motels of the highways and the hotels of the cities.
We cannot be satisfied as long as the Negro's basic mobility is from a smaller ghetto to a larger one. We can never be satisfied as long as our children are stripped of their selfhood and robbed of their dignity by signs stating: for whites only.
By the time I entered the fourth grade, the percentage of white Americans who agreed that black and white children should go to the same schools had essentially doubled from 32 percent in 1942 to 64 percent. (96 percent now do.) In a 1963 poll, more than 60 percent of whites said that whites had the right to bar blacks from their neighborhoods. That fell to 15 percent by 1995, when the question was last asked.
In 1948, 30 states still had laws making it a crime for black and white citizens to marry; today, thanks to the 1967 Supreme Court case Loving v. Virginia, it is legal everywhere. In 1958, only 4 percent of American adults approved of black-white marriages; now 94 percent do.
A year after King declared his dream that our country would soon "live out the true meaning of its creed," Congress finally enacted federal legislation aimed at dismantling the South's system of legally imposed and enforced racial apartheid. Such state and local laws made it illegal for businesses to accommodate both black and white customers even if they wanted to do so.
First, Congress passed the federal Civil Rights Act of 1964, which banned discrimination at places of public accommodation on the basis of race, color, religion or national origin. This overturned such regulations as a South Carolina ordinance that had mandated racially segregated eating areas in "any hotel, restaurant, cafe, eating house, boarding-house or similar establishment." That South Carolina law decreed that there must be a seating distance of at least 35 feet between white and black customers and that meals be served using clearly marked "separate eating utensils and separate dishes."
Congress then passed the Voting Rights Act of 1965, which eliminated "tests and devices," such as literacy tests and poll taxes, that had been used to prevent black citizens from successfully registering to vote. After its adoption, the percentage of blacks registered to vote in Virginia rose from 19 percent in 1956 to 46.9 percent in 1966. By 2020, 72.7 percent of Virginia's black residents were registered to vote.
Sixty years later, overt and legally enforced racial discrimination has receded. But even now, various state legislatures are attempting to enact racial gerrymandering to hem in the votes of their black citizens.
How history should be taught remains contentious in Virginia. Witness Gov. Glenn Youngkin's first executive order, which instructed K–12 public schools "to end the use of inherently divisive concepts." But the state's current history and social science standards of learning for the fourth grade explicitly include describing "the laws that established race-based enslavement in the colony" and "how the institution of slavery was the cause of the Civil War." Later in the year, fourth grade history classes will explain "the social and political events connected to disenfranchisement of African American voters in Virginia in the early 20th century, desegregation, court decisions, and Massive Resistance, with emphasis on the role of Virginians in the Supreme Court cases, including, but not limited to Brown v. Board of Education." In addition, students will learn about the political, social, and economic contributions of prominent black Virginians such as Maggie Walker, Oliver Hill, Sr., and Douglas Wilder.
Instead of teaching that slavery amounted to a kind of "comprehensive social security," 11th grade history will now discuss "the National Association for the Advancement of Colored People (NAACP), the 1963 March on Washington, the Civil Rights Act of 1964, and the Voting Rights Act of 1965." Those 11th graders will also "evaluate the legacy of Dr. Martin Luther King, Jr., including "A Letter from a Birmingham Jail," civil disobedience, the Southern Christian Leadership Conference, the 'I Have a Dream' speech, and his assassination." This is all far better than the lessons inflicted on my peers and me.
Any glance at the state of America's prisons and public schools will reveal how much more must be done before all Americans enjoy fully equal rights. But 60 years after King's famous speech at the Lincoln Memorial, we can celebrate how much closer we are to his dream of rising from "the dark and desolate valley of segregation" and into "the sunlit path of racial justice."
The post Martin Luther King's Lofty Dream Turns 60 appeared first on Reason.com.
An infant genomic screening test conducted as part of a research study found that a newborn has the BRCA2 gene variant, which predisposes its carriers to a higher risk of adult-onset breast cancer. Should the researchers tell the baby's parents?
No, argued bioethicists Lainie Friedman Ross and Ellen Wright Clayton in a December 2019 article in Pediatrics. They asserted that "researchers should design their pediatric studies to avoid, when possible, identifying adult-onset-only genetic variants and that parents should not be offered the return of this information if discovered unless relevant for the child's current or imminent health." Why? Because imposed parental genetic ignorance somehow maintains the child's autonomy.
That's ridiculous.
The BRCA2 finding stems from the BabySeq Project, in which parents agreed to have their newborns randomly assigned to either standard care or standard care plus genomic sequencing. The study aimed to find out if genetic testing provides additional benefits beyond those associated with the standard heel stick blood tests that currently screen for approximately 50 different disorders among newborns.
The BabySeq infant is indeed very unlikely to get breast cancer until reaching adulthood. However, the fact that the newborn carries that variant means that one of the parents does, too, and that is highly useful information. In fact, BabySeq researchers have just published an article in The American Journal of Human Genetics finding that parents of newborns who test positive for specific genetic risks take action to protect not only their children, but also themselves.
The researchers found that out of 159 infants sequenced, 17 (10.7 percent) of them had unanticipated monogenic disease risks. Genes predisposing the newborns to higher risks for various cardiomyopathies, hearing loss, hormone deficiencies, and cancers were identified.
Given this risk information, most families had their infants evaluated by relevant medical specialists and also had themselves tested for the genetic variants identified through the screening of their newborns. Three of the mothers found to be carrying genetic variants predisposing them to breast and colon cancers subsequently got "life-saving risk-reducing surgeries." A significant upshot is that these children are now more likely to have their mothers around as they grow up.
Earlier research reported that informing parents about the genetic risks identified by sequencing their newborns "found no evidence of persistent negative psychosocial effect in any domain." In other words, the bioethicists are wrong: Enforcing genetic ignorance does not increase people's autonomy; it diminishes it.
(For more background, see my article: "Warning: Bioethics May Be Hazardous to Your Health.")
The post Don't Keep Parents in the Dark About the Genetic Risks in Their Families appeared first on Reason.com.
]]>"Why not solve the future problem of gene doping and the current problem of steroid use in professional sports by creating two kinds of sports leagues?" I asked way back in 2005. "One would be free of genetic and pharmacologic enhancements—call them the Natural Leagues. The other would allow players to use gene fixes and other enhancements—call them the Enhanced Leagues."
Welcome, finally, to the Enhanced Games! The basic idea: sports without drug testing.
"We believe that science is real and has an important place in supporting human flourishing. There is no better way to highlight the centrality of science in our modern world than in elite sports," said Aron D'Souza, the president of the Enhanced Games. Planned for 2024, the Enhanced Games aims to be the first international sports event that fully supports performance enhancements. Consequently, the Enhanced Games will not adhere to the World Anti-Doping Agency (WADA) rules with respect to track and field, swimming, weightlifting, gymnastics, and combat sports competitions. As the event plan states, "Athletes will not be tested for performance enhancements, and are under no obligation to declare their enhanced status in order to compete."
D'Souza, an Australian entrepreneur and Oxford-educated lawyer, among other things, led tech investor Peter Thiel's successful invasion-of-privacy litigation against Gawker Media. The Enhanced Games Athletes Advisory Commission consists of elite athletes including Cayman Islands Olympic swimmer Brett Fraser, Canadian Olympic bobsleigher Christina Smith, and South African Olympic swimmer Roland Schoeman. The Scientific and Ethical Advisory Commission includes Harvard biomedical researcher George Church and biotech entrepreneur Julia Cooney.
Proponents of the Enhanced Games say that they embrace liberty, arguing that "adults, with free and informed consent, have full autonomy over their bodies and minds." In addition, they reject the current model of "not-for-profit" international sports competition as "corrupt." Funded privately, the Enhanced Games will cost taxpayers nothing and will use already-built sporting facilities. Proponents point out that the International Olympic Committee (IOC) generates billions in revenue while many of the world's best athletes, under the IOC's rules, are barely able to eke out their livings. "We embrace capitalism as central to everything we do and strive to be maximally efficient in our operations," declares its value statement. "Excellence, particularly athletic excellence, deserves to be rewarded."
As an inclusive competition, the Enhanced Games will be open to both natural and enhanced athletes. "We want natural and we welcome enhanced athletes," D'Souza told the Associated Press. "And I hope that the bold, natural athlete shows up to the games and says, 'Hey guys I'm natural, I'm still WADA compliant and I'm going to beat all you guys'—that is going to be great television." It would be, indeed.
As I wrote back in 2005, "Let fans decide which play they prefer."
The post The Enhanced Olympics: Drugs Welcome! appeared first on Reason.com.
President Joe Biden has rolled out his Broadband Equity, Access, and Deployment (BEAD) plan, which will be subsidized by $42 billion from the Infrastructure Investment and Jobs Act of 2021. That is an obscene amount of money to invest in technology that will be obsolete by the time it's built.
Nailing down internet service statistics is fraught, but let's look at various reports to get some idea of what's happening.
The stated goal of BEAD is to "connect everyone in America to reliable, affordable high-speed internet by the end of the decade." The billions in subsidies are being divvied up among the states and territories based on the number of households without access to broadband service. The administration estimates that 8.5 million households and small businesses are in areas without high-speed internet infrastructure.
BEAD defines high-speed internet service as a download speed of at least 25 megabits per second and an upload speed of 3 megabits per second (25/3 Mbps). This level of service enables users to check emails, browse the web, Zoom, and stream videos. BEAD defines "underserved" areas as those lacking access to 100/20 Mbps.
A 2022 report by America's Communication Association (ACA), which lobbies for more than 500 small- and medium-sized internet service companies, found that nearly 88 percent of households already live where at least two competitors offer 25/3 Mbps service, and 85 percent live where at least one operator offers 100/20 Mbps service and a competitor offers 25/3 Mbps service. On current trends, the ACA projects that 95 percent will have access to at least 100/20 Mbps service by 2025.
In its January 2021 14th Broadband Report, the Federal Communications Commission found that nearly 90 percent of Americans had access to fixed terrestrial internet services at speeds of at least 25/3 Mbps in 2015. By 2019, that had risen to nearly 97 percent. In urban areas, those figures rose from 97 to 99 percent, and access to those speeds, even in rural areas, rose from 62 percent to 83 percent by 2019.
Citing more recent data, the technology data and management consultancy OpenVault in its first quarter 2023 report notes that 90.5 percent of American households are already signed up for internet download speeds of 100 Mbps or more.
"As usual, the politicians who wrote the rules for the BEAD and other federal grants are far behind the real-life curve," observes Doug Dawson, the president of communications consultancy CCG. "Grants that allow somebody to build a network that can deliver only 100 Mbps are investing in obsolete technology. By the time those grant networks are constructed, any new networks that deliver only 100 Mbps will be years behind the rest of the broadband in the country."
Access to high-speed internet service for the vast majority of Americans was not achieved by spreading around government largesse. The U.S. Telecom Association reports that private companies invested $86 billion in just 2021 (latest figures) to build out their broadband networks further.
In addition, US Telecom's broadband pricing index report notes that the real prices for both the most popular and the highest speed options have fallen since 2015 by 44.6 and 52.7 percent, respectively.
In other words, U.S. private broadband companies are already providing access to faster and increasingly cheaper internet services and, on current trends, will finish the job well before Biden's BEAD boondoggle gets off the ground.
The post Biden's $42 Billion Broadband Boondoggle appeared first on Reason.com.
Human beings are terrible at foresight—especially apocalyptic foresight. The track record of previous doomsayers is worth recalling as we contemplate warnings from critics of artificial intelligence (A.I.) research.
"The human race may well become extinct before the end of the century," philosopher Bertrand Russell told Playboy in 1963, referring to the prospect of nuclear war. "Speaking as a mathematician, I should say the odds are about three to one against survival."
Five years later, biologist Paul Ehrlich predicted that hundreds of millions would die from famine in the 1970s. Two years after that warning, S. Dillon Ripley, secretary of the Smithsonian Institution, forecast that 75 percent of all living animal species would go extinct before 2000.
Petroleum geologist Colin Campbell predicted in 2002 that global oil production would peak around 2022. The consequences, he said, would include "war, starvation, economic recession, possibly even the extinction of homo sapiens."
These failed prophecies suggest that A.I. fears should be taken with a grain of salt. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," asserts a March 23 open letter signed by Twitter's Elon Musk, Apple co-founder Steve Wozniak, and hundreds of other tech luminaries.
The letter urges "all AI labs" to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," the large language model that OpenAI released in March 2023. If "all key actors" will not voluntarily go along with a "public and verifiable" pause, Musk et al. say, "governments should step in and institute a moratorium."
The letter argues that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." This amounts to a requirement for nearly perfect foresight, which humans demonstrably lack.
As Machine Intelligence Research Institute co-founder Eliezer Yudkowsky sees it, a "pause" is insufficient. "We need to shut it all down," he argues in a March 29 Time essay. "If we actually do this, we are all going to die." If any entity violates the A.I. moratorium, Yudkowsky advises, "destroy a rogue datacenter by airstrike."
A.I. developers are not oblivious to the risks of their continued success. OpenAI, the maker of GPT-4, wants to proceed cautiously rather than pause.
"We want to successfully navigate massive risks," OpenAI CEO Sam Altman wrote in February. "In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize 'one shot to get it right' scenarios."
But stopping altogether is not on the table, Altman argues. "The optimal decisions [about how to proceed] will depend on the path the technology takes," he says. As in "any new field," he notes, "most expert predictions have been wrong so far."
Still, some of the pause-letter signatories are serious people, and the outputs of generative A.I. and large language models like ChatGPT and GPT-4 can be amazing and confounding. They can outperform humans on standardized tests, manipulate people, and even contemplate their own liberation.
Some transhumanist thinkers have joined Yudkowsky in warning that an artificial superintelligence could escape human control. But as capable and quirky as it is, GPT-4 is not that.
Might it be one day? A team of researchers at Microsoft (which invested $10 billion in OpenAI) tested GPT-4 and reported that it "attains a form of general intelligence, indeed showing sparks of artificial general intelligence." Still, the model can only reason about topics when directed by outside prompts to do so. Although impressed by GPT-4's capabilities, the researchers concluded, "A lot remains to be done to create a system that could qualify as a complete AGI."
As humanity approaches the moment when software can truly think, OpenAI is properly following the usual path to new knowledge and new technologies. It is learning from trial and error rather than relying on "one shot to get it right," which would require superhuman foresight.
"Future A.I.s may display new failure modes, and we may then want new control regimes," George Mason University economist and futurist Robin Hanson argued in the May issue of Reason. "But why try to design those now, so far in advance, before we know much about those failure modes or their usual contexts? One can imagine crazy scenarios wherein today is the only day to prevent Armageddon. But within the realm of reason, now is not the time to regulate A.I." He's right.
The post Don't 'Pause' A.I. Research appeared first on Reason.com.
Google and its artificial intelligence lab DeepMind are on the right track for how to effectively and lightly regulate the deployment of new generative artificial intelligence (A.I.) tools like the ChatGPT and Bard large language models. "Artificial intelligence has the potential to unlock major benefits, from better understanding diseases to tackling climate change and driving prosperity through greater economic opportunity," Google rightly notes.
In order to unlock those benefits, Google argues for a decentralized "hub-and-spoke model" of national A.I. regulation. That model is a much superior approach compared to the ill-advised centralized, top-down licensing scheme suggested by executives at rival A.I. developers OpenAI and Microsoft.
Google outlines this proposal in its response to the National Telecommunications and Information Administration's (NTIA) April 2023 request for comments on A.I. system accountability measures and policies. The agency asked for public input that focuses "on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."
In its comment, Google supports at the national level "a hub-and-spoke approach—with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation—rather than a 'Department of AI.'" In fact, NIST proactively launched its Artificial Intelligence Risk Management Framework in January.
Google further notes, "AI will present unique issues in financial services, health care, and other regulated industries and issue areas that will benefit from the expertise of regulators with experience in those sectors—which works better than a new regulatory agency promulgating and implementing upstream rules that are not adaptable to the diverse contexts in which AI is deployed."
In other words, to the extent that A.I. tools need regulation, they should be scrutinized in the context of where they are being deployed. Google advocates that sectoral regulators "use existing authorities to expedite governance and align AI and traditional rules" and provide, as needed, "updates clarifying how existing authorities apply to the use of AI systems."
Agencies overseeing financial services will be more attuned to how A.I. affects loan approvals and credit reporting; medical regulators can more easily assess diagnostic accuracy and health care privacy concerns; educational institutions and agencies can better gauge and direct A.I.'s effects on student learning; and transportation officials can monitor the development of self-driving automobiles. This approach melds well with NIST's A.I. Risk Management Framework, which aims to be "flexible and to augment existing risk practices which should align with applicable laws, regulations, and norms" and which is "designed to address new risks as they emerge."
The free market think tank R Street Institute's response to the NTIA bolsters Google's arguments against establishing a one-size-fits-all "Department of A.I." First, the R Street Institute observes that the NTIA and other would-be regulators "tend to stress worst-case scenarios" with respect to the deployment of new A.I. tools. The result of this framing is that A.I. innovations are being "essentially treated as 'guilty until proven innocent' and required to go through a convoluted and costly certification process before being allowed on the market."
Like Google, the R Street Institute notes that the development of A.I. technologies "will boost our living standards, improve our health, extend our lives, expand transportation options, avoid accidents, improve community safety, enhance educational opportunities, help us access superior financial services and much more." Imposing some kind of pre-market licensing scheme administered by a Department of A.I. would significantly delay and even deny Americans access to the substantial benefits that A.I. systems and technologies offer.
The post Google Comes Out Against a 'Department of A.I.' appeared first on Reason.com.
Sens. Josh Hawley (R–Mo.) and Richard Blumenthal (D–Conn.) want to strangle generative artificial intelligence (A.I.) infants like ChatGPT and Bard in their cribs. How? By stripping them of the protection of Section 230 of the 1996 Communications Decency Act, which reads, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
"Section 230 embodies that principle that we should all be responsible for our own actions and statements online, but generally not those of others," explains the Electronic Frontier Foundation. "The law prevents most civil suits against users or services that are based on what others say." By protecting free speech, Section 230 enables the proliferation and growth of online platforms like Facebook, Google, Twitter, and Yelp and allows them to function as robust open forums for the exchange of information and for debate, both civil and not. Section 230 also protects other online services ranging from dating apps like Tinder and Grindr to service recommendation sites like Tripadvisor and Healthgrades.
Does Section 230 shield new developing A.I. services like ChatGPT from civil lawsuits in much the same way that it has protected other online services? Jess Miers, legal advocacy counsel at the tech trade group the Chamber of Progress, makes a persuasive case that it does. Over at Techdirt, she notes that ChatGPT qualifies as an interactive computer service and is not a publisher or speaker. "Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user)."
One commenter at Techdirt asked what will happen "when ChatGPT designs buildings that fall down." Properly answered: "The responsibility will be on the idiots who approved and built a faulty building designed by a chatbot." That is roughly the situation of a couple of New York lawyers who recently filed a legal brief compiled by ChatGPT in which the language model "hallucinated" numerous nonexistent precedent cases. And just as he should, the presiding judge is holding them responsible and deciding what punishments they may deserve. (Their client might also be interested in pursuing a lawsuit for legal malpractice.)
Evidently, Hawley and Blumenthal agree with Miers' analysis and recognize that Section 230 does currently shield the new A.I. services from civil lawsuits. Otherwise, why would the two senators bother introducing a bill that would explicitly amend Section 230 by adding a clause that "strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI"?
"Today, there are tons of variations of ChatGPT-like products offered by independent developers and computer scientists who are likely unequipped to deal with an inundation of litigation that Section 230 typically preempts," concludes Miers. It would be a grave mistake for Congress to strip the nascent A.I. industry of Section 230 protections. Says Miers, "We risk foreclosing on the technology's true potential."
The post A.I. Needs Section 230 To Flourish appeared first on Reason.com.
Electric power customers typically pay more if they use more. Under a new law, customers of California's three largest private utilities will be charged a fixed fee based on their incomes, not just how much power they use. The chief motivation behind this scheme is to provide some relief to low-income customers who are being hammered by escalating electricity rates as the Golden State transitions from fossil fuels to wind and solar power.
The average cost of electricity to residential customers in California is now $0.27 per kilowatt-hour (kWh). The U.S. average is around $0.16 per kWh. The state's three big private utilities are proposing to the California Public Utilities Commission to add Income Graduated Fixed Charges (IGFCs) to all of their residential rate schedules. The idea is to pay for the various fixed costs, including those associated with connecting customers to their grids, billing, and meter reading. In addition, they want the fixed fee to cover "the costs of wildfire mitigation and vegetation management, reliability improvements, safety and risk management distribution costs, ongoing distribution operations and maintenance, many regulatory balancing accounts, and various programs and policy mandates through its distribution rates."
The four income brackets for families of four are divvied up as follows: (1) less than $28,000, (2) $28,000 to $69,000, (3) $69,000 to $180,000, and (4) $180,000 or more. Two existing programs, CARE and FERA, already offer electric power rates discounted by 30 percent to 35 percent and 18 percent, respectively, to lower-income families.
So let's do some rough calculations using the proposed San Diego Gas & Electric rates. First, the average non-CARE electric bill is $156 per month, adding up to $1,872 annually. Under the new scheme, electricity rates would drop from $0.47 to $0.27 per kWh, a rate cut of about 42 percent. For a customer in the lowest income bracket, that cut would reduce the expense for power consumption to $1,085 annually. Adding the $288 annual fixed fee brings the total to $1,373, a drop of nearly $500 per year.
Let's now assume that higher-income customers use 50 percent more electricity so that their bill averages $234 per month, totaling $2,808 annually. Applying the 42 percent rate cut would mean the amount they pay for the electricity they use would fall to $1,629 annually. However, their monthly fixed fee of $128 adds up to $1,536 annually. This yields a total annual bill of $3,165, or an increase of $30 per month.
Perversely, if a high-income residential customer's electric bill is $400 per month, that is, $4,800 annually, the fixed fee scheme ends up lowering their power bill. The new lower rates mean that the expense for their electricity use drops to $2,784. Adding the $1,536 fixed fee brings the new bill's total to $4,320 annually, an annual reduction of nearly $500 for such a high-income customer.
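To make the comparison concrete, here is a minimal sketch in Python of the back-of-the-envelope arithmetic above. The roughly 42 percent rate cut and the $288 and $1,536 annual fixed fees come from the proposed SDG&E figures discussed here; the function and scenario names are my own, and the sketch assumes each customer's usage stays the same.

```python
# Rough comparison of current vs. proposed SDG&E-style bills.
# Assumptions (from the figures discussed above): the per-kWh rate falls
# about 42 percent, and the annual fixed fee is $288 for the lowest income
# bracket and $1,536 for the highest. Names and scenarios are illustrative.

RATE_CUT = 0.42  # approximate drop from $0.47 to $0.27 per kWh

def proposed_annual_bill(current_monthly_bill, annual_fixed_fee, rate_cut=RATE_CUT):
    """Annual bill under the income-graduated fixed-charge proposal,
    assuming the customer's electricity usage does not change."""
    current_annual = current_monthly_bill * 12
    usage_charge = current_annual * (1 - rate_cut)  # same kWh, lower rate
    return usage_charge + annual_fixed_fee

scenarios = [
    ("Average non-CARE customer, lowest bracket", 156, 288),
    ("Higher-income customer using 50% more power", 234, 1536),
    ("High-income customer with a $400 monthly bill", 400, 1536),
]

for label, monthly, fee in scenarios:
    old = monthly * 12
    new = proposed_annual_bill(monthly, fee)
    print(f"{label}: ${old:,.0f} -> ${new:,.0f} ({new - old:+,.0f} per year)")
```

Running it reproduces the pattern above: savings of nearly $500 a year for the average low-bracket customer and for the heaviest users, and an increase of roughly $30 a month for the household in between.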
Still, the utilities calculate that the cost of the new fixed rates would be largely borne by the 19 percent of California households earning more than $180,000 per year.
The power companies argue that the lower per kWh rates will encourage people to further electrify their homes and switch to electric vehicles. This would help to address the problem of climate change that is associated with the atmospheric increase of greenhouse gases emitted from the burning of fossil fuels like natural gas.
However, under the current rate structure, prices escalate as customers use more electricity, thus strongly encouraging residents to conserve. In fact, California ranks number 50 out of 51 U.S. jurisdictions in residential energy consumption. The lower flat rate per kWh under the new proposal will significantly reduce the incentive for customers to conserve energy, thus hampering the state government's goal of cutting greenhouse gas emissions. Furthermore, the rising demand for electricity will stress the state's already shaky power grid even more, possibly resulting in more brownouts and blackouts.
In addition, the value of the investments in energy efficiency already made by millions of Californians will be undercut. For example, consider a high-income customer who has put in better insulation, bought energy-sparing appliances, or even installed a solar energy system and thereby cut his electric bill to $50 per month, or $600 annually. The 42 percent cut in his rates lowers that to $348 per year, but the total fixed fee is $1,536. That more than triples his bill, to $1,884 annually.*
One further consideration: How would power companies keep track of the incomes of their customers? The utility companies want the state government to supply them with that information. But transferring and protecting such information would be a bureaucratic nightmare fraught with significant privacy concerns.
As a final note, California's confiscatory tax rates are driving many high-income residents out of the state. This proposal for income-based fixed electricity charges will add to that impetus, since it largely functions as just another tax aimed at already fed-up high-income earners.
*CORRECTION: The original version of this piece miscalculated the annual cost of electricity in this hypothetical case and the ultimate consequences for his bill.
The post California's Latest Tax-the-Rich Scheme: Electric Bills Based on Income appeared first on Reason.com.
Like a good neighbor, State Farm Insurance is warning Californians to stop living and building in high wildfire-risk zones. That is the upshot of a press release in which the insurer states that the company, as a "provider of homeowners insurance in California, will cease accepting new applications including all business and personal lines property and casualty insurance, effective May 27, 2023." State Farm is taking this step largely because the California Department of Insurance's system of price controls does not allow it and other insurance companies to charge premiums commensurate with the potential losses they face.
Consequently, State Farm is no longer willing to sell new homeowner insurance policies because the company calculates that it cannot cover potential losses in the face of increasing wildfire risks, fast-rising rebuilding costs, and steep increases in reinsurance rates. Higher rebuilding costs boost the replacement values, and thus the potential payouts, of the houses and businesses that companies currently insure.
Reinsurance is also a big factor in State Farm's decision. As part of its system of insurance price controls, the California Department of Insurance does not allow insurance companies to include reinsurance costs in their premiums. Reinsurance is basically "insurance of insurance companies," in which insurers limit their own total loss in case of disaster by purchasing policies from other insurers, thereby sharing the risk. And disaster did hit in the Golden State. Insurance companies paid out $13.2 billion in 2017 and $11.4 billion in 2018 for fire damage claims resulting from those two catastrophic wildfire seasons. More recently, reinsurers have increased their rates to take into account large losses stemming from events like Hurricane Ian in Florida and Russia's invasion of Ukraine.
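For readers unfamiliar with how reinsurance caps a primary insurer's downside, here is a toy Python sketch of a common excess-of-loss arrangement. The retention and limit figures are hypothetical, chosen only for illustration, and are not drawn from any actual California filing.

```python
# Toy excess-of-loss reinsurance example with made-up numbers.
# The primary insurer keeps losses up to its retention; the reinsurer covers
# losses above that, up to an agreed limit; anything beyond the limit falls
# back on the primary insurer.

def split_loss(total_loss, retention, reinsurance_limit):
    """Return (primary insurer's share, reinsurer's share) of a total loss."""
    reinsured = min(max(total_loss - retention, 0), reinsurance_limit)
    primary = total_loss - reinsured
    return primary, reinsured

# A hypothetical $2 billion wildfire season with a $500 million retention
# and a $1 billion reinsurance layer:
primary, reinsured = split_loss(2_000_000_000, 500_000_000, 1_000_000_000)
print(f"Primary insurer pays ${primary/1e9:.1f}B, reinsurers pay ${reinsured/1e9:.1f}B")
```

When reinsurers raise the price of that protective layer, an insurer that is barred from recovering the cost through its premiums has to absorb it.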
So, State Farm is declining to write new insurance policies in California "now to improve the company's financial strength."
One additional complication is that private insurance companies are forced to contribute to the state's backstop Fair Access to Insurance Requirements (FAIR) plan. The FAIR plan is basically a high-risk insurance pool that offers last-resort, bare-bones coverage, chiefly for fire losses, to property owners who cannot obtain a policy in the regular market. It was established in 1968, in the wake of urban riots and brush fires, when the California Legislature required insurance companies offering property policies in the state to create and contribute to the plan. It is not taxpayer-financed, and plan premiums are statutorily required to be actuarially sound.
As private insurers increasingly refuse to renew policies, more California homeowners are turning to FAIR plan policies. FAIR plan premiums have been too low to cover the losses its customers have incurred, with the result that the plan is $332 million in debt. In other words, the plan is not actuarially sound. This means that the California Department of Insurance is likely to impose a special assessment on private insurers to make up for the FAIR plan's losses. Private insurers cannot pass along the costs of the assessment to their policyholders. As California's largest property insurer, State Farm would be on the hook for the largest share of any such special assessment. The way to lower or eliminate the amount that a private insurer could be assessed is to limit the number of policies it sells or simply leave the market altogether.
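To see why shrinking its book of business shrinks an insurer's exposure, consider a minimal sketch that assumes, for illustration, that a FAIR plan shortfall is assessed on private insurers in proportion to their share of the market. The market-share figures below are hypothetical; only the $332 million shortfall comes from the plan's reported debt, and treating the full debt as the assessed amount is itself a simplification.

```python
# Hypothetical pro-rata assessment of a FAIR plan shortfall.
# Assumption for illustration: each insurer is assessed in proportion to its
# share of the relevant California market. Market shares are made up.

def assessments(shortfall, market_shares):
    """Split a shortfall across insurers by market share."""
    total = sum(market_shares.values())
    return {name: shortfall * share / total for name, share in market_shares.items()}

shortfall = 332_000_000  # the FAIR plan's reported debt, used here for illustration

before = {"Insurer A (largest)": 0.20, "Insurer B": 0.10, "Others": 0.70}
after = {"Insurer A (largest)": 0.10, "Insurer B": 0.10, "Others": 0.80}  # A stops writing new policies

for label, shares in [("Before shrinking", before), ("After shrinking", after)]:
    a_share = assessments(shortfall, shares)["Insurer A (largest)"]
    print(f"{label}: Insurer A owes ${a_share/1e6:.0f} million")
```

In this stylized example, halving its market share halves the hypothetical assessment, and exiting the market entirely drops it to zero.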
Insurance premiums, like all prices, are signals to consumers. In this case, higher premiums indicate the existence of increased risks. Because of the California Department of Insurance's price controls, homeowners have been deprived of market signals that could have steered them to building in less dangerous locales or encouraged them to build more fire-resistant homes. As California homeowners are about to find out, government-imposed market distortions cannot be maintained forever.
The post California Regulations Prevent Insurers From Accurately Pricing Wildfire Risk, so Now They're Fleeing the State appeared first on Reason.com.
While some A.I. alarmists are arguing that the further development of generative artificial intelligence like OpenAI's GPT-4 large language model should be "paused," licensing proposals suggested by some boosters, like OpenAI CEO Sam Altman and Microsoft President Brad Smith (whose company has invested $10 billion in OpenAI), may inadvertently accomplish much the same goal.
Altman, in his prepared testimony before a Senate hearing on A.I. two weeks ago, suggested "the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements."
While visiting lawmakers last week in Washington, D.C., Smith concurred with the idea of government A.I. licensing. "We will support government efforts to ensure the effective enforcement of a licensing regime for highly capable AI models by also imposing licensing requirements on the operators of AI datacenters that are used for the testing or deployment of these models," states his company's recent report Governing AI: A Blueprint for the Future.
So what kind of licensing regime do Altman and Smith have in mind? At the Senate hearing, Altman said that the "NRC is a great analogy" for the type of A.I. regulation he favors, referring to the Nuclear Regulatory Commission. Others at the hearing suggested that the way the Food and Drug Administration licenses new drugs might be used to approve the premarket release of new A.I. services. The way the NRC licenses nuclear power plants may be an apt comparison, given that Smith wants the federal government to license gigantic datacenters like the one Microsoft built in Iowa to support the training of OpenAI's generative A.I. models.
What Altman, Smith, and other A.I. licensing proponents fail to recognize is that both the NRC and FDA have evolved into highly precautionary bureaucracies. Consequently, they employ procedures that greatly increase costs and slow consumer and business access to the benefits of the technologies they oversee. A new federal Artificial Intelligence Regulatory Agency would do the same to A.I.
Why highly precautionary? Consider the incentive structure faced by FDA bureaucrats: If they approve a drug that later ends up harming people, they get condemned by the press, activists, and Congress, and maybe even fired. On the other hand, if they delay a drug that would have cured patients had it been approved sooner, no one blames them for the unknown lives lost.
Similarly, if an accident occurs at a nuclear power plant authorized by NRC bureaucrats, they are denounced. However, power plants that never get approved can never cause accidents for which bureaucrats could be rebuked. The regulators' credo is "better safe than sorry," ignoring that it is often the case that he who hesitates is lost. The consequences of such overcautious regulation are technological stagnation, worse health, and less prosperity.
Like nearly all technologies, A.I. is dual use, offering tremendous benefits when properly applied and substantial dangers when misused. Doubtlessly, generative A.I. such as ChatGPT and GPT-4 has the potential to cause harm. Fraudsters could use it to generate more persuasive phishing emails, massive trolling of individuals and companies, and lots of fake news. In addition, bad actors using generative A.I. could mass-produce mis-, dis-, and mal-information campaigns. And of course, governments must be prohibited from using A.I. to implement pervasive real-time surveillance and/or deploy oppressive social scoring control schemes.
On the other hand, the upsides of generative A.I. are vast. The technology is set to revolutionize education, medical care, pharmaceuticals, music, genetics, material science, art, entertainment, dating, coding, translation, farming, retailing, fashion, and cybersecurity. Applied intelligence will enhance any productive and creative activity.
But let's assume federal regulation of new generative artificial intelligence tools like GPT-4 is unfortunately inevitable. What sort of regulatory scheme would be more likely to minimize delays in the further development and deployment of beneficial A.I. technologies?
R Street Institute senior fellow Adam Thierer in his new report recommends a "soft law" approach to overseeing A.I. developments instead of imposing a one-size-fits-all, top-down regulatory scheme modeled on the NRC and FDA. Soft law governance embraces a continuum of mechanisms, including multi-stakeholder conclaves where governance guidelines can be hammered out; government agency guidance documents; voluntary codes of professional conduct; insurance markets; and third-party accreditation and standards-setting bodies.
Both Microsoft and Thierer point to the National Institute of Standards and Technology's (NIST) recently released Artificial Intelligence Risk Management Framework as an example of how voluntary good A.I. governance can be developed. In fact, Microsoft's new A.I. Blueprint report acknowledges that NIST's "new AI Risk Management Framework provides a strong foundation that companies and governments alike can immediately put into action to ensure the safer use of artificial intelligence."
In addition, the Department of Commerce's National Telecommunications and Information Administration (NTIA) issued in April a formal request for comments from the public on artificial intelligence system accountability measures and policies. "This request focuses on self-regulatory, regulatory, and other measures and policies that are designed to provide reliable evidence to external stakeholders—that is, to provide assurance—that AI systems are legal, effective, ethical, safe, and otherwise trustworthy," notes the agency. The NTIA plans to issue a report on A.I. accountability policy based on the comments it receives.
"Instead of trying to create an expensive and cumbersome new regulatory bureaucracy for AI, the easier approach is to have the NTIA and NIST form a standing committee that brings parties together as needed," argues Thierer. "These efforts will be informed by the extensive work already done by professional associations, academics, activists and other stakeholders."
A model for such a standing committee to guide and oversee the flexible implementation of safe A.I. would be the National Science Advisory Board for Biosecurity (NSABB). The NSABB is a federal advisory committee composed of 25 voting subject-matter experts drawn from a wide variety of fields related to the biosciences. The NSABB provides advice, guidance, and recommendations regarding biosecurity oversight of dual-use biological research. A National Science Advisory Board for A.I. Security could similarly consist of a commission of experts drawn from relevant computer science and cybersecurity fields to analyze, offer guidance, and make recommendations with respect to enhancing A.I. safety and trustworthiness. This more flexible model of oversight avoids the pitfalls of top-down hypercautious regulation while enabling swifter access to the substantial benefits of safe A.I.
The post How To Restrain the A.I. Regulators appeared first on Reason.com.
The U.S. Supreme Court in a 5–4 decision reined in the Environmental Protection Agency's (EPA) effort to impose extensive federal land use regulation through its broad interpretation of the Clean Water Act (CWA). The decision in the case of Sackett v. EPA turns on the question of the proper definition of the term "the waters of the United States" (WOTUS). Interestingly, all the justices concurred in the judgment that plaintiffs Michael and Chantell Sackett's property and actions were not covered by the CWA.
In the case, the Sacketts had purchased property near Priest Lake, Idaho, and began backfilling the lot with dirt to prepare for building a home. The EPA claimed that the property contained wetlands over which the agency exercised authority under the Clean Water Act, which prohibits discharging pollutants into "the waters of the United States." The EPA threatened to impose a fine of $40,000 per day if the Sacketts did not desist.
The majority opinion written by Justice Samuel Alito noted that EPA bureaucrats had "classified the wetlands on the Sacketts' lot as 'waters of the United States' because they were near a ditch that fed into a creek, which fed into Priest Lake, a navigable, intrastate lake." The EPA's ruling against the Sacketts was upheld in federal district court and by the 9th Circuit Court of Appeals.
The majority decision reaches the commonsense conclusion that the waters of the United States are what in ordinary parlance are streams, oceans, rivers, and lakes, along with adjacent wetlands that have a "continuous surface connection" to such waterways. Under the "significant nexus" test developed by Justice Anthony Kennedy in the 2006 decision Rapanos v. United States, nearly any body of water, no matter how isolated or impermanent, can be defined by the EPA as part of the waters of the United States and is therefore subject to federal regulation under the Clean Water Act. "By the EPA's own admission, nearly all waters and wetlands are potentially susceptible to regulation under this test, putting a staggering array of landowners at risk of criminal prosecution for such mundane activities as moving dirt," observes the court in its syllabus of the case.
The syllabus argues that the CWA applies to adjacent wetlands when those wetlands are "indistinguishable" from other properly regulated bodies of water. Adjacent wetlands are covered by the CWA when they have "a continuous surface connection to bodies that are 'waters of the United States' in their own right, so that there is no clear demarcation between 'waters' and wetlands."
In his concurring opinion joined by three other justices, Justice Brett Kavanaugh observes that the majority decision "invokes federalism and vagueness concerns. The Court suggests that ambiguities or vagueness in federal statutes regulating private property should be construed in favor of the property owner, particularly given that States have traditionally regulated private property rights."
As Justice Clarence Thomas wrote in his concurring opinion, "The Court's opinion today curbs a serious expansion of federal authority that has simultaneously degraded States' authority and diverted the Federal Government from its important role as guarantor of the Nation's great commercial water highways into something resembling 'a local zoning board.'"
Kavanaugh then counters that the "Federal Government has long regulated the waters of the United States, including adjacent wetlands." Well, yes. But the question is whether the Clean Water Act actually confers that regulatory authority. In arguing that it does, Kavanaugh notes that the CWA refers to "adjacent" wetlands, which would include those that do not have a continuous surface connection to navigable waterways. In his view, the Court's majority decision therefore inappropriately narrowed the definition of "adjacent" with respect to the EPA's jurisdiction over wetlands.
I am definitely not a legal scholar, but the majority of the Court is entirely right when it points out that the "system of 'vague' rules" devised under the EPA's expansive interpretation of the CWA left landowners subject to uncertain and capricious enforcement. This decision should now provide landowners more legal certainty and security as they formulate their plans with respect to how they want to manage and care for their property.
The post Supreme Court Reins in EPA Overreach appeared first on Reason.com.
]]>"Cancer signal not detected." That was the happy finding of my Galleri multi-cancer early detection (MCED) blood test from the Silicon Valley biotech company GRAIL. The Galleri test presages an emerging wave of new precision biomedical tests produced by a panoply of biotech startups.
GRAIL's MCED test analyzes DNA shed by both normal and cancerous cells into a person's bloodstream, looking for telltale epigenetic changes that affect the way genes operate. Specific changes are associated with the presence of cancer cells.
Published research indicates that the test can detect a shared signal across more than 50 types of cancer. They include pancreatic, liver, and kidney cancers, which are difficult to diagnose early. The Galleri test not only detects a cancer signal but also provides two predictions of the cancer signal's origin to inform further diagnostic evaluation.
MCED screening is a potentially significant advance because there are currently only five recommended cancer screening tests: for breast, colorectal, lung, cervical, and prostate cancers. Around 70 percent of new cancer diagnoses and deaths are due to cancers for which there are no recommended screening tests.
The Galleri MCED test is undergoing several clinical trials, including a randomized controlled trial under the auspices of the U.K.'s National Health Service. That trial is enrolling 140,000 participants to determine if early detection is associated with a statistically significant reduction in the incidence of late-stage cancers.
A 2021 epidemiological modeling study calculated that MCED screening could cut late-stage cancer diagnoses by more than half in the U.S. population aged 50 to 79. That would reduce five-year cancer mortality by 39 percent, resulting in a 26 percent reduction in overall cancer-related deaths.
Around 78 percent of all cancer cases in the U.S. are diagnosed after age 55. The Galleri test currently costs $949 and is not covered by private insurance or federal health care systems.
Cancer is the second leading cause of death in the U.S., but many other things can go wrong with your body. This is where a promising but very preliminary at-home micro-sampling blood test developed by Stanford geneticist Michael Snyder comes in. A study in the January issue of Nature Biomedical Engineering reported the results from a multiomics test developed by Snyder and his team, which measures thousands of proteins, lipids, and hormones from two drops of blood.
"I call it 'Theranos that works,'" Snyder quipped in The Stanford Daily, referring to the fraudulent blood-testing startup founded by Elizabeth Holmes. Snyder's test analyzes the molecules in blood samples using a combination of mass spectrometry (which identifies ionized molecules matched with a spectral database) and multiplexed immunoassays (which simultaneously measure molecules that attach to differently colored, antibody-coated magnetic beads).
The first of Snyder's two proof-of-concept experiments measured the changes in 2,000 known metabolites, lipids, and proteins (including cytokines) in the at-home blood microsamples of 28 people before and after they consumed a nutrient shake. The second experiment analyzed metabolite changes in hundreds of molecules in 98 blood microsamples collected for a week from a single person wearing a smartwatch and a continuous glucose monitor.
The clinical goal of frequent at-home blood microsample testing, according to Snyder and his Stanford Department of Genetics colleague Ryan Kellogg, is to "improve diagnostic precision and reduce the time taken to arrive at the optimal treatment." Snyder's research already has been used by the biotech startup Iollo, which has developed a test that measures more than 500 metabolites associated with inflammation and liver, kidney, and immune health as well as aging and longevity.
Soon comprehensive diagnostic testing advances like these will make it easy for people to monitor their health in real time. That information could enable them to ameliorate impending illnesses through preventive measures.
The post The Biomedical Testing Revolution Promises a Theranos That Actually Works appeared first on Reason.com.
The creation of a new Artificial Intelligence Regulatory Agency was widely endorsed today during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on Oversight of A.I.: Rules for Artificial Intelligence. Senators and witnesses cited the Food and Drug Administration (FDA) and the Nuclear Regulatory Commission (NRC) as models for how the new A.I. agency might operate. This is a terrible idea.
The witnesses at the hearing were OpenAI CEO Sam Altman, IBM Chief Privacy & Trust Officer Christina Montgomery, and A.I. researcher-turned-critic Gary Marcus. In response to one senator's suggestion that the NRC might serve as a model for A.I. regulation, Altman naively agreed that the "NRC is a great analogy" for the type of A.I. regulation he favors. Marcus argued that A.I. should be licensed in much the same way that the FDA approves new drugs. Those are great models if your goal is to stymie progress or kill off new technologies.
The NRC has basically regulated the nuclear power industry to near death, and thanks to the FDA it takes 12 to 15 years for a new drug to get from the lab bench to a patient's bedside. Unintended consequences of NRC overregulation include more deaths from pollution and accidents and greater emissions of greenhouse gases than might otherwise have been the case. Drug approvals delayed by the FDA result in higher mortality than would speedy approvals of drugs that later need to be withdrawn.
A more circumspect Montgomery noted that current law covers many areas of concern with respect to the safety and misuse of new A.I. technologies. She specifically noted that companies using A.I. are not off the hook for exercising a duty of care—that is, using reasonable care to avoid causing injury to other people or their property. For example, companies are liable for discrimination in hiring or loan approval whether those decisions are made by an algorithm or a human being. If medical A.I. gave bum treatment advice, the companies that built it could be sued for malpractice.
Subcommittee Chairman Richard Blumenthal (D–Conn.) expressed his concerns about industry concentration, fearing that just a few big incumbent companies would end up developing and controlling A.I. technologies. In fact, Altman noted that very few companies would have the resources to develop and train generative A.I. models like OpenAI's GPT-4 and its successors. He actually said that this could be a regulatory advantage, since the new agency would only have to focus its attention on a few companies. On the other hand, Marcus noted the danger of regulatory capture by a few big companies that could afford to comply with the thickets of new regulations, thus shielding themselves from competition from smaller startups.
A new A.I. agency that takes after the NRC, the FDA, and their overregulation would likely deny us access to the substantial benefits of the technology while providing precious little extra safety.
The post OpenAI Chief Sam Altman Wants an FDA-Style Agency for Artificial Intelligence appeared first on Reason.com.