A new study in the American Journal of Public Health purports to show that a 1995 tightening in Connecticut's gun permit laws led to a 40 percent reduction in gun homicides over the next decade—nearly 300 fewer gun homicides than their counterfactual construction of a Connecticut without those laws would have had. Despite its immediate adoption by gun control advocates, the study may not actually deliver what's advertised.
If you are eager to tie together various gun-violence-related items in one messy stew of proposed restrictions on gun possession rights, do note that Charleston mass murderer Dylann Roof would not have been prevented from obtaining a weapon by Connecticut-style background check laws (the details of which are given below).
The study was conducted by researchers Kara E. Rudolph, Elizabeth A. Stuart, Jon S. Vernick, and Daniel W. Webster under the aegis of the Johns Hopkins Center for Gun Policy and Research, part of the Johns Hopkins Bloomberg School of Public Health. A press release from the Bloomberg School sums up the findings of the study and the nature of the law whose effects it purports to chart:
[the study] compared Connecticut's homicide rates during the 10 years following the law's implementation to the rates that would have been expected had the law not been implemented. The large drop in homicides was found only in firearm-related killings, not in homicides by other means, as would be expected if the law drove the reduction…
The Connecticut law requires all prospective handgun purchasers to apply for a permit in person with the local police regardless of whether the seller of the handgun is a licensed dealer or private seller. It also raised the handgun purchasing age from 18 to 21 years and required prospective purchasers to complete at least eight hours of approved handgun safety training.
"Permit-to-purchase laws, which require prospective handgun purchasers to first obtain a license from the police after passing a comprehensive background check, appear to reduce the availability of handguns to criminals and other people who are not legally permitted to buy guns," says study author Daniel Webster….
Note that "appear to reduce" from Webster, a matter we'll revisit later.
Curiously for a paper that purports to be a work of objective social science and not a pure tool of politics, the press release announcing the study wraps up like this:
Public opinion survey data from Webster and his colleagues recently published in Preventive Medicine show that the majority of Americans (72 percent) and gun owners (59 percent) support requiring people to obtain a license from a local law enforcement agency before buying a handgun to verify their identity and ensure they are not legally prohibited from having a gun.
Weird, huh? Science that its institutional sponsors believe is somehow made more important/valid by public opinion regarding aspects of its conclusion? (Here's a site, by the way, dedicated to Connecticut's permit process.)
Gun rights enemies in the media leapt upon the study eagerly, painting it as a killing blow to supporters of relatively unrestricted Second Amendment rights; it was dubbed "The NRA's worst nightmare" by Salon.
Politicians have also used the study's release as a hook to introduce a new bill establishing a federal grant program for states that institute their own "permit to purchase" gun regulations. Such regulations would have to emulate Connecticut's allegedly powerfully death-reducing ones: applying to all sales (not just those from federally licensed gun dealers), barring sales to under-21-year-olds, and requiring fingerprinting, background checks, and licensed permission from a local law enforcement agency.
Does the study prove what it claims to prove?
Given the amazingly complicated set of causes and incentives feeding into any human decision—and every gun homicide is the result of a human decision—establishing that the change in background check laws actually caused a reduction in gun homicides, even in that one Connecticut case (much less concluding that such laws can be relied on to have that effect in other places and times), is likely beyond any final authoritative conclusion via the usual methods of the social sciences.
There's a whole lot to unwind when it comes to what we reliably know about how the presence of guns or gun laws affects public safety—it's not an area where the "science is settled" by any means—and most of it should be of near-zero policy relevance for anyone who respects either constitutional rights or the right to self-defense.
Other people's misuse of a tool in either crimes or accidents has no clear bearing on our legal attitudes toward those tools. All "facts of reality" are hugely underdetermined by any singular supposed cause. And how many people would have been killed by guns in Connecticut from 1995 to 2005 absent the gun law change isn't even a fact; gussied up in the methods of the social sciences as it is in this paper, it is a guess.
The researchers do seem to be trying hard, but they face an obvious problem. We can see what happened in real Connecticut after these laws passed: the murder rate per 100,000 population started dropping significantly after 1996. Does that settle it? No, because very similar gun murder rate drops happened in most other states, states that did not institute such tougher background laws. The study's own major chart shows all other control states largely moving in lockstep with Connecticut on murder rates.
Since they can't learn anything meaningful by comparing what happened in Connecticut to what happened in most other states, the researchers create a "synthetic Connecticut," which they posit shows what would have happened in Connecticut minus the new laws.
The "synthetic Connecticut" to which they compare the real Connecticut, and which fares far worse in gun homicides minus those supposedly life-saving laws, is derived from a "weighted combination of states that exhibits homicide trends most similar to Connecticut's prior to the law's implementation." (It ends up being 72 percent Rhode Island, with a mix of Maryland and California tossed in.)
The authors have an impressive list of covariates considered to ensure other things are kept as equal as they can be, including "Population size, population density (log-transformed), proportion 0-18 years, proportion 15-24 years, proportion black (log-transformed), proportion Hispanic (log-transformed), proportion ≥ 16 years living at or below poverty, and income inequality as measured by the Gini coefficient are from the U.S. Census Bureau."
Granting that neither I nor anyone else I know has done extensive social science research on the results of Connecticut's gun change ourselves, let's walk through five potential problems with leaping from this study's results to the conclusion that instituting Connecticut-like laws nationally is a public safety imperative, one we can be confident will reduce gun homicides by 40 percent.
1) How do we know that synthetic-Connecticut really is a good marker for real Connecticut? The argument seems to rest almost entirely on the belief that "past performance guarantees future results." Without saying anything about why it was so or should be presumed to always be so, the authors note that in the past Rhode Island's gun homicide levels matched Connecticut's very closely.
I ran the study by Aaron Brown, a math whiz who currently works as chief risk manager at AQR Capital Management. He pointed out to me that, especially given Connecticut's continued rough matching of national gun homicide trends before and after the law passed, the real phenomenon to be explained here is not why Connecticut's gun homicides went down, but why Rhode Island's went up unusually for part of the period.
As Brown wrote, the "claim that Connecticut's law reduced gun homicides is not based on any fall in Connecticut's rate (which merely mirrors the national trend) but on an increase in gun homicide rate in Rhode Island (to be precise, 0.724*Rhode Island + 0.147*Maryland + 0.087*Nevada + 0.036*California + 0.005* New Hampshire). Given that Rhode Island is the outlier, it makes more sense to look for things in Rhode Island that increased gun homicide rates in 1998 than to ascribe the entire effect to a change in Connecticut in 1995."
The study says that "The third assumption [behind their results] is that there are no unmeasured confounders during the post-law period. This is an untestable assumption given the absence of randomization of PTP law implementation across states." That seems to be an admission that there was no attempt to think of any reason that Rhode Island should be an outlier from national trends, except that of course it must still be matching what would have happened in Connecticut absent the background check laws, because it used to.
Given Connecticut's continued matching of national trends over the studied period, I just don't see that as so clearly the real explanation as to rest the study's bold conclusion on it. (There are also some law enforcement related reasons that Connecticut's real homicide rate might have gone down as well, unconsidered in this paper.)
The raw numbers involved are always very small, by the way: so small that for many of the years the Centers for Disease Control and Prevention (CDC), the source of the data the study relies on, warns that "Rates based on 20 or fewer deaths may be unstable. Use with caution." In several years of the study period, Rhode Island's total homicide deaths are fewer than 20. The study includes no such note of caution that I saw.
The rise in murder rates in synthetic-Connecticut begins in 1997; if you look at Rhode Island, its main component, we see in the raw CDC numbers (sent to me by CDC, numbers this study does not provide, merely showing a graphic of the rates) that the actual raw number of "extra" murders from that year through 2005 in Rhode Island amount to 52. (This number is derived by subtracting 14, 1997's raw number, from the actual number for every year from 1997-2005 in which the raw number was higher.)
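The subtraction described above can be stated precisely. A minimal sketch of that arithmetic, using made-up yearly counts purely for illustration (the only raw figure quoted in this post is Rhode Island's 14 deaths in 1997; the 52 total comes from CDC numbers not reproduced here):

```python
def extra_deaths(counts_by_year: dict[int, int], baseline_year: int) -> int:
    """Sum, over years after the baseline, of how far each year's raw
    death count exceeds the baseline year's count. Years at or below
    the baseline contribute nothing."""
    baseline = counts_by_year[baseline_year]
    return sum(
        max(0, count - baseline)
        for year, count in counts_by_year.items()
        if year > baseline_year
    )

# Hypothetical raw counts purely to illustrate the calculation;
# only the 1997 figure of 14 is quoted in this post.
example = {1997: 14, 1998: 20, 1999: 12, 2000: 17}
print(extra_deaths(example, 1997))  # 6 + 0 + 3 = 9
```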
Near as I can tell, then, the entire edifice of their result rests on the fact that in Rhode Island, over the course of eight years, 52 people made the decision to murder; the study presumes that, because of past patterns, a proportional number of people in Connecticut would also, for some reason, have made the decision to murder over those years minus the laws. You can decide whether that seems irrefutably true to you, minus the apparatus of the social sciences.
2) To return to the "appear" mentioned above in "Permit-to-purchase laws…appear to reduce the availability of handguns to criminals," given that we are assuming that the law is having all sorts of powerful effects on behavior and outcomes, don't we need to know something about how extensively or effectively the laws are being enforced, and have some decent data or reasonable guesses to be sure that the law's existence almost certainly is preventing many, many gun purchases by murderers that would have occurred without the law?
The study doesn't try to answer that question. One of the authors, Dr. Daniel Webster of the Center for Gun Policy and Research, Bloomberg School of Public Health, Johns Hopkins University, said in an email to me that "Virtually no studies of gun control law take enforcement into account because data are lacking and we don't really know the degree to which deterrence (people not wanting to violate the law) is a function of levels of enforcement." Noted, and undoubtedly true. But I can't help but think that unknown is of huge significance to how authoritatively the results of a study like this should be taken.
3) The authors argue that their gun-related cause leads to a gun-related effect by noting that the effects on homicide rates they allege to have found are almost all in gun homicides, not in other homicides. Curiously to me, the synthetic-Connecticut used to compare the non-gun homicides is very different from the mostly-Rhode Island one used for gun homicides; it is mostly New Hampshire. That comparison seems to be apples-to-oranges, and one wonders what the results would have been if they'd used the same synthetic Connecticut for both comparisons.
4) The study traces changes from 1995 to 2005; when I asked the CDC to send me the raw data numbers that the study relied on, the CDC warned me that "the coding of mortality data changed significantly in 1999, so you may not be able to compare number of deaths and death rates from 1998 and before with data from 1999 and after." [UPDATE: In an email sent after this post went up, the CDC says that "the change in…coding has almost no effect on homicide or suicide unlike other causes of death." So this point seems to be of little relevance.]
I did not see anywhere in the study an acknowledgement that the source of the numbers considers comparisons across 1999 to be questionable. Webster said in an email on that point that "even if there was a change in classifying homicides in 1999, why would this have affected Connecticut dramatically different than in other states?" It might be worth noting that real Connecticut's gun homicide rate had already fallen 29 percent in the mere two years prior to the 1995 law, from 4.57 to 3.23 per 100,000 (as with all rates given in the study and this post).
5) The study stops looking for effects 10 years after the law went into effect. Why might that be? Six of the eight years since 2005 for which CDC had data show Connecticut with a higher real gun homicide rate than 2005, the year the authors chose to stop. Had they gone out just one more year, the reduction in rates in real Connecticut from 1995 to 2006 would have been cut to 12 percent.
From 2005 to 2006, Connecticut's gun homicide rate went up 38 percent, from 2.05 to 2.84. Rhode Island—again, the bulk of their synthetic Connecticut—saw its rate go down 5 percent in that year, from 1.83 to 1.73.
If you look at the CDC gun homicide data from 2005 to 2012, you see Connecticut's rate going up 66 percent, from 2.05 to 3.41, and Rhode Island's going down 20 percent, from 1.83 to 1.45.
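Those rate changes are straightforward percent-change arithmetic. A quick sketch using the CDC per-100,000 rates quoted in this post (percentages truncated toward zero, matching how the figures above are stated):

```python
def pct_change(old: float, new: float) -> int:
    """Percent change from old to new, truncated toward zero,
    matching how the rate changes in this post are quoted."""
    return int((new - old) / old * 100)

# CDC gun homicide rates per 100,000 quoted in this post
print(pct_change(2.05, 2.84))  # Connecticut, 2005 -> 2006: 38
print(pct_change(2.05, 3.41))  # Connecticut, 2005 -> 2012: 66
print(pct_change(1.83, 1.45))  # Rhode Island, 2005 -> 2012: -20
```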
Recall that they got their effect from the fact that up until 2005 Connecticut's rate was going down while Rhode Island (mostly) saw theirs going up, generating all those extra murders that they assume Connecticut would have had if not for the background check law.
That all makes it seem like stopping in 2005 might be classic cherry-picking to make their results seem stronger.
The authors' stated reason in the study itself for stopping after 10 years is "We conclude the post-law period in 2005 to limit extrapolation in our predictions of the counterfactual to 10 years, as has been done previously."
Previously in some other important study related to gun law changes and homicides? No. The footnote on that statement is to "Abadie A, Diamond A, Hainmueller J. Synthetic control methods for comparative case studies: Estimating the effect of California's tobacco control program. J Am Stat Assoc. 2010;105:493–505." While I'm not a professional social scientist, that reasoning seems unconvincing, especially when you see how much the choice of cutoff year changes the outcome.
Webster repeated in an email, "As we explain in the article, we limit the analyses to the first 10 years after the law was implemented (1995) as has been recommended for research with synthetic control methods. Estimation of policy effects is based principally on a forecast of trends for the post-law period and the further you get out from the new policy, the less certain that forecast becomes." The tobacco study in question whose methods they rely on for the 10-year cutoff did so because of a change in the laws it was studying, not as a general principle of synthetic forecast social science.
While what the researchers did seems impeccable by the standards of their professional game, this layman does wonder how the decisions of a few dozen people over a decade in Rhode Island to kill someone with a gun can be attributed so confidently to a change in gun laws in Connecticut.
Even if the results of the study are true and robust, and Connecticut-style permit and background check laws apparently do keep a significant number of guns out of the hands of people who might kill others with them, they also keep guns out of the hands of far, far, far more people who would merely use them for pleasure or security the same way other Americans do. The nexus between prohibited classes under federal weapons law and people who will illegitimately harm others with weapons is very, very tenuous. And it might also be valuable to look at what crime rates in Connecticut relative to the rest of the country did in areas other than gun homicides if overall social policy decisions are to be made based on this analysis; Lott offers some charts on aspects of that question.
Dr. Webster was very helpful in answering a long series of questions that likely seemed like badgering, including helping me think through how some criticisms of the study that seemed cogent to me at first were 100 percent wrong.
He seemed suspicious of my motives. After I ran by him a series of potential issues with the study generated by either me or others I had read or communicated with, he wrote "No doubt, you're trying to come up with some explanation for scientific findings [that] don't fit your views."
The Johns Hopkins Center for Gun Policy and Research, part of the Johns Hopkins Bloomberg School of Public Health, from which the study came, is financed (see the name) by former New York mayor and busybody moneybags Michael Bloomberg, for whom restricting Second Amendment rights is a holy cause.
The study itself states "The study goal is to estimate the effect of Connecticut's PTP [permit to purchase] law on homicides in Connecticut—not to extrapolate the effect of Connecticut's law on homicides in an average control state." Regardless of that limited goal, and regardless of whether any of the objections to the study's conclusions discussed here are valid, to a certain class of consumers of news and commentary, thanks to this study and the press it received, it is already a settled fact that "science has proven that tougher background checks reduce gun homicides by 40 percent."
No doubt raised about the study, either here or elsewhere, will likely dislodge that "knowledge." In which case, the study has achieved its goal whether or not its results are replicable or rigorously proved.