Prof. Robert Ehrlich's review of the first edition of my book, More Guns, Less Crime, is well written, and it is interesting to learn that he owns a gun despite his concerns about research on the benefits of doing so. Unfortunately, however, his discussion is incomplete and, in places, simply inaccurate. Below are responses to some of the more important claims he makes.
"Lott neglects to tell the reader that all his plots are not the actual FBI data (downloadable from their Web site), but merely his fits to the data."
My book discusses in several places how the diagrams show the change in crime rates before and after right-to-carry laws are adopted, once other factors have been taken into account. The important question is not simply whether crime rates declined, but whether they declined relative to other states that did not adopt right-to-carry laws. The second edition of More Guns, Less Crime, published in 2000, was also clear on this point: its graphs showed the changes in crime relative to states in the same region of the country that did not change their laws.
"Lott has used the data from 10 states in his book."
I used data from the entire United States. The first edition used state-level data from all the states and the District of Columbia, as well as county-level data for the entire country from 1977 through 1992 (and, in some estimates, up to 1994). The second edition of the book not only updated the county and state data through 1996, but also used city-level data for the largest 2,000 cities.
Possibly what Prof. Ehrlich means here is that only 10 states (with a total of 718 counties) adopted right-to-carry laws during the 1977-1992 period. The point of examining all counties in all the states was to make a year-by-year comparison of how the crime rates had changed in the counties with the right-to-carry laws relative to the counties in states without the laws. In the second edition of my book, a total of 20 states, representing 1,432 counties, adopted right-to-carry laws between 1977 and 1996.
"The actual data are much more irregular with lots of ups and downs, and they show nothing special happening at time t=0."
My book reports the year-to-year changes in crime rates (see pages 136-7), and these results are consistent with the before-and-after trends. One of the benefits of examining the change in trends is that there are straightforward statistical tests to see if the change is statistically significant.
"Overall, averaging the 10 states, there is a small but not statistically significant increase in the robbery rate at t=0, certainly not the dramatic decrease Lott's fits show."
Prof. Ehrlich has examined state-level robbery rates for the 10 states that had adopted right-to-carry laws between 1977 and 1992, using data extended up until 1995 for the four years on either side of adoption. He finds that there is no statistically significant change in before-and-after trends. He claims to use data up until 1997, but that is not possible since he limited the sample to only four years after adoption, and the first full year these states had the law in effect was 1992. I have tried to replicate his results, but have been unable to do so: Robbery rates are declining after adoption relative to how they were changing prior to adoption.
Yet even if his data analysis had been correct, his approach suffers from a basic problem: it makes no comparison with what is going on in the states that did not adopt right-to-carry laws. When such a comparison is made, the drop in crime in right-to-carry states is about twice as large and twice as statistically significant. Accounting for other factors (e.g., the arrest rate for robbery) further increases the statistical significance of the drop. Many aspects of what he did are unclear, such as whether he weighted each state equally or weighted the states by population (as is normally done), but neither approach alters the final result.
"What [Lott] does is to fit a smooth curve (actually a parabola) to the data earlier than t=0, and a separate curve to the data later than t=0."
This is only one of several different approaches reported in my book. The first edition also presented actual data on the number of permits issued per county over time for several states where the data were available. The second edition further examined whether differences in right-to-carry laws can affect the number of people who get permits (e.g., the permitting fees, the length of the training requirement, and how many years the law has been in effect), and whether this in turn can explain the changes in crime rates.
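For what it is worth, the piecewise fitting that Prof. Ehrlich describes is straightforward to reproduce. Below is a minimal sketch of fitting separate quadratics ("parabolas") to the years before and after adoption (t = 0). The numbers are synthetic stand-ins, not actual FBI data, and the variable names are my own illustrative choices, not anything from the book:

```python
import numpy as np

# Synthetic illustration only: fit one parabola to the years before a
# law's adoption (t < 0) and a separate parabola to the years after.
rng = np.random.default_rng(0)
years = np.arange(-8, 9)  # years relative to the adoption year t = 0
# Assumed underlying pattern: mild rise before adoption, decline after.
trend = np.where(years < 0, 5.0 + 0.1 * years, 4.5 - 0.2 * years)
rates = trend + rng.normal(0, 0.2, years.size)  # noisy synthetic "crime rates"

pre = years < 0
post = ~pre

# np.polyfit with degree 2 performs a least-squares parabola fit.
coef_pre = np.polyfit(years[pre], rates[pre], 2)
coef_post = np.polyfit(years[post], rates[post], 2)

fit_pre = np.polyval(coef_pre, years[pre])
fit_post = np.polyval(coef_post, years[post])
print("fitted pre-law value at t = -1:", np.polyval(coef_pre, -1))
print("fitted post-law value at t = 0:", np.polyval(coef_post, 0))
```

The point of such a fit is simply to summarize each period's trend with a smooth curve; as the surrounding text notes, it is only one of several ways the data were examined.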
"Given a completely random set of data, Lott's fitting procedure is virtually guaranteed to yield either a drop or a rise near time t=0."
This is not literally true. Besides a flat line, other obvious possibilities include the crime rate first rising and then falling after adoption, or first falling and then rising. And the question is not merely whether there is a change in trends, but whether those changes are statistically significant.
"Similarly, Lott shows the rate of multiple public shootings declining dramatically (by 100 percent) only two years after t=0. But using follow-up data in a more recent paper, Lott shows multiple shootings rising precipitously the year before t=0 and then declining right at t=0."
There are no inconsistencies. This paper, coauthored with Bill Landes, examined whether the results were sensitive to removing observations from the year of adoption, as well as the two years prior to adoption. We found that the results remained essentially unchanged. Readers can check this by looking at footnote 20 in our paper.
"It's difficult enough understanding why the impact of the laws should be so much greater on multiple shootings by crazed killers than ordinary murders (which drop only 10 percent), but figuring out how the laws could work in reverse time on the thinking of these psychos is a real challenge."
It is all too easy to dismiss mass murderers as totally irrational. But individuals who go on shooting sprees are often motivated by goals such as fame. Making it difficult to obtain those goals may discourage some from engaging in their attacks. There is also the issue of stopping attacks that do still occur. Suppose that a right-to-carry law deters crime primarily by raising the probability that a perpetrator will encounter a potential victim who is armed. In a single-victim crime, this probability is likely to be very low. Hence the deterrent effect of the law—though negative—might be relatively small.
Now consider a shooting spree in a public place. In a crowd, the likelihood that one or more potential victims or bystanders is armed would be very large even though the probability that any particular individual is armed is very low. This suggests a testable hypothesis: A right-to-carry law will have a bigger deterrent effect on shooting sprees in public places than on more conventional crimes.
To illustrate, let the probability p that a single individual carries a concealed handgun be 0.05. Assume further that there are 10 individuals in a public place. Then the probability that at least one of them is armed is 1 − (0.95)^10, or about 0.40. Even if p is only 0.025, the probability that at least one of the 10 people will be armed is 1 − (0.975)^10, or about 0.22.
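The arithmetic can be checked directly. This small sketch (the function name is mine, not from the book) computes the chance that at least one of n people is armed, assuming each carries independently with probability p:

```python
# Probability that at least one of n people is armed, assuming each
# carries independently with probability p: 1 - (1 - p)**n.
def prob_at_least_one_armed(p, n):
    return 1 - (1 - p) ** n

print(prob_at_least_one_armed(0.05, 10))   # ~0.40, matching the text
print(prob_at_least_one_armed(0.025, 10))  # ~0.22
```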
Prof. Ehrlich claims that I fail to account for all relevant variables. Sure, there could possibly be still other variables out there, though I doubt it. The data used in the first edition of the book have been made available to academics at 45 different universities. I know of no study that has attempted to account for as many factors as I have, but if Prof. Ehrlich thinks that other factors are important, he is perfectly free to see whether including them alters the results. Other academics have tried different variables–for example, Bruce Benson at Florida State University tried including other variables for private responses to crime and Carl Moody at William and Mary University used additional variables to account for law enforcement–but so far none of these other variables has altered the results.
However, the list of variables that I accounted for is much more extensive than Prof. Ehrlich indicates. Among the factors accounted for in the first and second editions of my book are: the execution rate for the death penalty; conviction rates; prison sentence lengths; number of police officers; different types of policing policies (community policing, problem-oriented policing, "broken window" strategies); hiring rules for police; poverty; unemployment; four different measures of income; many different types of gun control and enforcement; cocaine prices; the most detailed demographic information on the different age, sex, and racial breakdowns of the population used in any study; and many other factors. (Click here for an example of other research that accounts for some of these factors.)
Discovering some left-out variable is more difficult than simply saying that other factors affect the crime rate. This left-out factor must be changing in the different states at the same time that the right-to-carry laws are being adopted. In addition, crime rates decline as more permits are issued in a county, so the left-out variable must similarly be changing over time. Other evidence presented in my book indicates that just as crime rates are declining in counties with right-to-carry laws, adjacent counties on the other side of state borders in states without these laws are experiencing an increase in violent crime. The more similar these adjacent counties are, the larger the spillover. Right-to-carry laws also reduce crime rates more for crimes in which the criminal and the victim come into direct contact with each other than for crimes where there is no such contact. To alter the results, a left-out factor would have to vary systematically so as to coincide with all these different findings.
One of the reasons I graph the before-and-after trends as well as the year-to-year variations in crime rates is to allow readers to judge for themselves whether the adoption of right-to-carry laws coincided with changes in crime rates. For a general audience, I thought that this graphical approach was the most straightforward.
As to the appropriateness of a particular statistical test, the answer depends upon what question one is asking. The one test that Prof. Ehrlich questions looked at whether there was a statistically significant change in the slopes in crime rates before and after the laws are adopted. For that question, the F-test that I used is the appropriate test.
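To illustrate the kind of test at issue, here is a minimal sketch of comparing a restricted model (one common slope) against an unrestricted model (separate pre- and post-adoption slopes) and forming an F statistic. The data are synthetic and the specification is a simplified stand-in, not the book's actual regression:

```python
import numpy as np

# Chow-style F-test sketch: does the slope of a trend change at t = 0?
# All numbers below are synthetic illustrations.
rng = np.random.default_rng(1)
t = np.arange(-8, 9).astype(float)  # years relative to adoption
y = np.where(t < 0, 0.3 * t, -0.4 * t) + rng.normal(0, 0.5, t.size)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

post = (t >= 0).astype(float)
# Restricted: single intercept and slope for the whole period.
X_restricted = np.column_stack([np.ones_like(t), t])
# Unrestricted: intercept and slope are allowed to shift after adoption.
X_unrestricted = np.column_stack([np.ones_like(t), t, post, post * t])

rss_r = rss(X_restricted, y)
rss_u = rss(X_unrestricted, y)
q = 2                                   # restrictions imposed (shift terms)
df = t.size - X_unrestricted.shape[1]   # residual degrees of freedom
F = ((rss_r - rss_u) / q) / (rss_u / df)
print("F statistic:", F)
```

A large F statistic relative to the F(q, df) critical value indicates that allowing the trend to change at adoption significantly improves the fit, which is the question the text says the test was addressing.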
Research by Florenz Plassman and Nicolaus Tideman that is forthcoming in the October 2001 issue of the Journal of Law and Economics breaks down crime data by each state and by individual years before and after the adoption of the right-to-carry law. They find that for all 10 states that adopted such laws between 1977 and 1992, murder, rape, and robbery rates fell after adoption. If Prof. Ehrlich were to identify the statistical test which he says shows a significant turning point for robbery before the adoption of right-to-carry laws, I would be happy to comment on it.
It is flattering that my research is the first topic that Prof. Ehrlich discusses in his book, Nine Crazy Ideas in Science. My research, however, is not alone in studying this issue. A large number of academics have examined the data. While a few academic articles have been critical of some of the methodology, not even these critics have found a bad effect from right-to-carry laws. In fact, the vast majority of academics have found benefits as large or larger than the ones I report.
What is also interesting is how little criticism there is of the other gun control topics that my book addressed. For example, no academics have found significant evidence that waiting periods or background checks reduce violent crime rates. Unfortunately, what I have found is that many of these gun control laws actually lead to more crime and more deaths. (See this study of so-called safe storage gun laws.)
In his book, Prof. Ehrlich awards "cuckoos" to the ideas he discusses, with one cuckoo meaning "Why not?" and four cuckoos meaning "certainly false." He gives my work three cuckoos, but there are a lot of academics who must then be in the same boat as I am. More important, his criticisms are based upon either an incomplete or inaccurate reading of my work.