A REASON ONLINE debate
More guns mean less crime. That's the essential thesis of John R. Lott Jr.'s path-breaking book, appropriately titled More Guns, Less Crime: Understanding Crime and Gun Control Laws (University of Chicago Press, 2000), which looked at the relationship between liberalized gun laws and criminal activity. In both the original 1998 and revised 2000 editions, Lott, a senior research scholar at Yale Law School, used national gun and crime data to perform an unprecedentedly thorough study of the issue. On the face of it, his claim makes sense: If criminals assume that potential victims may be armed, they'll be less likely to act. (See "Cold Comfort," January 2000.)
Not so fast, says George Mason University physicist Robert Ehrlich. In his new book, Nine Crazy Ideas in Science (A Few May Even Be True) (Princeton University Press), Ehrlich argues that the data are in fact inconclusive and that Lott is massaging the results to fit his theory. Ehrlich, a gun owner himself, concludes that liberalized gun laws have had no appreciable effect one way or another.
So which is it? We invited Ehrlich and Lott to debate the issue on Reason Online from May 21-24. Each was allowed to make two contributions and, after the initial salvo, each had to respond within hours of the other's posting. Readers interested in more information can visit the debate, which includes links to many of the sources mentioned below, including both Ehrlich's and Lott's books.
Robert Ehrlich
More Guns Mean More Guns
Why John Lott is wrong
John Lott's book, More Guns, Less Crime contains many points with which I agree. For example, I believe that many criminals are leery of approaching potential victims who may be armed—an idea at the core of his deterrence theory that guns help to prevent crime. I also believe that violent criminals are not typical citizens, and that the possession of a gun by a law-abiding citizen is unlikely to turn him into a crazed killer. Additionally, Lott has a point when he speaks of the media's overreporting of gun violence by and against kids and the corresponding underreporting of the defensive use of guns to prevent crime.
As a gun owner myself, I was quite prepared to accept Lott's thesis that the positive deterrent effect of guns exceeds their harmful effects on society, but as a scientist I have to be guided by what the data actually show, and Lott simply hasn't made his case. Here's why:
Lott misrepresents the data. His main argument that guns reduce crime is based on the impact on various violent crime rates of "concealed carry laws," which allow legal gun owners to carry concealed weapons. Since these laws were passed at different dates in different states, he looks at how the crime rates change at t=0, the date of the law's passage in each state. Lott's book displays a series of very impressive-looking graphs that show dramatic and in some cases immediate drops in every category of violent crime at time t=0. The impact on robberies is particularly impressive, where a steeply rising robbery rate suddenly turns into a steeply falling rate right at t=0—almost like the two sides of a church steeple. As they say, when something looks too good to be true, it probably is. Lott neglects to tell the reader that all his plots are not the actual FBI data (downloadable from their Web site), but merely his fits to the data.
The actual data are much more irregular with lots of ups and downs, and they show nothing special happening at time t=0. Lott has used the data from 10 states in his book. When we look at changes in the robbery rate state by state, only two of the states (West Virginia and Georgia) show decreases at t=0, while the other eight show increases. Overall, averaging the 10 states, there is a small but not statistically significant increase in the robbery rate at t=0, certainly not the dramatic decrease Lott's fits show. In fact, Lott's method of doing his fits is virtually guaranteed to produce an "interesting" result at time t=0. What he does is to fit a smooth curve (actually a parabola) to the data earlier than t=0, and a separate curve to the data later than t=0.
Given a completely random set of data, Lott's fitting procedure is virtually guaranteed to yield either a drop or a rise near time t=0. Only if the data just happened to lie on a single parabola on both sides of t=0 would the fits show nothing special at that time. Since random data would show a drop or a rise equally often at t=0, we have a 50 percent chance of finding a drop—not a very good argument for the drop being real. The fact that all categories of violent crime (murder, rape, assault, robbery) show drops is also not particularly surprising, since the causes of violent crime (whatever they are) probably affect the rates in all the separate categories. Similarly, it is no more mysterious that when the overall stock market rises or falls dramatically the individual sectors (industrials, utilities, etc.) are more likely than not to move in the same direction.
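Ehrlich's claim about fitting separate curves on either side of an arbitrary breakpoint can be checked with a simulation. The sketch below is my own illustration in Python, not code from either book, and `breakpoint_gap` is a hypothetical helper: it fits two quadratics to random-walk data and measures the apparent jump at the break.

```python
# Illustration of Ehrlich's argument (my own code, not from either book).
# Fit one parabola to "crime rates" before an arbitrary breakpoint and a
# separate parabola after it; even pure noise then shows a jump at the break.
import numpy as np

def breakpoint_gap(rates, t0):
    """Difference between the post-break and pre-break fitted values at t0."""
    t = np.arange(len(rates))
    pre = np.polyfit(t[t < t0], rates[t < t0], 2)     # parabola before t0
    post = np.polyfit(t[t >= t0], rates[t >= t0], 2)  # parabola from t0 on
    return np.polyval(post, t0) - np.polyval(pre, t0)

rng = np.random.default_rng(0)
# 1,000 random-walk series with nothing special happening at t0 = 10.
gaps = [breakpoint_gap(np.cumsum(rng.normal(size=21)), 10) for _ in range(1000)]
frac_drops = np.mean([g < 0 for g in gaps])  # apparent "drops" at the break
print(round(frac_drops, 2))  # roughly half the series drop, by chance alone
```

By symmetry, pure noise produces an apparent drop at the breakpoint about half the time, which is the 50 percent figure Ehrlich cites.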
Lott's results are not consistent. Taking Lott's fits at face value, we find they give inconsistent results. For example, he shows murders, rapes, and robberies each declining sharply and immediately at t=0, the year of passage of the laws, but the aggravated assault rate rises slightly and doesn't start its descent until three years after the law's passage. Presumably, the same sorts of folks are committing murders and assaults, so this difference is very puzzling. Similarly, Lott shows the rate of multiple public shootings declining dramatically (by 100 percent) only two years after t=0. But using follow-up data in a more recent paper, Lott shows multiple shootings rising precipitously the year before t=0 and then declining right at t=0. It's difficult enough to understand why the impact of the laws should be so much greater on multiple shootings by crazed killers than ordinary murders (which drop only 10 percent), but figuring out how the laws could work in reverse time on the thinking of these psychos is a real challenge.
Lott's results cannot account for all the relevant variables. Recognizing that violent crime rates can depend on all sorts of factors aside from the passage of concealed carry laws, Lott includes many variables when he runs his multiple linear regressions to disentangle the impact of each factor. Many of these variables, such as arrest rates, percentage of African Americans, and population density, account for a far greater percentage of the variation in violent crime than the mere 1 percent he attributes to passage of the laws. However, with such a small dependence on the one factor he is looking for, only if Lott has included all the relevant variables that could affect the rate of violent crime can he hope to see the residual amount due to the effect of that one factor. In answer to this criticism, Lott says OK—tell me what variable I've left out and I'll include it. But the list of plausible variables that could affect violent crime rates over time is virtually endless.
Here, for example, are 14 that Lott didn't include: (1) amount of alcohol sold, (2) price of alcohol, (3) amount of drugs sold, (4) price of drugs, (5) number of police on the beat, (6) number of police brutality complaints, (7) average summer temperature, (8) number of convicted felons on the streets, (9) average age of convicted felons on the streets, (10) percentage of teenagers living in two-parent households, (11) high school dropout rate, (12) dollars spent on crime prevention programs, (13) minimum wage rate, (14) amount of media violence. I'm sure readers could come up with many more plausible factors, any one of which could mask the true dependence on the concealed carry laws.
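The omitted-variable worry behind this list can be made concrete with a toy regression. This is my own illustration, not either author's analysis: the confounder `z` and every coefficient are invented for the example. When a factor that moves with both the law and crime is left out, the estimated law effect absorbs it.

```python
# Toy omitted-variable example (my own; the confounder z and all
# coefficients are invented). The true effect of the law on crime is -0.1,
# but omitting z, which moves with both the law and crime, masks it.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
law = rng.integers(0, 2, n).astype(float)          # law in effect? (0/1)
z = 0.8 * law + rng.normal(size=n)                 # omitted confounder
crime = -0.1 * law + 0.5 * z + rng.normal(size=n)  # true law effect: -0.1

def ols(X, y):
    """Least-squares coefficients for the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)
b_full = ols(np.column_stack([const, law, z]), crime)  # controls for z
b_short = ols(np.column_stack([const, law]), crime)    # omits z
# b_full[1] comes out near -0.1; b_short[1] comes out near +0.3
# (= -0.1 + 0.5 * 0.8): the omitted factor flips the estimated sign.
```

The same arithmetic underlies Ehrlich's point: with a true effect of only about 1 percent of the variation, even a modest confounder can swamp or reverse it.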
Lott doesn't properly compute statistical significance. Another very serious problem with Lott's method is how he calculates the statistical significance of his results. He essentially asks, What is the probability of getting the observed variation of the crime rate on either side of t=0 based on changes in the various socio-demographic variables and random variations? If that computed probability is very small, he regards his hypothesis that the concealed carry laws made the difference as being proven.
But that's not right. He needs to look at the probability of a change in the crime rate for years t= -3, -2, -1, 0, 1, 2, 3, etc. Only if the probability is very much less for year zero than the other years can he consider his results meaningful. It seems very likely, however, that Lott would find similarly low probabilities for all these other years, because only if the violent crime rate were static over time would there be no significant variation on either side of year t=0, or any other given year. In fact, John Donahue, a law professor at Stanford, analyzed Lott's data and found that the most significant turning point for the robbery rates occurs before t=0.
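The placebo test described here can be sketched as follows. This is my own illustration, not Donahue's or Lott's actual procedure; `chow_f` is a hypothetical helper computing a Chow-style F statistic for a trend break. The idea is to compute the same change-in-slope statistic at every candidate year, not just the year of passage.

```python
# Sketch of the placebo test (my own code; chow_f is a hypothetical helper
# computing a Chow-style F statistic for a trend break after index k).
import numpy as np

def chow_f(y, k):
    t = np.arange(len(y))
    def sse(ts, ys):  # sum of squared errors of a straight-line fit
        return float(np.sum((ys - np.polyval(np.polyfit(ts, ys, 1), ts)) ** 2))
    pooled = sse(t, y)                             # one line for all years
    split = sse(t[:k], y[:k]) + sse(t[k:], y[k:])  # separate lines per segment
    p = 2                                          # parameters per line
    return ((pooled - split) / p) / (split / (len(y) - 2 * p))

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=24))       # noisy trend, with no law anywhere
stats = {k: chow_f(y, k) for k in range(4, 21)}
best = max(stats, key=stats.get)         # year with the most "significant" break
```

If the largest statistic routinely lands somewhere other than the law year, a "significant" break at t=0 proves little on its own.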
Lott has correctly observed that, by passing concealed carry laws in various states in various years, the U.S. has been in effect conducting an extremely interesting social experiment. That experiment, in principle, can give us an empirical answer to the relationship between easing restrictions on gun-carrying permits and crime. However, his one-sided analysis of the data inspires little confidence that we can count on him to tell us the true results of this experiment. From all indications it seems that the concealed carry laws probably have had almost no effect, one way or the other.
John R. Lott Jr.
Less gun control means less violent crime
Robert Ehrlich's review of the first edition of my book, More Guns, Less Crime, is well-written, and it is interesting to know that he owns a gun despite his concerns about research on the benefits of doing so. Unfortunately, however, his discussion is incomplete and simply inaccurate. Below are responses to the more important claims he makes.
"Lott neglects to tell the reader that all his plots are not the actual FBI data…but merely his fits to the data." There are several places in my book that discuss how the diagrams show how crime rates change before and after right-to-carry laws are adopted once other factors have been taken into account. It is important to distinguish not just whether there was a decline in crime rates, but whether there was a decline relative to other states that did not adopt the right-to-carry laws. The second edition of More Guns, Less Crime, which was published in 2000, was also clear on this point, and its graphs showed the changes in crime relative to other states that did not change their laws and were in the same region of the country.
"Lott has used the data from 10 states in his book." I used data from the entire United States. The first edition used state-level data from all the states and the District of Columbia, as well as county-level data for the entire country from 1977 through 1992 (and, in some estimates, up to 1994). The second edition of the book not only updated the county and state data through 1996, but also used city-level data for the largest 2,000 cities. Possibly what Ehrlich means here is that only 10 states (with a total of 718 counties) adopted right-to-carry laws during the 1977-1992 period. The point of examining all counties in all the states was to make a year-by-year comparison of how the crime rates had changed in the counties with the right-to-carry laws relative to the counties in states without the laws. In the second edition of my book, a total of 20 states, representing 1,432 counties, adopted right-to-carry laws between 1977 and 1996.
"The actual data are much more irregular with lots of ups and downs, and they show nothing special happening at time t=0." My book reports the year-to-year changes in crime rates, and these results are consistent with the before-and-after trends. One of the benefits of examining the change in trends is that there are straightforward statistical tests to see if the change is statistically significant.
"Overall, averaging the 10 states, there is a small but not statistically significant increase in the robbery rate at t=0, certainly not the dramatic decrease Lott's fits show." Ehrlich has examined state-level robbery rates for the 10 states that had adopted right-to-carry laws between 1977 and 1992, using data extended up until 1995 for the four years on either side of adoption. He finds that there is no statistically significant change in before-and-after trends. He claims to use data up until 1997, but that is not possible since he limited the sample to only four years after adoption and the first full year these states had the law in effect was 1992. I have tried to replicate his results, but have been unable to do so: Robbery rates are declining after adoption relative to how they were changing prior to adoption.
Yet even if his data analysis had been correct, his approach has a lot of problems. The main difficulty is that there is no comparison of what is going on in the states that do not adopt right-to-carry laws. When such a comparison is made, the drop in crime is about twice as large in right-to-carry states and twice as statistically significant. Accounting for other factors (e.g., the arrest rate for robbery) also increases the statistical significance of the drop. Many aspects of what he did are unclear, such as whether he weighted each state equally or weighted them by population (as is normally done). But neither approach altered the final result.
"What [Lott] does is to fit a smooth curve (actually a parabola) to the data earlier than t=0, and a separate curve to the data later than t=0." This is only one of several different approaches reported in my book. The first edition also presented actual data on the number of permits issued per county over time for several states where the data were available. The second edition further examined whether differences in right-to-carry laws can affect the number of people who get permits (e.g., the permitting fees, the length of the training requirement, and how many years the law has been in effect), and whether this in turn can explain the changes in crime rates.
"Given a completely random set of data, Lott's fitting procedure is virtually guaranteed to yield either a drop or a rise near time t=0." This is not literally true. Besides a flat line, other possibilities very obviously include the crime rate first rising and then falling after adoption, or falling and then rising. The question is not only whether there is a change in trends, but whether those changes are statistically significant.
"Similarly, Lott shows the rate of multiple public shootings declining dramatically (by 100 percent) only two years after t=0. But using follow-up data in a more recent paper, Lott shows multiple shootings rising precipitously the year before t=0 and then declining right at t=0." There are no inconsistencies. This paper, co-authored with William M. Landes, examined whether the results were sensitive to removing observations from the year of adoption, as well as the two years prior to adoption. We found that the results remained essentially unchanged.
"It's difficult enough to understand why the impact of the laws should be so much greater on multiple shootings by crazed killers than ordinary murders (which drop only 10 percent), but figuring out how the laws could work in reverse time on the thinking of these psychos is a real challenge." It is all too easy to dismiss mass murderers as totally irrational. But individuals who go on shooting sprees are often motivated by goals such as fame. Making it difficult to obtain those goals may discourage some from engaging in their attacks. There is also the issue of stopping attacks that do still occur. Suppose that a right-to-carry law deters crime primarily by raising the probability that a perpetrator will encounter a potential victim who is armed. In a single-victim crime, this probability is likely to be very low. Hence the deterrent effect of the law, though negative, might be relatively small.
Now consider a shooting spree in a public place. In a crowd, the likelihood that one or more potential victims or bystanders are armed would be very large even though the probability that any particular individual is armed is very low. This suggests a testable hypothesis: A right-to-carry law will have a bigger deterrent effect on shooting sprees in public places than on more conventional crimes.
To illustrate, let the probability p that a single individual carries a concealed handgun be .05. Assume further that there are 10 individuals in a public place. Then the probability that at least one of them is armed is about .40 (= 1 - (.95)^10). Even if p is only .025, the probability that at least one of 10 people will be armed is .22 (= 1 - (.975)^10).
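The arithmetic here follows from the standard formula for independent events: the chance that none of n people is armed is (1 - p)^n, so the chance that at least one is armed is the complement. A short check, in my own Python (the function name is invented for the example):

```python
# Checking the probabilities in the text: if each of n people independently
# carries with probability p, then P(at least one armed) = 1 - (1 - p)**n.
def p_at_least_one_armed(p, n=10):
    return 1 - (1 - p) ** n

print(round(p_at_least_one_armed(0.05), 2))   # prints 0.4
print(round(p_at_least_one_armed(0.025), 2))  # prints 0.22
```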
Ehrlich claims that I fail to account for all relevant variables. Sure, there could possibly be still other variables out there, though I doubt it. The data used in the first edition of the book have been made available to academics at 45 different universities. I know of no study that has attempted to account for as many factors as I have, but if Ehrlich thinks that other factors are important, he is perfectly free to see whether including them alters the results. Other academics have tried different variables—for example, Bruce Benson at Florida State University tried including other variables for private responses to crime, and Carl Moody at the College of William & Mary used additional variables to account for law enforcement—but so far none of these other variables has altered the results.
However, the list of variables that I accounted for is much more extensive than Ehrlich indicates. Among the factors that I accounted for in the first and second editions of my book are: the execution rate for the death penalty; conviction rates; prison sentence lengths; number of police officers; different types of policing policies (community policing, problem-oriented policing, "broken window" strategies); hiring rules for police; poverty; unemployment; four different measures of income; many different types of gun control and enforcement; cocaine prices; the most detailed demographic information on the different age, sex, and racial breakdowns of the population used in any study; and many other factors.
Discovering some left-out variable is more difficult than simply saying that other factors affect the crime rate. This left-out factor must be changing in the different states at the same time that the right-to-carry laws are being adopted. In addition, crime rates are declining as more permits are issued in a county, so the left-out variable must similarly be changing over time. Other evidence that I presented in my book indicates that just as crime rates are declining in counties with right-to-carry laws, adjacent counties on the other side of state borders in states without these laws are experiencing an increase in violent crime. The more similar these adjacent counties, the larger the spillover. Right-to-carry laws also reduce crime rates where the criminal and the victim come into direct contact with each other relative to those crimes where there is no such contact. To alter the results, these left-out factors would have to vary systematically to coincide with all these different results.
One of the reasons I graphed the before-and-after trends as well as the year-to-year variations in crime rates was to allow readers to judge for themselves whether the adoption of right-to-carry laws coincided with changes in crime rates. For a general audience, I thought that this graphical approach was the most straightforward.
As to the appropriateness of a particular statistical test, the answer depends upon what question one is asking. The one test that Ehrlich questions asked whether there was a statistically significant change in the slopes in crime rates before and after the laws are adopted. For that question, the F-test that I used is the appropriate test.
Research by Florenz Plassman and Nicolaus Tideman that is forthcoming in the October 2001 issue of the Journal of Law and Economics breaks down crime data by each state and by individual years before and after the adoption of the right-to-carry law. They find that for all 10 states that adopted such laws between 1977 and 1992, murder, rape, and robbery rates fell after adoption. If Ehrlich were to identify the statistical test which he says shows a significant turning point for robbery before the adoption of right-to-carry laws, I would be happy to comment on it.
It is flattering that my research is the first topic that Ehrlich discusses in his book, Nine Crazy Ideas in Science. My research, however, is not alone in studying this issue. A large number of academics have examined the data. While a few academic articles have been critical of some of the methodology, not even these critics have found a bad effect from right-to-carry laws. In fact, the vast majority of academics have found benefits as large or larger than the ones I report.
What is also interesting is how little criticism there is of the other gun control topics that my book addressed. For example, no academics have found significant evidence that waiting periods or background checks reduce violent crime rates. Unfortunately, what I have found is that many of these gun control laws actually lead to more crime and more deaths.
In his book, Ehrlich awards "cuckoos" to the ideas he discusses, with one cuckoo meaning "Why not?" and four cuckoos meaning "certainly false." He gives my work three cuckoos, but there are a lot of academics who must then be in the same boat as I am. More important, his criticisms are based upon either an incomplete or inaccurate reading of my work.
Robert Ehrlich
Lott's numbers don't tell us anything
I reply below to the main criticisms of John Lott—at least those which I have understood.
Lott doesn't deny that he misleads the reader by neglecting to mention that his plots are fits to the data, because he can't. His graphs are in fact labeled "number of violent crimes" per 100,000 population, and I find no statement in his book that the graphs are fits rather than actual data. In his reply, Lott justifies displaying the fits by noting that it is important to show "adjusted" crime rates after other variables (aside from the laws) have been taken into account.
Lott is correct that I was using the first edition of his book when I made the comment about only 10 states changing their right-to-carry laws in the stipulated time period.
Lott claims that I "used data up until 1997, but that is not possible since he limited the sample to only four years after adoption [of the laws]…." Clearly, he is mistaken, since my plots show data extending 10 years before the law's adoption.
My statement about the changes in slope in the various states was based on simple linear fits to the data two years on either side of t=0, without weighting the states by population. However, without doing any statistical analysis whatsoever, a mere glance at the graphs for the 10 states should allow readers to decide for themselves whether the data for the 10 states actually show anything particular happening at time t=0. (The data for robbery can be found plotted in my book or downloaded from the FBI Web site.)
Lott claims that his fitting procedure is not biased, because by using random data one is not virtually guaranteed to find a drop or a rise at t=0, as I claimed. Instead, he points out that the random data might show an abrupt change in the slope, not the actual level, at t=0 (e.g., first rising then falling, or first falling then rising). But Lott's correction to my statement actually makes my basic point even stronger, since a decrease in slope is exactly what might be expected if Lott were right. Thus, if his fitting procedure would force random data to show a change in slope at t=0 (equally often an increase or a decrease), we can't have too much confidence that any observed decrease in slope validates his theory.
It's difficult to find anything about mass murder amusing, but I find Lott's calculation for the greater deterrent effect of easing concealed-carry laws on multiple shootings very humorous. Essentially, he is saying that after concealed-carry laws are eased, mass murderers really are more deterred than ordinary murderers, because the chances are much greater that someone in a large group is actually armed. Now, I don't think mass murderers are totally irrational. But I find this type of probability calculation more revealing of Lott's thinking than that of mass murderers, some of whom I imagine would relish the idea of going out in a blaze of glory, in case someone in the group were armed. ("Suicide by police" seems to be a fairly common act by some psychos.)
In Lott's rebuttal on this same issue he fails to address the other inconsistency in his results: How could the laws act in reverse time, causing a big spurt of mass shootings the year before the laws were enacted? He also neglects to answer my question on how his analysis can show the murder rate dropping immediately after the laws are passed, but the aggravated assault rate not starting its drop until three years later.
Lott is right in pointing out that the omitted variables would need to change systematically in a way correlated with the dates of passing the laws. But given that the laws (according to him) account for such a tiny fraction of the change in crime rates, and given an extremely long list of possible variables, it seems likely that some of them could fit the bill. If Lott's claim that he really has accounted for all the key variables that affect violent crime rates were correct, then he really should be able to predict how the crime rates will change in the future in each state, based on all these variables. Moreover, if his predictions fail to be borne out in any state it would show that he has left out some factor. (We are all used to hearing about why the stock market did what it did on any given day, after the fact. But the failure to make such accurate predictions ahead of time tells us that maybe we really don't fully understand all the variables that make the market do what it does, any more than we understand the variation in crime rates.)
I am not alone in questioning Lott's statistical analysis—see, for example, work by Daniel Webster, Jens Ludwig, Daniel Black, and Daniel Nagin. Lott notes that his F-test is the appropriate one to answer the question of whether there was a statistically significant change in the slope in crime rates at t=0. I don't dispute that the change in the slope of crime rates may be statistically significant at t=0. After all, there might have been a real change at that point in time for reasons unrelated to the laws.
However, I claim that the slope will probably also be found to change by statistically significant amounts at most other years as well, and that would show that there's nothing special happening at t=0, the year the laws were passed. The real test of whether it was the liberalized gun laws that made the difference is that a statistically significant change in slope was found at t=0 and only at t=0.
To see this basic flaw in Lott's statistical analysis, let's imagine that some lunatic has a theory that the Nasdaq drops every full moon. Presumably, according to Lott, the way to test this theory would be to do a linear regression involving as many extraneous variables as we can think of that might affect the Nasdaq—and not to worry too much that we may not have gotten them all. Then using the regression, we need to see if the Nasdaq had a statistically significant drop on days when the moon was full. It very well might show a statistically significant drop on those days. Why not? However, I expect that the Nasdaq would also show drops (and rises) having comparable statistical significance for other lunar phases as well—thereby proving exactly nothing.
Prof. Lott, wouldn't you agree that a finding of a statistically significant change in the crime rates at years before t=0 would invalidate your results? Will you tell us what your analysis shows for the statistical significance of changes in slope at years other than t=0?
John R. Lott Jr.
The Effect Is Clear
Disarming law-abiding citizens leads to more crime
To Prof. Ehrlich, the "basic flaw" in my statistical analysis is that concealed handgun laws are likely to be just accidentally related to changes in the crime rate. He offers a simple example: explaining how the stock market changes over time. Obvious variables to include would be the interest rate and the expected growth in the economy, but many other variables, many of dubious importance, could possibly also be included. The problem arises when such variables are correlated with changes in stock prices merely by chance.
An extreme case would be including the prices of various grocery store products. A store might sell thousands of items, and the price of one—say, peanut butter—might happen to be highly correlated with the stock prices over the particular period examined. We know that there is little theoretical reason for peanut butter to explain overall stock prices, but if you go through enough grocery store prices, it just might happen that one of them accidentally moves up and down with the movements in the stock market over a particular period of time. Similar problems can occur with other obviously unrelated variables, such as the incidence of full moons or sunspots.
There are ways to protect against this "dubious variable" problem. One is to expand the original sample period. If no true causal relationship exists between the two variables, this coincidence is unlikely to keep occurring in future years. And this is precisely what I did as more data became available: Originally, I looked at data through 1992, then extended it to 1994, then up until 1996, and then, in recent working papers, up through 1998. If Ehrlich understood this, he would realize that this is equivalent to his request that I should try to "predict how the crime rates will change in the future."
Another approach guarding against the "dubious variable" problem is to replicate the same test in many different places. Again, this is exactly what I have done here: I have studied the impact of right-to-carry laws in different states at different times, and I have included new states as more and more states have adopted these laws as the time period has been extended.
As I discussed previously, I have also provided many qualitatively different tests, linking not only the changes in gun laws to changes in crime rates but also the actual issuance of permits; the changes in different types of crimes; rates of murders in public and private places; and comparisons of border counties in states with and without right-to-carry laws. Even if I accidentally found a variable that just happened to be related to crime in one of these dimensions, it seems unlikely that you would get consistent results across all these different tests.
In any case, as far as I know, no one except Ehrlich is arguing that testing whether right-to-carry laws affect crime is the theoretical equivalent of including as variables such things as full moons. Whatever one's views on the topic, there are legitimate questions over whether these laws increase or decrease crime—and the only way that we can test that is to include them as a variable in the regressions.
However, the bottom line is clear: If Ehrlich believes that there is a particular variable that has been left out and that corresponds with all these changes, I have given him the data set; instead of speculating about what might be, he should actually do the work to see if his concerns are valid. No previous study has accounted for even a fraction of the alternative explanations for changing crime rates as I have and, more important, my regressions explain over 95 percent of the variation in crime rates over time.
His concerns about using before-and-after trends make little sense to me because I report the results in many different ways: linear and nonlinear trends before and after, year-to-year changes, and before-and-after averages. Readers of my book can view the graphs with the year-to-year changes and judge for themselves when the change in trends occur.
As I explicitly note in my book (pages 146-7 in the first edition), my graphs showing the nonlinear trends before and after the change in laws are constructed similarly to how other economists have analyzed crime data. No explanation is offered for why I shouldn't have focused on whether there was a decline in crime relative to other states that did not adopt the right-to-carry laws.
Ehrlich might find it amusing, but deterrence does work: the data on guns and crime consistently show that the greater the likelihood that a person can defend himself, the greater the deterrence. William M. Landes and I point to evidence that perpetrators of multiple victim shootings are disproportionately psychotic, deranged, or irrational. Ehrlich and others claim that a law permitting individuals to carry concealed weapons would therefore not deter shooting sprees in public places (though it might reduce the number of people killed or wounded). Yet a right-to-carry law will both raise the potential perpetrator's cost (he is more likely to be wounded or killed or apprehended if he acts) and lower his expected benefit (he will do less damage if he encounters armed resistance). Even those bent on suicide may refrain from attacking if the harm that they can do is sufficiently limited. Although not all offenders will alter their behavior in response to the law, some individuals might refrain from a shooting spree.
Instead of so casually dismissing our result as "very humorous," Ehrlich and others should rise to the challenge to examine the data and see if they can offer a better explanation for the large drops in multiple-victim public shootings when states adopt right-to-carry laws. These crimes have seriously shocked the nation, and finding ways to reduce such incidents is very important.
Finally, in both editions of my book, I respond to the critics of my work that Ehrlich mentions in his last dispatch. (I direct interested readers to chapters 7 and 9 of More Guns, Less Crime.)
This debate has focused on just my findings dealing with right-to-carry laws, but just as important are the overall effects of gun control laws. Despite the best of intentions, law-abiding citizens, not criminals, are most likely to obey the different restrictions that are imposed. Disarming the law-abiding relative to criminals has one consequence: more crime.