Don't end federal private prisons

Yesterday, the DOJ announced that it would gradually end its use of private prisons. You can read the memo by Deputy AG Sally Yates here. She writes: "I am directing that, as each contract [with a private prison corporation] reaches the end of its term, the Bureau [of Prisons] should either decline to renew that contract or substantially reduce its scope in a manner consistent with the law and the overall decline of the Bureau's inmate population."

Why? The Yates memo says: "Private prisons . . . compare poorly to our own Bureau facilities. They simply do not provide the same level of correctional services, programs, and resources; they do not save substantially on costs; and as noted in a recent report by the Department's Office of Inspector General, they do not maintain the same level of safety and security. The rehabilitative services that the Bureau provides, such as educational programs and job training, have proved difficult to replicate and outsource—and these services are essential to reducing recidivism and improving public safety."

This is unfortunate, for two reasons.

First, Yates seems to be exaggerating what empirical studies tell us about private vs. public prison comparisons. Private prisons do save money (though how much is a matter of dispute), and they don't clearly provide worse quality; in fact, the best empirical studies don't give a strong edge to either sector. The best we can say about public vs. private prison comparisons is a cautious "We don't really know, but the quality differences are probably pretty minor and don't strongly cut in either direction." The Inspector General's report doesn't give us strong reason to question that result.

Second, even if all the bad things people say about private prisons were true, why not pursue a "Mend it, don't end it" strategy? There's a new trend in corrections to develop good performance measures and make payments contingent on those measures. If the private sector hasn't performed spectacularly on quality dimensions to date, it's because good correctional quality hasn't been strongly incentivized so far. But the advent of performance-based contracting has the potential to open up new vistas of quality improvements—and the federal system, if it abandons contracting, may miss out on these quality improvements.

* * *

First, how many people does this affect? The Bureau of Justice Statistics report on Prisoners in 2014 reports (Table 9) that 19% of federal prisoners were held in private facilities—about 40,000 in both 2013 and 2014. But that includes about 14,000 in nonsecure facilities and home confinement. The Yates memo says that in 2013 the number of private prison inmates was 30,000, or 15% of federal prisoners, and notes in a footnote that this doesn't include federal halfway houses (and that halfway houses aren't the focus of the memo). The IG report puts the number of federal private prisoners at 22,660 in 2015, or 12% of the total. Probably these numbers are all consistent, and the difference is due to different years and slightly different definitions of who's covered. So this memo probably affects roughly 25,000 people today.
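For what it's worth, here's a rough back-of-the-envelope reconciliation of those figures. The numbers are the ones quoted above from the BJS report, the Yates memo, and the IG report; the subtraction of nonsecure placements from the BJS count is my own illustrative adjustment, not a calculation any of the reports performs.

```python
# Rough reconciliation of the federal private-prison counts quoted above.
# Figures come from the cited BJS report, Yates memo, and IG report; the
# "net of nonsecure placements" adjustment is my own illustrative assumption.

bjs_2014_private_total = 40_000   # BJS "Prisoners in 2014," Table 9 (~19% of federal prisoners)
bjs_2014_nonsecure     = 14_000   # roughly this many in nonsecure facilities / home confinement

bjs_secure_estimate = bjs_2014_private_total - bjs_2014_nonsecure
print(f"BJS figure net of nonsecure placements: ~{bjs_secure_estimate:,}")   # ~26,000

yates_2013_count = 30_000   # Yates memo figure for 2013 (excludes halfway houses)
ig_2015_count    = 22_660   # IG report figure for 2015

# Once nonsecure placements are stripped out, the three sources land in the
# same ballpark, which is why "roughly 25,000 people today" seems a fair summary.
print(f"Range across sources: {ig_2015_count:,} to {yates_2013_count:,}")
```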

Compare that to the state prison system. Private prisons represent a much smaller share of the state systems—about 7%—but the state systems are much bigger. Overall, there were about 90,000 private state prisoners in 2014. (The 7% is of course an aggregate: it's 0% in the almost 20 states that don't use privatization at all, over 25% in just three states (MT, NM, OK), and something in between (a median of about 10%) in the rest.) These people are unaffected by the Yates memo, though some states might be moved to follow the federal government's lead.

Also, this doesn't include immigrant detainees in ICE facilities—I haven't looked into those numbers closely, but the totals appear to be comparable to the federal Bureau of Prisons numbers.

* * *

Now, let's look at what empirical studies can tell us about cost and quality. For a discussion at greater length, please see my Emory Law Journal article on Prison Accountability and Performance Measures, or this earlier post of mine (which this post quotes from liberally).

Costs are hard to compare between the public and private sectors, because the two sectors do their accounting differently. One obvious difference is that private prison firms have to pay all their own payroll, benefits, legal expenses, etc., while a lot of those costs in the public sector are borne by different agencies, not the Department of Corrections or Bureau of Prisons. Naive cost comparisons will typically be worthless, so one will need to do a sophisticated study that spells out all its accounting assumptions.

Perhaps the best example of competing, side-by-side cost studies comes from the evaluation of the federal facility in Taft, California, operated by The GEO Group.

A Bureau of Prisons cost study by Julianne Nelson compared the costs of Taft in fiscal years 1999 through 2002 to those of three federal public facilities: Elkton, Forrest City, and Yazoo City. The Taft costs ranged from $33.21 to $38.62; the costs of the three public facilities ranged from $34.84 to $40.71. Taft was cheaper than every comparison facility in every year, by up to $2.42 (about 6.6%), except in fiscal year 2001, when the Taft facility was more expensive than the public Elkton facility by $0.25 (about 0.7%). Sloppily averaging over all years and all comparison institutions, the savings was about 2.8%.

A National Institute of Justice study by Douglas McDonald and Kenneth Carlson found much higher cost savings. They calculated Taft costs ranging from $33.25 to $38.37, and public facility costs ranging from $39.46 to $46.38. Private-sector savings ranged from 9.0% to 18.4%. Again averaging over all years and all comparison institutions, the savings was about 15.0%: the two cost studies differ in their estimates of private-sector savings by a factor of about five.
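Since the headline numbers are easy to misread, here's a minimal sketch of the arithmetic. The endpoint pairings below are mine and purely illustrative (the studies matched specific facilities and years, which I haven't reproduced), and the "factor of about five" just compares the two averaged savings figures quoted above.

```python
# A minimal sketch of the savings arithmetic behind the two cost studies.
# The dollar figures are the ones quoted in the text; the low-end/high-end
# pairings are illustrative only, not the studies' own year-by-year matchups.

def percent_savings(public_cost: float, private_cost: float) -> float:
    """Private-sector savings as a share of the public-sector cost."""
    return 100 * (public_cost - private_cost) / public_cost

# Illustrative endpoint pairings (not the studies' actual comparisons):
print(round(percent_savings(34.84, 33.21), 1))  # Nelson-style low end      -> ~4.7%
print(round(percent_savings(46.38, 38.37), 1))  # McDonald/Carlson high end -> ~17.3%

# The "factor of about five" claim compares the two studies' averaged savings:
print(round(15.0 / 2.8, 1))  # ~5.4
```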

Why such a difference? First, the Nelson study (but not the McDonald and Carlson study) adjusted expenditures to iron out Taft's economies of scale from handling about 300 more inmates each year than the public facilities. Second, the studies differed in what they included in overhead costs, with the Nelson study allocating a far higher overhead rate.

* * *

Now, on to quality. Here, too, naive comparisons aren't much good, because how much quality one should expect depends on many factors like the demographic composition of a particular prison—which is something that the IG report didn't control for.

Most damningly, many studies don't rely on actual performance measures, relying instead on facility audits that are largely process-based. Some supposed performance measures don't necessarily indicate good performance, especially when the prisons are compared based on a "laundry list" of available data items (for instance, staff satisfaction) whose relevance to good performance hasn't been theoretically established.

Let's consider the IG report itself. One of its evaluation categories is rates of assaults, both inmate-on-inmate and inmate-on-staff. That seems fine—I think we can all agree that assaults are bad—provided the measurement methods are comparable. But the report also says that "the contract prisons confiscated eight times as many contraband cell phones annually on average as the BOP institutions." That's not the actual number of contraband cell phones—it's the number confiscated, because of course we don't know the actual number. Well, I know a great way to get that number down: just stop looking hard for contraband phones. This is an inappropriate measure because it could indicate that there are a lot of phones or that enforcement is very vigorous; you can't use it as a basis for comparison between prisons unless you know, for instance, that the level of enforcement is similar. And yet, the IG report uses that as a basis to criticize private prisons. Similarly, the IG report found that the private prisons "fail[ed] to initiate discipline in over 50 percent of incidents". But whether you should initiate discipline in any given case is a matter of judgment, and I'm sure that, in another context, people would think that a bright-line insistence on initiating discipline 100% of the time is inflexible and overly punitive.
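To see why confiscation counts are a poor yardstick, here's a toy simulation with entirely made-up numbers (nothing here comes from the IG report): two prisons with identical amounts of contraband, differing only in how hard staff search for it.

```python
import random

# Toy illustration (hypothetical numbers): confiscation counts reflect both
# how much contraband exists and how hard staff look for it, so raw counts
# can't be compared across prisons without knowing the enforcement level.

random.seed(0)

def annual_confiscations(true_phones: int, detection_rate: float) -> int:
    """Each phone is independently found with probability `detection_rate`."""
    return sum(random.random() < detection_rate for _ in range(true_phones))

# Two hypothetical prisons with the SAME amount of contraband...
phones_in_prison = 200

lax_searches      = annual_confiscations(phones_in_prison, detection_rate=0.05)
vigorous_searches = annual_confiscations(phones_in_prison, detection_rate=0.40)

print(lax_searches, vigorous_searches)
# The vigorous searcher "confiscates" several times as many phones despite
# identical underlying contraband levels, so the raw count alone can't tell
# us which prison actually has the worse phone problem.
```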

As an example of the problems with current quality metrics, consider the performance evaluations of the private federal Taft facility. As with the cost studies discussed above, we have two competing studies, the National Institute of Justice one by McDonald and Carlson and a Bureau of Prisons study by Scott Camp and Dawn Daggett—the companion paper to Julianne Nelson's cost paper.

The Bureau of Prisons has evaluated public prisons by the Key Indicators/Strategic Support System since 1989. Taft, alas, didn't use that system, but instead used the system designed in the contract for awarding performance-related bonuses. Therefore, McDonald and Carlson could only compare Taft's performance with that of the public comparison prisons on a limited number of dimensions, and many of these dimensions (like accreditation of the facility, staffing levels, or frequency of seeing a doctor) aren't even outcomes. Taft had lower assault rates than the average of its comparison institutions, though they were within the range of observed assault rates. No inmates or staff were killed. There were two escapes, which was higher than at public prisons. Drug use was also higher at Taft, as was the frequency of submitting grievances. On this very limited analysis, Taft seems neither clearly better nor clearly worse than its public counterparts.

The Camp and Daggett study, on the other hand, created performance measures from inmate misconduct data, and concluded not only that Taft "had higher counts than expected for most forms of misconduct, including all types of misconduct considered together," but also that Taft "had the largest deviation of observed from expected values for most of the time period examined." Camp and Daggett's performance assessment was thus more pessimistic than McDonald and Carlson's.
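For readers unfamiliar with this kind of analysis, here's a generic, purely illustrative sketch of an "observed vs. expected" comparison. It is not Camp and Daggett's actual model (theirs was built from inmate-level misconduct data and adjusts for far more than a simple rate scaling); it just illustrates the idea of judging a facility by its deviation from an expected count.

```python
# Generic sketch of an "observed vs. expected" misconduct comparison.
# NOT Camp and Daggett's model; all numbers below are hypothetical.

def expected_misconduct(comparison_incidents: int,
                        comparison_population: int,
                        facility_population: int) -> float:
    """Scale the comparison prisons' misconduct rate to the facility's population."""
    rate = comparison_incidents / comparison_population
    return rate * facility_population

# Hypothetical numbers for illustration only:
expected = expected_misconduct(comparison_incidents=900,
                               comparison_population=6_000,
                               facility_population=2_000)   # -> 300 expected incidents
observed = 360

deviation = observed - expected
print(f"expected ~{expected:.0f}, observed {observed}, deviation {deviation:+.0f}")
# A persistently positive deviation is the kind of finding the study reports for Taft.
```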

According to Gerald Gaes, the strongest studies include one from Tennessee, which shows essentially no difference; one from Washington, which shows somewhat positive results; and three more recent studies of federal prisons by Gaes himself and coauthors, which found public prisons equivalent to private prisons on some measures, better on others, and worse on yet others.

(I'll add that we similarly don't know well which sector does better on recidivism: for details, see my Emory Law Journal article or my earlier blog post.)

* * *

Bottom line: I think Yates is exaggerating what we know about the cost and quality of public vs. private prisons, and perhaps giving undue weight to the negative findings of the IG report, even though that report didn't control for inmate demographics, didn't fully use valid performance measures, and so on.

But here's what's possibly even more important: prisons have begun experimenting with performance measures and performance-based contracting. The UK has been a pioneer in this movement, but the approach has been used to a limited extent in the U.S. too. A small amount of the contract payment in the Taft case was performance-based. Also, a prison-privatization bill in Florida never became law (it was defeated for general anti-privatization reasons), but if it had passed, it would have implemented some performance-based contracting.
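To make the idea concrete, here's a purely hypothetical sketch of what a performance-linked payment term might look like. The base per diem, the measures, the targets, and the weights are all invented for illustration and don't come from any actual contract (Taft's or anyone else's); the point is only that part of the payment can turn on measured outcomes rather than on compliance with a process rulebook.

```python
# Hypothetical sketch of a performance-linked contract payment.
# None of the measure names, targets, or weights come from any real contract.

BASE_PER_DIEM = 60.00     # hypothetical base payment per inmate-day
MAX_BONUS_SHARE = 0.05    # up to 5% of the base at stake on performance

# Hypothetical outcome measures: (weight, target, actual). Lower is better for all three.
measures = {
    "assault rate per 1,000 inmates": (0.5, 20.0, 17.0),
    "positive drug tests (%)":        (0.3,  3.0,  4.5),
    "one-year reincarceration (%)":   (0.2, 30.0, 28.0),
}

def performance_adjustment(measures) -> float:
    """Weighted score in [-1, 1]: positive if targets are beaten, negative if missed."""
    score = 0.0
    for weight, target, actual in measures.values():
        shortfall = (target - actual) / target        # positive = better than target
        score += weight * max(-1.0, min(1.0, shortfall / 0.5))
    return score

per_diem = BASE_PER_DIEM * (1 + MAX_BONUS_SHARE * performance_adjustment(measures))
print(f"Adjusted per diem: ${per_diem:.2f}")
```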

It might seem surprising, but private prisons have almost never been evaluated on their performance and compensated on that basis. Privatization proponents have often favored prison contracting on the grounds that the market (unlike, usually, government) has greater flexibility to experiment and greater scope for incentives to work. Maybe that's usually true, but in the case of prisons, both of these advantages have usually been absent: private prisons have had limited scope for experimentation (the contracts have often reproduced the entire public-sector rulebook in excruciating detail) and limited incentives (contract payments have rarely incorporated performance-based elements).

In light of that, maybe it's even surprising that private prisons have done as well as they have in the comparative studies. Be that as it may, the advent of performance-based contracting could open up possibilities for substantial quality improvements. This could work in the public sector too (bonus payments for public prison wardens?), but the private sector is probably better situated to take advantage of monetary incentives. So if the federal government stops contracting with private prison firms now, it may miss out on these potential quality improvements. Not only that: if the federal government continued contracting with private prison firms, it could itself take the lead in implementing performance-based contracting and thus be a driver in these quality gains.

This means that even if all the bad things people say about private prisons are true, the best strategy may instead be "Mend it, don't end it."