Guest blog post: A reflection on the Multisite Adult Drug Court Evaluation
John Roman, PhD, is a senior fellow in the Justice Policy Center at the Urban Institute, where his research focuses on evaluations of innovative crime control policies and justice programs. Roman is a co-author of the Multisite Adult Drug Court Evaluation (MADCE), a study funded by the National Institute of Justice (NIJ) to examine the impact of adult drug courts and whether there are cost savings attributable to drug court programs. The evaluation, one of the largest studies of drug courts to date, took eight years to complete and examines a sample of nearly 1,800 drug court and non-drug court probationers. I spoke with Roman about the results of the evaluation and the methodology he used to measure costs and benefits.
Q. What is the MADCE?
A. The MADCE is a study of drug courts in 23 counties in six states and six comparison sites. It studies a broad range of drug courts to understand their effectiveness in different places and among different populations. We studied drug courts that serve people with serious cocaine, heroin, crack, and methamphetamine problems, as well as drug courts that serve people charged with alcohol and marijuana crimes. We looked in rural, suburban, and urban places, all so we could say as much as possible about how drug courts work.
Q. Why is the evaluation important?
A. When NIJ released the solicitation for the evaluation in 2003, drug courts were relatively new. The first evaluation was only a decade old, and while there had been a few randomized experiments, particularly in Baltimore and Maricopa County, Arizona, the total body of research wasn’t convincing. Most studies lacked rigor and used comparison groups that you would expect to do more poorly with or without drug court. So NIJ funded this study to definitively answer the question: do drug courts work?
Q. Why was the cost-benefit analysis (CBA) an important part of the evaluation?
A. When you’re implementing a program that, as far as criminal justice programs go, is quite intensive, you need to know whether it’s a valuable use of your time and resources. To pass the bar of effective practice, a program must not only produce better outcomes for participants; those outcomes must yield benefits that offset its costs. The drug court literature shows that drug courts are more costly than business as usual, so you need to know whether the investment is worth making.
Q. What did the CBA show?
A. The results were really interesting. We find that drug courts are quite costly, even a bit more costly than prior literature had found, and they return benefits that, on average, yield about $2 for every dollar invested. Though that’s the headline, what underlies the benefit-cost ratio is critical too.
The CBA found tremendous variation across individuals in terms of benefits, which mostly result from drug court participants committing fewer crimes. For the typical participant, the crimes avoided are petty crimes—those that don’t cost society very much—and thus, the benefits for a typical participant actually don’t outweigh the costs. But in a rather small number of people, drug court prevents very serious crimes with enormous social costs.
So though the headline indicates a 2:1 benefit-cost ratio, it is important to add that the result is not statistically significant. That is, for the typical participant, the benefits are slightly less than the costs; the average is pulled up by a small number of people with very large benefits.
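To make that skew concrete, here is a minimal sketch with invented numbers (not MADCE data) showing how a few very large avoided-crime benefits can produce a 2:1 average ratio even though the typical participant’s benefits fall short of costs:

```python
# Hypothetical figures, for illustration only -- not MADCE data.
# Ten participants, each costing $5,000 in drug court services.
cost = 5_000

# Most participants avoid only petty crimes (small benefits), but one
# avoided serious crime carries an enormous social cost.
benefits = [3_000, 3_500, 4_000, 4_000, 4_500,
            4_500, 4_500, 5_000, 5_000, 62_000]

mean_benefit = sum(benefits) / len(benefits)            # 10,000
median_benefit = sorted(benefits)[len(benefits) // 2]   # 4,500 (typical participant)

print(mean_benefit / cost)    # 2.0 -> the 2:1 "headline" ratio
print(median_benefit < cost)  # True -> typical benefits fall short of costs
```

The one outlier drives the average above break-even while most individual participants sit below it, which is why the headline ratio and the typical participant’s experience diverge.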
Q. How do these results comport with the results of your earlier study of the Superior Court Drug Intervention Program in Washington, DC? In that study, you used Bayesian analysis to explain the variation in your sample. Why not use the same technique in the MADCE?
A. In the earlier study, we compiled data from multiple evaluations that studied different places and populations, conducted by different evaluators using different methods. That introduced all kinds of uncertainty into the analysis. With the MADCE, we were the single evaluator, so we could rely on a more traditional approach. Despite the difference in technique, both studies showed similar patterns: drug courts yield enormous benefits for relatively few people, and for the typical participant, the benefits are a little less than the costs.
Q. Rather than distinguishing between costs and benefits, you use a single metric—an individual’s net benefit—to measure economic effects. Why use this approach?
A. I’ll give you a classic example that often confounds people in the drug court field. You have a treatment group that goes through drug court. Some of them fail and go to prison. At the same time, you have a comparison group that doesn’t have anything analogous to a drug court intervention. They go to court, get sentenced, and go to prison. The question then is: do you count prison for the comparison group as a cost of business as usual, or as an avoided cost or benefit of the drug court program? There’s no right answer to that question, which makes the distinction between costs and benefits arbitrary.
The lack of a clear distinction is particularly problematic when you have researchers drawing the line at different points. It’s a subtle difference with enormous implications that leaves practitioners in a bind. Who are they supposed to believe?
If you use a single metric, the analysis is pretty straightforward. You’re simply comparing the costs and benefits of drug court participants to the costs and benefits of a comparison group that did not receive drug court. If the result is positive and significant, that means drug court produced a net benefit. If it’s negative, that means drug court was more costly.
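As a sketch with hypothetical per-person figures, the single-metric approach reduces the whole cost-versus-benefit question to one subtraction, with no need to decide which side of the ledger the comparison group’s prison time belongs on:

```python
# Illustrative sketch of the single-metric approach; all numbers hypothetical.
# Each person's "economic footprint" is one net figure: justice-system
# resource use plus victimization costs, net of wage and health effects.
treatment = [12_000, 8_000, 30_000, 5_000]     # drug court participants
comparison = [20_000, 15_000, 28_000, 18_000]  # business-as-usual group

def mean(xs):
    return sum(xs) / len(xs)

# A positive difference means drug court produced a net benefit; no
# arbitrary call about "cost" vs. "avoided cost" is ever needed.
net_benefit = mean(comparison) - mean(treatment)
print(net_benefit)  # 6500.0
```

The sign of the difference, and whether it is statistically distinguishable from zero, is the entire result.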
Q. How did you estimate costs?
A. We know that the criminal justice population is costly to society. They use police, prison, and court resources, and they victimize people. The goal of most programs is to reduce these harms. In this study, we estimated the price of the harm. We measured how often the participants interacted with the criminal justice system, how many victims they had, and what the harm was to those victims. Then we estimated how the participants’ wages and health care costs were affected by their participation in drug court.
Q. You used a bottom-up approach to calculating costs. Can you explain the technique and tell us why it’s appropriate for this study?
A. There are two approaches to thinking about costs—top-down or bottom-up. The top-down approach is simple division. If the budget is $1 million and there are 200 people in the program, the cost is $5,000 per person. The approach can be misleading. Not every person used $5,000 in services, and maybe no one did, but the approach leaves no way of knowing. Drug court administrators want information about the distribution of high- and low-end users because it informs how they deploy resources.
A bottom-up approach counts everything that anyone uses and attaches a price to it. We talk about bottom-up approaches in terms of price and quantities. To calculate prison usage, for example, we take the price of a day in prison and multiply by the quantity, or the number of days the person was incarcerated. Bottom-up approaches let us say a lot about variation. They tell us who uses more of what resources and how the program needs to be adjusted to account for different levels of usage.
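A minimal sketch of the two approaches, using invented prices and quantities, shows what the bottom-up method preserves that simple division throws away:

```python
# Invented prices and quantities, for illustration only.

# Top-down: simple division over the whole budget.
budget, participants = 1_000_000, 200
top_down = budget / participants  # $5,000 per person; no distribution info

# Bottom-up: price x quantity, computed per person.
price_per_prison_day = 85.0
prison_days = [0, 0, 12, 30, 365]  # days incarcerated, one entry per person

per_person = [price_per_prison_day * d for d in prison_days]
average = sum(per_person) / len(per_person)

# Bottom-up preserves the distribution: most people used little or
# nothing, and one person accounted for nearly all of the cost.
print(per_person)  # [0.0, 0.0, 1020.0, 2550.0, 31025.0]
print(average)     # 6919.0
```

Both approaches can report an average, but only the bottom-up version can tell an administrator who the high-end users are.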
Q. What did the cost data tell you about variation in drug court usage?
A. We found, with few exceptions, that there isn’t a lot of variation in drug court effectiveness. All types of people do really well. This finding will be surprising to people who think that drug courts need to select only those people they perceive to be most likely to succeed or to be the least risky to society. Before the MADCE, research had already challenged those who cherry-pick, suggesting that more serious offenders actually do better and that cherry pickers select exactly the wrong people. The MADCE tells us that there’s no variation across groups, and that drug court administrators would be better off just taking those who meet basic criteria.
Q. In your cost analysis, did you find any evidence that drug courts widen the net of people involved in the criminal justice system?
A. All the research in this study and in other studies that I’ve done suggests that far too few people get into drug courts. My 2008 paper estimates that less than 3 percent of all drug-involved offenders who could benefit from drug treatment get into drug court, and I’ve seen another paper circulating (not yet published) that suggests the number is actually less than 1 percent.
I do, however, worry about net widening with juveniles. The number of juveniles who penetrate most juvenile justice systems all the way to the point of incarceration is extremely small. Juveniles tend to have briefer contact with the justice system. So when a drug court is introduced, mandating longer periods of commitment, that is a real concern.
Q. What sets this CBA apart from others?
A. The MADCE has tremendous advantages over the typical study. Most studies do not have the resources to collect the amount of data we collected. I also think the single-metric approach is a hallmark of this study, and a method that should be replicated. It matters because if you take our data and separate it into costs and benefits, you come up with the same average benefit (2:1) as most prior studies, but our result is not statistically significant. Separating costs from benefits obscures differences across participants.
Q. What other aspects of the MADCE can cost-benefit practitioners learn from?
A. To me, there are two areas where there’s room for improvement in the cost-benefit field.
First, we need to care more about causality. Finding a good comparison group, one that credibly shows what outcomes would have been in the absence of the intervention, is a critical step to enhancing our credibility. We took a very rigorous approach to causality in the MADCE.
Second, we should care more about uncertainty. Too often we obscure what happens on either side of the average effect—and that matters. If you see big reductions in a few people, that’s very different than seeing small reductions in lots of people. Communicating this uncertainty gives policymakers more information and puts them in a much better position. They can say, ‘There’s only a small chance the investment won’t pay off, but I’m just not willing to take the risk.’ Or they could say, ‘There’s a big chance the investment won’t pay off, but if it does, we can make dramatic improvements, and that’s good enough for me.’ That’s how policymakers make decisions, and just listing the average effect—or the cost-benefit ratio—doesn’t give them the information they need.
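The point about looking past the average can be illustrated with a toy example (all numbers invented): two programs with identical average effects but very different risk profiles for a policymaker.

```python
# Two hypothetical programs with the same average effect per participant,
# but very different risk profiles -- numbers invented for illustration.
program_a = [10, 10, 10, 10, 10]  # small reductions in lots of people
program_b = [0, 0, 0, 0, 50]      # a big reduction in a few people

def mean(xs):
    return sum(xs) / len(xs)

# Identical averages...
print(mean(program_a), mean(program_b))  # 10.0 10.0

# ...but reporting only the mean hides that program B produced no
# effect at all for 80 percent of its participants.
share_no_effect_b = sum(1 for x in program_b if x == 0) / len(program_b)
print(share_no_effect_b)  # 0.8
```

A report of “average effect: 10” describes both programs equally well, which is exactly why the average alone does not give policymakers the information they need.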
Q. Speaking of policymakers, how do you hope they will use the results from the MADCE?
A. I hope they will say that, in a world of uncertainty, here is something that really seems to work. It’s not a silver bullet. It doesn’t take people from prison to the corporate suite, but it makes people’s lives better, and it does it in a way that pays for itself, assuming you care about all of society and not just the government budget. There are very few interventions in criminal justice that you can say that about, and that makes it worth doing.