Four questions for Jeffrey A. Butts

March 1, 2012

This post is part of our “Four questions” guest blog series, which highlights people and organizations in the growing community of practice around cost-benefit analysis and justice policymaking. We’re grateful to Jeffrey Butts for contributing to this series. He is the director of the Research and Evaluation Center at John Jay College of Criminal Justice, City University of New York, and a consultant to the Vera Institute of Justice.

1. Would you tell us about your work and how it relates to cost-benefit analysis (CBA) and justice?

I have been evaluating juvenile justice policies since the late 1980s, but until about 10 or 15 years ago, costs were something people thought about only at the end of their research. At that time there was greater interest in the effect on recidivism, and costs weren’t driving the conversation. Starting in the mid-1990s, the issue of cost began to move to the forefront of the research. Since then, it has become far less acceptable not to consider both costs and benefits.

I first worked on a project with a cost-effectiveness dimension as a colleague of Bill Barton, when we were both at the University of Michigan’s Institute for Social Research. Bill designed a true experimental evaluation of intensive probation in Detroit, and the basic findings on program effects were robust. Still, translating the outcomes for practitioners and policymakers was challenging. They were more persuaded by our rather simple analysis of administrative costs, which we translated as: “For the same cost, you could serve three times as many offenders in the community with no added recidivism.”
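
As a rough sketch of the kind of translation Butts describes, the arithmetic looks like the snippet below. The dollar figures are hypothetical placeholders, not the actual costs from the Detroit evaluation:

```python
# Illustrative arithmetic behind a "serve three times as many youth for the
# same cost" translation. All dollar figures are hypothetical assumptions,
# not the real costs from the Detroit study.

budget = 900_000.0                    # fixed annual budget, in dollars
cost_institutional = 30_000.0         # assumed cost per youth in an institution
cost_intensive_probation = 10_000.0   # assumed cost per youth in the community

served_institutional = budget / cost_institutional
served_community = budget / cost_intensive_probation

print(f"Youth served in institutions: {served_institutional:.0f}")
print(f"Youth served in the community: {served_community:.0f}")
print(f"Same budget serves {served_community / served_institutional:.1f}x "
      f"as many youth in the community")
```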

2. How did you get involved in this work?

I started as a social worker in 1980, when I was responsible for drug and alcohol education for court-mandated youth at the juvenile court in Eugene, Oregon. I spent another two years in direct services in Oregon before I got involved in research. I finished my PhD in the early ’90s and since then I’ve worked mostly on policy studies and program evaluations. Most of my exposure to CBA is from working with John Roman at the Urban Institute.

For example, in our evaluation of the 10-site implementation of Reclaiming Futures, a community initiative that integrates the efforts of juvenile courts, probation, and adolescent substance abuse treatment, one of our reports determined that although Reclaiming Futures came with increased costs, the changes it induced in youth behavior probably generated enough benefit to justify those costs. John and his colleagues at the Urban Institute actually did the CBA for that report, and in helping to prepare the report, I learned a lot about how complex the assumptions and estimates that go into CBA can be. Especially in the case of Reclaiming Futures, we had to be very creative to monetize individual behavior change when the initiative itself is designed to reform systems and not to deliver treatment. I also learned that the credibility and diligence of the analyst is critical, because the average reader will never comprehend the effects of all the assumptions and estimates that can go into CBA. Fortunately, John Roman is very credible and quite often diligent as well.

3. What’s an interesting example of how CBA has been used in criminal justice policy and planning?

It is powerful to say that a program will reduce recidivism by 12 percent, but it is much more powerful to turn that into the number of crimes averted in a lifetime. That allows you to translate an outcome into something with a lot of policy salience: dollars. Overall, it is a very good development that we learned how to talk about policy in dollar figures. This is especially true since the economy tanked in 2008. Cost-benefit analysis has been helpful in keeping preventive programs from getting cut, because you can see the results the program is getting in dollar terms.
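
To make that translation concrete, here is a minimal back-of-the-envelope sketch of monetizing a recidivism reduction. Every figure in it (cohort size, baseline rate, offenses per reoffender, cost per offense) is an illustrative assumption, not a finding from any study mentioned in this interview:

```python
# A back-of-the-envelope monetization of a recidivism effect. Every figure
# below is an illustrative assumption, not a result from any cited study.

cohort_size = 1_000             # youth served by the program
baseline_recidivism = 0.50      # share expected to reoffend without the program
relative_reduction = 0.12       # program reduces recidivism by 12 percent
offenses_per_recidivist = 4.0   # assumed lifetime offenses per reoffending youth
cost_per_offense = 5_000.0      # assumed average societal cost per offense, in dollars

reoffenders_averted = cohort_size * baseline_recidivism * relative_reduction
crimes_averted = reoffenders_averted * offenses_per_recidivist
monetized_benefit = crimes_averted * cost_per_offense

print(f"Reoffenders averted: {reoffenders_averted:.0f}")        # 60
print(f"Crimes averted over a lifetime: {crimes_averted:.0f}")  # 240
print(f"Monetized benefit: ${monetized_benefit:,.0f}")          # $1,200,000
```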

However, the emergence of CBA presents challenges for researchers. My basic view of cost-benefit analysis is that we’re still stuck in an era of easy answers despite the complexities of CBA. Researchers have been encouraged to capture the full results of a study in a single point estimate, say, “for every dollar invested, you can expect three dollars in benefits.” Such an estimate is comfortable and easy to digest, but it may be quite misleading. That single point estimate may represent the most likely result of a policy or program, but it is still drawn from a distribution of other possible results. Practitioners need to understand that their return on investment, “three dollars for every dollar invested,” may happen, for example, only one time out of five.
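
As a minimal illustration of that point, the simulation below uses made-up distributions (not estimates from any real evaluation) to show how a headline ratio near three dollars per dollar can sit atop a distribution in which that return occurs in only a minority of scenarios:

```python
# A minimal simulation of why a single benefit-cost point estimate can
# mislead. The effect-size and crime-cost distributions are made-up
# assumptions chosen only to produce a right-skewed benefit-cost distribution.

import random

random.seed(1)

def simulated_bc_ratio() -> float:
    """Draw one plausible benefit-cost ratio under assumed uncertainty."""
    effect = max(random.gauss(0.12, 0.08), 0.0)       # uncertain program effect
    cost_per_crime = random.lognormvariate(8.5, 1.0)  # right-skewed crime costs
    crimes_averted_per_dollar = effect * 0.003        # assumed scaling factor
    return crimes_averted_per_dollar * cost_per_crime

ratios = [simulated_bc_ratio() for _ in range(100_000)]

mean_ratio = sum(ratios) / len(ratios)
share_at_least_3 = sum(r >= 3.0 for r in ratios) / len(ratios)

print(f"Mean (headline) benefit-cost ratio: {mean_ratio:.2f}")
print(f"Share of draws returning at least $3 per $1: {share_at_least_3:.0%}")
```

In a run like this, the mean ratio lands near three while a majority of draws fall short of it; that gap is exactly what disappears when only the point estimate is reported.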

As researchers, we know point estimates come with confidence intervals, but decision makers and the media rarely consider anything more than the single point estimate. Of course, we can’t always blame researchers for how their results are discussed in policy debates, but we do have a responsibility as researchers to reduce the chances that our work will be mishandled in practice. We have to make sure that consumers of research understand the possible effects of uncertainty.

4. What resources would you recommend to people who want to do more with CBA?

In addition to the Washington State Institute for Public Policy and the Cost-Benefit Knowledge Bank, I would refer you to the District of Columbia Crime Policy Institute, which is doing some interesting work to account for uncertainty in cost-benefit analysis.

Would you like to nominate a person or organization we should feature in our “Four questions” series? Send your suggestions to us at cbkb@cbkb.org.
