An interview with Professor David Weimer on cost-benefit analysis methods and justice policy
David Weimer is the Edwin E. Witte Professor of Political Economy at the University of Wisconsin–Madison’s La Follette School of Public Affairs. He has written, edited, and taught extensively about using cost-benefit analysis in social policy. He is vice president of the Society for Benefit-Cost Analysis and a member of CBKB’s working group on cost-benefit methods. As part of our focus this month on cost-benefit methods, we recently spoke with him about his work.
Your cost-benefit work has involved topics as varied as organ transplantation, property rights, energy, and education. How is it different to do this work on criminal justice or juvenile justice policy?
The interesting thing is that criminal justice impacts show up in all sorts of social policy areas, so if you’re dealing with a mental health issue, for example, it can have criminal justice implications. Criminal justice analysis is fundamental to almost all cost-benefit analysis (CBA) of social policies.
From your perspective, where does the justice field stand, relative to other fields, on incorporating CBA?
I think the justice field has made reasonable progress. We actually have a long history of trying to estimate costs, particularly the costs to victims. We’ve also had studies for many years that tried to take into account criminal justice system costs. We’re starting to do better on intangible costs like fear of crime. There’s been activity for many years, particularly the work of the Washington State Institute for Public Policy, and now I think we’re on the verge of seeing even more in the next 10 years.
What can the justice field learn from CBAs conducted in other disciplines and on other topics? Are there interesting takeaways or parallels for consumers of justice-related CBAs?
Criminal justice analysts may do cost-benefit analysis as well as anybody else in the social policy area. The one thing I haven’t seen in the criminal justice area yet is what you might think of as the analog of general equilibrium models. In cost-benefit analysis we often look at one “market” at a time, so we identify isolated effects. In some instances, the markets are interrelated: Crime may affect housing values; housing values may affect revenue for schools, which affects school quality, which in turn has an effect on crime. So what aspects of criminal justice impacts would be most valuable to trace through with those sorts of interrelated models?
In areas like energy, we have computable general equilibrium models of, say, the world oil market, where the model relates petroleum product markets to the crude markets and to the transportation markets—and ties it all together. Think of the relationship between crime and neighborhood quality and housing prices; you could think of tying those together in a way that would be valuable. In criminal justice, I don’t think we’ve gotten beyond assuming, for example, that if a central city reduces its crime, it doesn’t simply spread somewhere else. If there are reductions in crime, we may not be tracing through all their consequences as we could with more sophisticated models. Maybe down the line we’ll be able to do that.
Would you tell us briefly about the course(s) you teach on cost-benefit analysis?
Every fall I teach a graduate course on CBA, primarily for the students at the La Follette School, but also for students from the Nelson Institute for Environmental Studies, the Department of Agricultural and Applied Economics, and Population Health Sciences. An important component of the course is student projects done for actual clients. Over the summer I’m contacted by state and local officials and people who work at nonprofits who wish to have a CBA done. Sometimes I go out and seek additional projects in areas that are underrepresented.
In the fall, teams of four to six students work on these projects over the course of the semester and they are given the challenge of identifying all the major impacts and monetizing them. As this is always an uncertain activity, they also structure their analysis around Monte Carlo experiments/simulations to take account of that uncertainty.
How do you choose justice-related projects for student teams to work on?
The first criterion: Can I envision doing it myself? If I can’t envision how it can be done, I’m hesitant to take it on as a project. Second, I prefer projects for which the client already has some basis of evidence. That is not always the case.
I can imagine projects without available data that draw on evidence published in the literature, but it’s more interesting and a better learning experience for students if they actually have data to work with. The level of client interest is also a criterion, because engaged clients make the work much more interesting for the students. Client interest pushes students beyond simply doing the analysis to doing the analysis and learning from it. The idea that someone cares about the results is a strong motivation.
What are the most common types of justice-related projects your students work on?
Diversion projects of two sorts: We’ve done a number of analyses of diversion programs in Milwaukee, the largest city in the state. For example, the District Attorney’s Office was making deferred prosecution agreements with about a quarter of the candidates identified by Justice 2000, a nonprofit organization that screens potential participants. The project involved estimating the net benefit of expanding participation. We’ve also done a number of diversion programs initiated by prosecutors in small, usually rural, counties. Those are the most common sorts of studies. The former are usually well-organized diversion programs, typically with several hundred people going through per year and sometimes thousands. The projects for the rural diversions often involve dozens or at most hundreds of people going through per year, so obviously it’s more difficult to come up with statistical predictions for the smaller projects than it is for the larger ones.
Would you ever take on a cost-benefit analysis without a good sense of impact or outcomes?
Possibly. If we weren’t willing to take on those kinds of projects, we’d never have the opportunity to evaluate totally new or innovative policies, right? The first time we try something, we’re not going to have an evidentiary base. I can imagine taking on a project without an evidentiary base but with supporting literature; it’s a different sort of project, though, and might be driven primarily by theory.
We’re often doing new things that don’t have an evidentiary base.
Even in areas that have considerable evidence, there will always be gaps—and students have to be creative in filling them. For example, can they find a comparable study whose evidence can serve as the basis for a prediction?
What impact, if any, have your students’ cost-benefit analyses had on policy?
There have been a few cases where the analysis supported actual policy adoptions. For example, the state developed a registry for prescription drugs that was heavily influenced by student analysis. More often, I think, the analyses bolster the case for continuing or expanding programs. Occasionally they make the case for not continuing—and it becomes part of the decision-making process.
What conditions do you think are necessary for a cost-benefit analysis to have an impact on policy?
It has to be credible. It has to have an ear, somebody who can act upon it. That’s why it’s important to link these projects to actual clients—clients who will consider acting upon the analysis. Those are the two big things.
You talk about Monte Carlo simulations—a type of sensitivity analysis—as a way to address risk and uncertainty in CBA. Why is it important to have this kind of mechanism?
One of the big advantages of doing a benefit-cost analysis is that it encourages us to be comprehensive. But we usually have only imprecise predictions across the board. In some cases we may have a pretty good evaluation that gives us a good idea of the main effect, but we may not have a strong idea of the other effects produced by the policy. So we have a lot of uncertainty and we want to convey our uncertainty in the net-benefit estimates we produce. Monte Carlo simulations provide a very natural way of doing that.
When we identify the parameters, we also identify their ranges—or ideally, what we think is the underlying probability distribution of the parameters within their ranges. In some cases we can do that fairly directly by using a point estimate and standard error. Other times we can only bound our estimate and perhaps we’re very uncertain about where the true value lies within those bounds, so perhaps we’ll assume a distribution that’s uniform across that range. We try to come up with a distribution for each of the unknown parameters and then conduct the Monte Carlo simulation by drawing values for each of the unknown parameters from their respective distributions to produce an estimate of net benefits—and then repeating that process many, many times. We end up with a distribution of predicted net benefits that reflects the uncertainties and the parameters that went into the calculation of the net benefits.
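The procedure he describes—assign each uncertain parameter a distribution, draw a value for each, compute net benefits, and repeat many times—can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the program, the parameter names, and the distributions and dollar values are assumptions chosen for the example, not figures from any actual study.

```python
import random

random.seed(12345)  # fix the seed so the simulation is reproducible

def draw_parameters():
    """Draw one value for each uncertain parameter from its assumed distribution."""
    return {
        # Point estimate 0.10 with standard error 0.03, so draw from a normal
        # distribution (hypothetical effect of a diversion program on recidivism).
        "recidivism_reduction": random.gauss(0.10, 0.03),
        # Only bounds are known, so assume a uniform distribution across the
        # plausible range (hypothetical victim + system cost per averted offense).
        "benefit_per_averted_offense": random.uniform(40_000, 80_000),
        # Hypothetical annual program cost per participant, normal around $2,500.
        "cost_per_participant": random.gauss(2_500, 300),
    }

def net_benefits(params, participants=200, baseline_offense_rate=0.5):
    """Compute net benefits for one draw of the parameters."""
    averted = participants * baseline_offense_rate * params["recidivism_reduction"]
    benefits = averted * params["benefit_per_averted_offense"]
    costs = participants * params["cost_per_participant"]
    return benefits - costs

# Repeat the draw-and-compute step many times to build a distribution
# of predicted net benefits that reflects the parameter uncertainty.
draws = sorted(net_benefits(draw_parameters()) for _ in range(10_000))

mean = sum(draws) / len(draws)
p05 = draws[int(0.05 * len(draws))]
p95 = draws[int(0.95 * len(draws))]
prob_positive = sum(d > 0 for d in draws) / len(draws)

print(f"Mean net benefits: ${mean:,.0f}")
print(f"90% interval: ${p05:,.0f} to ${p95:,.0f}")
print(f"Probability net benefits > 0: {prob_positive:.0%}")
```

Rather than a single point estimate, the output is a distribution: a mean, an interval, and the share of draws in which net benefits are positive—which is often the most useful number to hand a client.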
Every benefit-cost analysis of social policy should include a Monte Carlo simulation, because they all involve many uncertain parameters.
Given this really sharp focus on risk and uncertainty and the use of Monte Carlo simulations, is cost-benefit analysis being held to a higher standard than outcome evaluations?
If you think about a standard evaluation, we will produce an estimate of some impacts and a standard error for it. That’s what we’re trained to do. The difference is, in trying to be comprehensive in our CBA, we’re going to have many such parameters and all of them are uncertain and all of them can contribute to our uncertainty in net benefits. It’s not so much that we’re subjecting ourselves to a higher standard—we’re subjecting ourselves to much more uncertainty by trying to be comprehensive.