How do you scale evidence-based programs? A look at OJJDP’s Juvenile Justice Reform and Reinvestment Initiative
Shay Bilchik is director of the Center for Juvenile Justice Reform (CJJR) at the Georgetown University McCourt School of Public Policy. Kristen Kracke is a social science specialist at the Office of Juvenile Justice and Delinquency Prevention (OJJDP).
High recidivism rates in the juvenile justice system have long been viewed as intractable. Over the past several years, much attention has been given to evidence-based practice as an effective approach to reducing recidivism. Resources such as the Blueprints for Violence Prevention, OJJDP’s Model Programs Guide, and other similar collections of effective programs have shown the juvenile justice field which programs produce positive outcomes for youth. While this knowledge has greatly benefited the field, little progress has been made in taking evidence-based programs to scale.
All juvenile programs, whether developed through a grassroots effort or a brand-name protocol, should be measured, monitored for effectiveness, and retained if shown to produce positive results. Programs that do not produce positive results should be given an opportunity to improve against research-based criteria.
The Juvenile Justice Reform and Reinvestment Initiative (JJRRI) was launched by OJJDP to help jurisdictions achieve this level of accountability while building on their current service delivery models. Supported by OJJDP through the Office of Management and Budget’s Partnership Fund, JJRRI is a practical but comprehensive approach to reforming the juvenile justice system, using a research-based decision-making platform to inform system improvements and service delivery. The intended results are better outcomes for youth and increased cost-effectiveness. To test this approach, the JJRRI is being piloted in three sites: Milwaukee County, Wisconsin; Iowa; and Delaware.
A key component of the JJRRI is the Standardized Program Evaluation Protocol (SPEP). The SPEP provides a uniform metric for how effective a program is expected to be, based on how closely it conforms to the most effective practices found in Dr. Mark Lipsey’s meta-analysis of approximately 600 evaluation studies of juvenile justice interventions. The SPEP assigns a program a score between 0 and 100 points based on how closely it matches those practices on four dimensions: the type of service it provides, the quality of the service delivery, the amount of treatment, and the risk level of participating youth. This scheme allows states and localities to evaluate, and then improve, their juvenile justice services. For example, if a program scores lower than optimal on the amount of service it provides, JJRRI will work with the jurisdiction to develop strategies to increase the number of contact hours youth receive without redesigning the entire service system. Once improvements have been made, the programs will be reevaluated, allowing administrators to see the degree to which their programs have improved. This reevaluation will also provide the information necessary to conduct a cost-benefit analysis of the JJRRI.
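The additive structure of an SPEP-style rating can be sketched in a few lines of code. The four component names below come from the description above, but the point caps on each component and the simple summation are illustrative assumptions for this sketch, not the published SPEP allocations.

```python
from dataclasses import dataclass

@dataclass
class ProgramProfile:
    """One program's component scores. Caps below are hypothetical,
    chosen only so the four components sum to at most 100."""
    service_type_pts: int  # match to effective service types (0-30, assumed cap)
    quality_pts: int       # quality of service delivery (0-25, assumed cap)
    amount_pts: int        # duration and contact hours (0-25, assumed cap)
    risk_pts: int          # risk level of participating youth (0-20, assumed cap)

def spep_style_score(p: ProgramProfile) -> int:
    """Combine the four component scores into a 0-100 rating."""
    caps = (30, 25, 25, 20)
    vals = (p.service_type_pts, p.quality_pts, p.amount_pts, p.risk_pts)
    for val, cap in zip(vals, caps):
        if not 0 <= val <= cap:
            raise ValueError(f"component score {val} outside range 0..{cap}")
    return sum(vals)

# A program strong on service type and quality but short on contact hours:
score = spep_style_score(ProgramProfile(25, 20, 10, 15))  # -> 70
```

In this sketch, the example program's low `amount_pts` is exactly the kind of deficit the article describes: the overall score can be raised by increasing contact hours alone, without touching the other three components.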
Beyond improving specific services, the JJRRI also seeks broader system improvements, such as the improved matching of youth to services based on assessed risk and need, the linking of SPEP scores to system-level decision making about the array of services provided, increased cost-effectiveness and cost-efficiency, and reduced recidivism.
The Urban Institute’s Justice Policy Center is evaluating the initiative and conducting the cost-benefit analysis. The evaluation aims to understand the degree to which the SPEP rating system produces improvements in the quality and effectiveness of programs and services for youth involved in the juvenile justice system. It will also assess the costs associated with these programs and evaluate whether JJRRI leads to more cost-effective use of program resources. In particular, the evaluation and cost-benefit analysis will seek answers to the following questions:
- Are programs with high SPEP ratings more effective at reducing recidivism than programs with low ratings?
- Does use of the SPEP rating system lead to program improvements and reduced recidivism?
- Does use of the system improve the matching of clients to programs?
- Does use of the system improve efficiencies and cost-effectiveness in juvenile justice programs and services?
- Do JJRRI program improvements reduce recidivism?
The JJRRI builds on past work with other states that have implemented the SPEP and initiated other system improvements. For example, after North Carolina and Arizona adopted the SPEP, programs with higher SPEP scores saw greater recidivism reductions (measured at 6 and 12 months) than programs with lower SPEP scores did. Also, in 2011 the Juvenile Justice System Improvement Project (similar to the JJRRI) was launched in Pennsylvania, Connecticut, and Florida by CJJR, the Peabody Research Institute at Vanderbilt University, and the Comprehensive Strategy Group, a juvenile justice consulting firm. As part of this work, these three states are implementing the SPEP and enhancing their data systems to support it, improving their risk-assessment tools and processes, developing dispositional matrices to guide decision making, and undertaking other related reforms.
The JJRRI represents the next generation of this work and is still in its infancy. The three JJRRI sites have begun analyzing their information systems for the data needed to generate SPEP scores, selecting pilot programs, and gearing up to produce the first SPEP scores on those pilots. The initiative is far too young to discuss findings, but we’re optimistic that it will improve programs and outcomes for youth in all involved jurisdictions.
This project was supported by Grant No. 2010-JF-FX-0607 awarded by the Office of Juvenile Justice and Delinquency Prevention, Office of Justice Programs, U.S. Department of Justice. Points of view or opinions in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice.