Program Evaluation in the Real World: From Theory, to Findings, to Solutions

This guest blog was written by Steven M. Ross, PhD, with the Center for Research and Reform in Education at Johns Hopkins University.

A colleague recently reminded me of what might well be the oldest joke in academia. It goes something like this: a professor is told about a highly impactful research study and dismissively responds, “So it works in practice, but how will it work in theory?” Theories are always good starting points for innovations, but in the real world, the bottom line is discovering how, and to what degree, new programs and interventions improve people’s lives. About a year ago, I was approached by the George W. Bush Institute to help design and conduct evaluation studies of its “Areas of Engagement” in education reform, global health, human freedom, and economic growth. I was immediately impressed by the emphasis the Institute placed on conducting systematic research on each project to determine how well it was achieving defined goals.

But the contrast with typical program evaluation needs in education and the social sciences was striking. Here, there would be no obvious performance test or “Adequate Yearly Progress” measure for definitively determining success. The GWBI projects involve ambitious, multi-faceted interventions for improving the “human condition” (not merely test scores) worldwide. For example, one project, “The Freedom Collection,” is a permanent archive of the struggle for human freedom and democracy around the world, and it works to extend the reach of liberty by promoting democracy, political freedom, and individual rights. The “Women’s Initiative Fellowship Program” is a leadership program that engages women from around the world, with an initial focus on the Middle East, and is designed to empower and equip women to catalyze change. The “Red Ribbon Pink Ribbon” program endeavors to expand the availability of vital cervical and breast cancer screening and treatment for women at risk in developing nations in sub-Saharan Africa and Latin America. And there are presently nine others, with more on the way.

Evaluating success in these complex, real-world projects is challenging because the culminating outcomes (e.g., preparing women leaders or preventing cervical cancer) depend on so many intermediate steps and proximal achievements. But the very nature of the projects makes systematic and rigorous evaluation especially critical, not merely for judging success, but for identifying the specific strategies likely to ensure success in the future. For an evaluator-researcher like me, the opportunity to help tackle these important evaluation needs was much too compelling to pass up, and I didn’t. Future blogs will discuss various projects and their findings in more detail as the evaluation studies unfold. Unlike the professor in the old joke, finding out what works in theory will be much less of a focus than determining how the programming can be made to work most effectively and consistently to benefit people in a variety of real-world contexts.