by Jim Rugh, Vice President of IOCE
(Nermine Wally, Secretary of IOCE, will be co-presenter)
Being a professional evaluator could be said to involve the art of appropriately combining and bridging the paradigms, skills and needs of researchers, investigative reporters, program managers, funders, community participants and the general public, among others. It is challenging to make convincing cases to such a variety of stakeholders as to whether or not, or how well, a program met its objectives, or even whether those objectives were relevant to improving the quality of life of the intended beneficiaries in a sustainable way. Attempting to conduct an impact evaluation of a program using only one pre-determined tool is an unfortunate form of myopia. Prescribing to donors and senior managers of major agencies that there is a single preferred design and method for conducting all impact evaluations can have, and has had, unfortunate consequences for all those involved in the design, implementation and evaluation of international development programs.
Certainly relevant and rigorous impact evaluations need to be conducted. And they need to assess not only changes in the target population but also the counterfactual, i.e. what would have happened without the program’s interventions. A range of elements needs to be part of ‘rigorous’ impact evaluation. These go beyond randomly assigning subjects into treatment and control groups, then (typically) just measuring fairly direct cause-effect correlations of quantifiable short-term outcome indicators within a relatively simple blueprint logic model, on the assumption that, once tested, ‘best practice’ recipes can subsequently be replicated mechanically elsewhere, under different conditions. In the real world most situations and programs are complicated: multi-faceted in terms of actors, interventions and evolving conditions. Rigorous impact evaluation should include (but is not limited to):
1) thorough consultation with and involvement by a variety of stakeholders;
2) articulating a comprehensive logic model that includes relevant external influences;
3) getting agreement on desirable ‘impact level’ goals and indicators;
4) adapting the evaluation design, as well as data collection and analysis methodologies, to the questions being asked;
5) adequately monitoring and documenting the process throughout the life of the program being evaluated;
6) using an appropriate combination of methods to triangulate the evidence being collected;
7) being sufficiently flexible to account for evolving contexts;
8) using a variety of ways to determine the counterfactual;
9) estimating the potential sustainability of whatever changes have been observed;
10) communicating the findings to different audiences in useful ways.
(The point is that what is required for ‘rigorous’ impact evaluation goes well beyond initial randomization into treatment and ‘control’ groups.)
To address these and other considerations, this proposed presentation will identify a variety of evaluation designs to help clients and evaluators choose those that are most relevant and feasible given real-world situations. It will also identify a range of elements involved in planning for and conducting rigorous evaluations, or at least evaluations that are sufficiently relevant and reliable to answer key stakeholders’ questions about the results and impact being produced by international development programs. The presenters will also give examples from evaluation reports and point to sources of guidance for conducting holistic impact evaluations.
Jim Rugh is the American Evaluation Association’s (AEA) Representative to the International Organization for Cooperation in Evaluation (IOCE, the 4th network of NONIE), where he currently serves as Vice President. He has had 47 years of experience in international development, including 32 as a specialist in evaluation. For 12 years he was head of Design, Monitoring and Evaluation for CARE International. Since retiring from CARE he has continued to take consultancy assignments with a variety of international agencies. He is a co-author of the very popular RealWorld Evaluation book (www.RealWorldEvaluation.org) and has led workshops on that topic at professional conferences in many countries. In 2010 he was awarded the Alva and Gunnar Myrdal Award by AEA for contributions to the practice of evaluation.