By: Andre Lee (MIDP ’18)

With the steady rise in the number of impact evaluations (IEs) conducted each year, it should come as no surprise that not every IE will show a positive impact.

Figure 1. Impact Evaluations published per year (1981-2012) (Cameron et al., 2015)

The authors of “no impact” evaluations may understandably worry that their work will be neither academically published nor used for public policy. There is, however, still value in such information. Evidence that a particular program does not work paves the way for alternative interventions. Licona (2017) provides several examples in which null results from Mexican education programs encouraged adjustments such as revised selection criteria, consolidation of redundant programs, and budget optimization.

In Kenya, Innovations for Poverty Action (IPA) implemented a safe water program in which chlorine tablets did not lead to the expected decrease in diarrhea-related illnesses. The null results led to the subsequent implementation of an alternative: public chlorine dispensers. The dispensers had special metered valves that provided accurate dosing, reducing the likelihood that treated water would have a chemical aftertaste. In addition, chlorine-treated water requires agitation and waiting time to become fully safe to drink; because the chlorine was added at the water source, purification was accomplished on the program participants’ walk home. Appreciating past failures also led to greater cost-effectiveness: chlorine is so cheap that when it is sold or distributed in small containers, the packaging often costs more than the chemical itself—a shortcoming recognized during the “failed” program and subsequently addressed by its successor (Gudrais, 2014).

Because null results are not necessarily “useless” results, we should not automatically resort to excuses like “Perhaps the implementation lacked fidelity,” “Maybe the design of the intervention didn’t fully match the theory of change,” or “We just didn’t have enough observations.” After all, if expensive studies give us more questions than answers, it is harder to make the case for funding them in the first place (Brown, 2017). Be a great evaluator and appreciate that answers can come even from “no impact” evaluations.

For further reading, the U.S. Department of Education provides a helpful resource for developing possible answers to, and explanations for, null results (link: <>).



Brown, A. N. (2017, January 18). Null results should produce answers, not excuses. R&E Research for Evidence. Retrieved June 14, 2017, from

Cameron, D. B., Mishra, A., & Brown, A. N. (2015). The growth of impact evaluation for international development: How much have we learned? Journal of Development Effectiveness, 8(1), 1-21. doi:10.1080/19439342.2015.1034156

Gudrais, E. (2014, March 3). For Clean Water, An Approach That Works. Retrieved June 14, 2017, from

Licona, G. H. (2017, June 12). Any chance to use impact evaluations with no impact? The Mexican case. Retrieved June 14, 2017, from
