Let me preface this by saying that these thoughts have been brewing for a while; they do not refer to any specific client or individual experience…
The industry of evaluation is interesting. Most of our team comes from a research background. While we do program evaluations, we approach them with the tactics of researchers. I draw this distinction because I truly believe it makes a significant difference in both the process and the product. Some performance/project evaluation reports I’m reading have found the magical ability to report on an intervention’s impact or efficiency without any credible means of constructing a counterfactual. I struggle when a client asks us to report a quantitative impact for their project based on a single round of non-representative beneficiary data. It actually doesn’t bother me that they ask; it bothers me that previous evaluation teams have trained them to expect it.
This is where the research vs. evaluation background becomes important. Researchers prepare products to defend their findings against the public and their peers. Evaluators prepare products that will make their client think they have received a superior product. The only person they have to defend it to is the person who is paying for it. How could this not create a situation where a client can pay for results? (Not talking about the latest public policy trend.) My concern is this: the more impact a report shows, the happier a client is, and the more likely they are to hire the evaluation team again. Let me be clear – I don’t think that NGOs are intentionally paying firms to report on impact that doesn’t exist. I do believe that it would be easy for someone with limited knowledge of econometrics (the kind of person who is likely to hire a consultant in the first place) to think that impressive findings = best analysis.
A couple of things that I don’t completely understand:
- Why are implementing agencies (NGOs) responsible for hiring (and paying) the consultant to write reports to donors? Shouldn’t the donor hire the “Independent and External” evaluator?
- Who is reading these reports? I consistently ask clients who the target audience for a report is and hear, “The general public.”
- I have yet to have a client ask for a two-page report… When was the last time the “general public” read something about poverty that was longer than a magazine article and not written by Jeff Sachs or Greg Mortenson? I actually think Mercy Corps does a good job of publishing succinct and digestible reports.
- This industry needs to become clearer with its jargon – or better yet get rid of it. Until then, I will continue to attach this or this to the annex of reports so that I can use the ridiculous words the donor thinks they need to hear.
- Why is the need for a midterm/final evaluation report a surprise? It seems as if organizations do not realize they need to contract an evaluator until about 5 weeks before the report is due. If you glance at ALNAP’s consultancy list on any given week, this is what you will find: “5 year project ends next month. Need consultant to conduct field visit, review all project materials, conduct lit review, and write comprehensive impact report. Report due 3 weeks from today.” Five years ago, everyone knew an evaluation team would be needed. Why not contract them then? They could have helped design the logical framework…
End Rant