The OECD-DAC recently added a new criterion to its list of evaluation criteria, the de facto standard through which organizations like Causal Design frequently organize evaluations and reporting. Specifically, after a multi-year process of considering how best to adapt its existing criteria, the OECD added Coherence ("How well does the intervention fit?") to the existing five.
Reactions around our proverbial dinner table were appropriately mixed: How does this further a wider learning agenda? How does it differ from the existing Relevance criterion (which at times already overlaps with Sustainability)? What does "fit" actually mean, and how do we apply it meaningfully?
Indeed, at first blush, "coherence" is a bit clunky. In one way or another, we are probably already addressing how programs "fit" (a loose term), or are at least cognizant that a program is part of a greater milieu and either complements or competes with existing policies and programs, for good or ill. But the new criterion does add an analytical dimension that is often overlooked, and the political economists within us find the addition overdue and necessary, or at least an explicit recognition of key elements of the project environment that have real bearing on outcomes, and which some projects and interventions address better than others.
COHERENCE: The extent to which other interventions (particularly policies) support or undermine the intervention, and vice versa. Includes internal coherence and external coherence: Internal coherence addresses the synergies and interlinkages between the intervention and other interventions carried out by the same institution/government, as well as the consistency of the intervention with the relevant international norms and standards to which that institution/government adheres. External coherence considers the consistency of the intervention with other actors’ interventions in the same context. This includes complementarity, harmonisation and coordination with others, and the extent to which the intervention is adding value while avoiding duplication of effort. -OECD
Particularly with the inclusion of "external coherence" (versus the internal coherence of the program under focus), all of us will be pushed to "zoom out" and consider the wider context: existing programs and interventions, and enabling or conflicting public policy. (Fun fact: "complementarity," highlighted above by the OECD, is partly where my own research agenda took root, so my bias in favor of the inclusion of "coherence" is fully acknowledged!)
This certainly places a greater burden on those of us conducting evaluations on the ground, adding an extra, and for some a likely new, research component to every evaluation. But omitting it would mean (as too often happens anyway) examining a program and its technocratic components as if in a test tube, rather than in the messy world in which it is embedded. Examining and reporting on "coherence" will undoubtedly add nuance for implementing organizations, augment learning, and may lead to more adaptive and context-sensitive "interventions" (a term I use reluctantly).
My one caution would be that, in examining the public and programmatic spaces in which a program operates for the study of "coherence," we may still overlook half of the milieu, and arguably the half more relevant to local leaders and constituents. If anything, all of us in the evaluation space should seize this opportunity to go further: to examine, or at least acknowledge, the informal institutions, the role of religious leaders, and the historic grievances (and so on) that constitute the whole of the messy environment into which a program attempts to insert itself. Very often, these factors are stubborn, powerful, rich, and meaningful.
While I recognize that not every evaluation can be an ethnographic study of informal institutions, pushing the evaluation community to consider a broader context, and where the program under study fits within it, is good practice, fairer, and likely to be humbling.
Causal Design welcomes the addition of 'Coherence' to the list of DAC criteria and is already equipped to provide its clients with informed insights across sectors, geographies, and research objectives.
By: Matthew Klick,
Director of Research and Learning at Causal Design