New USAID Policy on Cost-Analysis in Impact Evaluations

Kristen Schubert, Senior Economist

At the end of October, USAID released a revised version of its policy on designing and implementing development projects and activities, governed by ADS Chapter 201. This is one of the largest revisions of the ADS in recent years, and it includes significant changes to the way USAID plans to operate its Program Cycle. While many of these changes aim to streamline USAID's internal processes and decision-making, some will have broader implications for the rest of our community. One sentence in particular stood out to us:

"All impact evaluations must include a cost-analysis of the intervention or interventions being studied" (ADS 201.3.6.4)

Collecting credible data on project expenditures is not as simple as it sounds. Sure, it is easy enough to find how much was spent on a project as a whole, but ex-post expenditures per intervention, per participant/output, or per outcome are rarely available. According to a 2019 World Bank working paper, fewer than one in five published impact evaluations globally estimate the cost per impact of the evaluated program. As a community, we do not understand costs per participant or per output for even the most common interventions, such as teacher training. Nor do we understand why these costs vary across contexts, delivery strategies, or design approaches. This simple sentence is poised to change that at USAID.

What is cost-analysis?

While cost-analysis is not defined in the ADS, we understand it to be any type of analysis that ties specific costs to interventions, outputs, or outcomes. Costs include monetary expenditures made by the implementing partner/donor, and may also incorporate in-kind donations and any expenditures by other stakeholders towards the success of the project (e.g., expenditures by the host government, other development partners, or the private sector). Finally, costs may also include opportunity costs, such as the time people spend contributing to the development outcome (for example, the time farmers spend in a training program).

Cost analysis does not necessarily mean costs must be tied to outputs (such as costs per participant, costs per teacher trained, costs per vaccine administered, etc.) or that costs are tied to outcomes (costs per life saved, costs per incidence of reduced child mortality, costs per incidence of children reading at grade level, etc.). Tying costs to outputs is a specific type of analysis called a cost-efficiency analysis, and tying costs to outcomes is referred to as a cost-effectiveness analysis.

Cost analyses can also include cost-benefit analyses, which directly compare costs to development outcomes by estimating the monetary value of those benefits. This allows for a calculation of a rate of return on the investments made in these interventions.
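To make the three types concrete, here is a minimal sketch in Python using entirely made-up figures for a hypothetical teacher-training program (none of these numbers, nor the variable names, come from any actual evaluation):

```python
# Illustrative only: the three ratio-based cost analyses described above,
# computed from hypothetical figures for a teacher-training intervention.

total_cost = 500_000.0                  # total expenditure (USD), hypothetical
teachers_trained = 2_000                # an output
children_reading_at_grade_level = 800   # an outcome attributed to the program
monetized_benefits = 1_200_000.0        # estimated monetary value of outcomes

# Cost-efficiency: cost per unit of output
cost_per_teacher = total_cost / teachers_trained                # 250.0 USD

# Cost-effectiveness: cost per unit of outcome
cost_per_reader = total_cost / children_reading_at_grade_level  # 625.0 USD

# Cost-benefit: compare monetized benefits directly to costs
benefit_cost_ratio = monetized_benefits / total_cost            # 2.4
net_benefit = monetized_benefits - total_cost                   # 700,000.0 USD
```

The three analyses use the same cost data; what changes is the denominator (output, outcome, or monetized benefit), which is why how costs are tracked and disaggregated matters so much.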

Cost analysis by intervention may answer questions such as: What did it cost to deliver this intervention? How much was spent on different tasks? How much did this intervention require of the host country? Cost-efficiency and cost-effectiveness analysis questions could include: What were the costs per unit? How did those costs vary across the provinces where the intervention was implemented, or across different stakeholder groups? How did this intervention compare, on a per-unit basis, to similar interventions by other organizations? Cost-benefit analysis questions could include: Are the benefits worth the costs invested in this intervention?

Because cost analysis is now a requirement specifically for impact evaluations, this requirement lends itself naturally to cost-effectiveness analysis. Impact evaluations, as opposed to the more common performance evaluations at USAID, establish a "counterfactual": an understanding of what would have happened in the absence of a project or activity. This isolates the change in a development outcome (or a set of outcomes) that is attributable to that particular intervention rather than to other, unrelated factors. That incremental change in the outcome(s) can then be tied directly to the costs used to achieve it.
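In other words, a cost-effectiveness figure from an impact evaluation divides cost by the counterfactual-adjusted (incremental) change in the outcome, not by the raw endline level. A minimal sketch, with illustrative numbers only:

```python
# Illustrative only: tying an impact estimate to costs.
# All figures are hypothetical, not from any actual evaluation.

readers_treatment = 4_500  # children reading at grade level, treatment group
readers_control = 3_000    # same measure in an equally sized comparison group
                           # (the counterfactual)

# Incremental outcome attributable to the intervention
additional_readers = readers_treatment - readers_control      # 1,500

# Cost-effectiveness: cost per additional child reading at grade level
total_cost = 600_000.0
cost_per_additional_reader = total_cost / additional_readers  # 400.0 USD
```

Note that dividing by the endline level alone (4,500 readers) would understate the true cost per attributable outcome, since many of those children would have learned to read anyway.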

So why do we care?

Our community regularly asks itself, "Is this cost-effective?" But how do we know whether our project is cost-effective without a way to compare it to other, similar projects? If we are installing latrines that cost $40 per household, is that good? Cost-effectiveness figures are not particularly useful without a comparison of the relative costs of achieving the same outcome through different interventions, different delivery strategies, or in different places. Encouraging the provision of cost data, especially data disaggregated into costs per intervention, per output, or per outcome, will help the development community collectively understand whether it is implementing interventions in a relatively efficient (or expensive) way.

Collecting cost data should help our community begin to establish benchmarks for cost-effective development programming. Imagine an implementing partner that would like to propose a program installing latrines at $40 per household, but reads an impact evaluation suggesting it can be done at $20 per household. It is not difficult to imagine that project designers would ask more questions, examine their design more closely to find inefficiencies, and adapt their project to stretch those limited funds to more households.

Similarly, cost analysis will provide data that is critical for decisions about scaling or replicating interventions and for understanding the cost trade-offs of those decisions. This may help in identifying costs when transitioning a project to a host government or designing budgets for follow-on activities.

What are some practical implications of this requirement for implementing partners?

As evaluators, we consistently encourage USAID and its partners to plan for impact evaluations as early as possible. Early planning helps ensure that the right kind of data can be collected to measure an impact; it is very difficult to collect the appropriate data towards the end of an intervention. The same principle applies to cost analysis questions.

Disaggregating and reporting expenditure data so that it can be tied to a specific intervention, output, or outcome is much easier if the financial systems are designed from the start to track data that way. Invoices covering multiple interventions or supplies are much easier to disaggregate in real time than five years later during an evaluation. Different cost analysis questions will require different data tracking procedures. For any activity that will undergo an impact evaluation, work closely with your USAID counterparts to understand their expectations for the cost analysis and design your financial systems and procedures accordingly.

In our experience, tracking and disaggregating material costs is not particularly challenging, especially if the financial systems are set up to do so. Challenges arise when evaluators need to determine how your fixed costs should be split across interventions (e.g., how much time did your Chief of Party spend on the teacher training intervention specifically?). Depending on the type of analysis, the evaluator may also need to ask how much time your beneficiaries or other stakeholders spent on the intervention, and what the value of that time is.
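One common approach (though by no means the only one) is to allocate shared fixed costs across interventions in proportion to reported time shares, and to value participants' time at a local wage rate. A sketch of that approach, with hypothetical names and figures throughout:

```python
# Illustrative only: allocating a shared fixed cost (the Chief of Party's
# salary) across interventions by reported time share, and valuing
# participants' opportunity cost of time. All figures are hypothetical.

chief_of_party_salary = 150_000.0
time_shares = {            # fraction of time reported against each area
    "teacher_training": 0.50,
    "school_grants": 0.25,
    "management": 0.25,
}

# Allocate the salary in proportion to time spent
allocated = {area: chief_of_party_salary * share
             for area, share in time_shares.items()}
# allocated["teacher_training"] -> 75,000.0 USD

# Opportunity cost of participants' time: hours spent x local hourly wage
farmers = 1_200
hours_per_farmer = 16
local_hourly_wage = 1.25
participant_time_cost = farmers * hours_per_farmer * local_hourly_wage
# -> 24,000.0 USD
```

The hard part in practice is not the arithmetic but the inputs: time shares usually have to be reconstructed from timesheets or interviews, which is far easier when the intervention tracked them from the start.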

The USAID Office of Education has already been working on creating guidance for cost analysis and reporting within their sector. This provides a useful blueprint for considerations in other sectors as well.

Our take-away

This short sentence tucked away in the ADS is cause for celebration here at Causal Design. It mirrors a quiet but steady effort at the Millennium Challenge Corporation (MCC) to introduce more ex-post cost-benefit analyses into its evaluations. By encouraging more analysis and provision of ex-post cost data, both of these USG development organizations are seeking a higher level of accountability for their development dollars. Eventually, this data will help implementing partners design and implement more cost-effective interventions that benefit a larger number of people in the communities we serve.