Causal Design

Categories: Blog Post, Opinion

Evaluation in the Age of Climate Change

Between Greta Thunberg’s Time ‘Person of the Year’ award and the almost concurrent end of do-nothing UN climate talks in Madrid (to borrow from our US Congress lexicon), two things were reinforced, again, these past weeks: the urgency of climate change, and our inability to address it, at least from on high.

Remarkably, however, for an industry that is rooted in evidence, which examines disparities in health and well-being across time and space (and especially among groups and sub-groups), and which holds dear an ethical mantra of ‘do no harm,’ Evaluation has done little either to mainstream the role of climate in our own practice or to ameliorate the industry’s impact (because, while disparate and far-flung, we are an industry).

Categories: Blog Post, Opinion, Research

DAC Coherence: First Thoughts

The OECD-DAC recently expanded its list of evaluation criteria—the de facto norm through which organizations like Causal Design frequently organize evaluations and reporting. Specifically, after a multi-year process of considering how best to adapt its existing criteria, the OECD added Coherence: How well does the intervention fit? to the existing five criteria.

Reactions around our proverbial dinner table were appropriately mixed: How does this further a wider learning agenda? How does it differ from the existing Relevance criterion (which at times already overlaps with Sustainability)? What does “fit” actually mean, and how do we use it meaningfully?

Grad Fellow Notes: Loops in Stata

This week’s blog features a set of Stata tricks we used to address a particular issue we encountered in our dataset. Many of the variables were in string form and were not usable for analysis in Stata. Furthermore, the values of the variables were not in the correct order for our purposes. A couple of commands came in handy here. Loops are useful for many kinds of repetitive commands: they allowed us to quickly recode the values of a set of variables that share similar categorical values, and also enabled us to destring sets of variables, converting them to numeric values. These numeric values were in turn reordered to fit the desired pattern. Finally, value labels were attached so that output displays the original text instead of just “1, 2, 3, etc.”
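The workflow above can be sketched roughly as follows. This is a minimal illustration, not the post's actual code: the variable names (q1–q5), the reversed 1–3 scale, and the label text are all hypothetical stand-ins.

```stata
* Convert a set of similar string variables to numeric in one pass
* (q1-q5 are hypothetical variable names)
foreach var of varlist q1-q5 {
    encode `var', generate(`var'_num)
}

* Reorder the numeric values to fit the desired pattern
* (here: reversing a hypothetical 1-3 scale)
foreach var of varlist q1_num-q5_num {
    recode `var' (1 = 3) (3 = 1)
}

* Attach value labels so output shows the original text, not "1, 2, 3"
label define scale 1 "Disagree" 2 "Neutral" 3 "Agree"
foreach var of varlist q1_num-q5_num {
    label values `var' scale
}
```

Because -encode- assigns codes alphabetically by string value, the -recode- step is what restores a substantively meaningful ordering.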

Categories: Blog Post, Graduate Fellow

Grad Fellow Notes: How to Check If Survey Respondents Are Paying Attention

When companies need to know what their consumer base is thinking, surveying is often the only scalable way to find out. Nevertheless, surveys take a lot of time and can be incredibly boring. As respondents’ patience is sapped by the umpteenth question and their willpower fades, they employ coping strategies known as “satisficing”—a fancy way of saying that respondents merely try to meet the lowest threshold of acceptability for an answer, rather than taking the time to give the best response. This can be seen when questionnaires come back with every answer being “5/5”, “extremely happy”, or other arbitrary patterns that call their authenticity into question and potentially hurt data quality.
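One simple screen for the all-“5/5” pattern is to flag respondents whose answers show no variation across a battery of items. A rough sketch in Stata, assuming hypothetical satisfaction items q1–q10 each scored 1–5:

```stata
* Row standard deviation across the battery; zero variation means
* the respondent gave the identical answer to every item
egen row_sd = rowsd(q1-q10)
gen straightliner = (row_sd == 0) if !missing(row_sd)

* See how common the pattern is before deciding how to handle it
tabulate straightliner
```

A zero row standard deviation is only suggestive, not proof of satisficing—some respondents may genuinely be “extremely happy” across the board—so flagged cases warrant review rather than automatic deletion.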

Categories: Blog Post, Graduate Fellow, STATA

Grad Fellow Notes: Stata Command -inspect-

On a recent project, the client wanted an idea of the skew of each of a large number of variables. The data originated from a satisfaction survey (1=very dissatisfied; 5=very satisfied). On our Excel presentation sheet, we were to choose from the following options to describe the population’s view regarding each variable: right-skewed (generally very dissatisfied), left-skewed (generally very satisfied), U-shaped (most were either very dissatisfied or very satisfied, with few being neutral), or normal-shaped (most were neutral, with few being either very dissatisfied or very satisfied).
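For a variable like this, -inspect- is a quick way to eyeball the shape, since it prints a miniature histogram alongside counts of negative, zero, and positive values. A minimal sketch, assuming a hypothetical item named satisfaction scored 1–5:

```stata
* Miniature histogram plus value counts: enough to distinguish
* right-skewed, left-skewed, U-shaped, and normal-shaped items
inspect satisfaction

* For a numeric cross-check, -summarize, detail- reports the
* skewness statistic (positive = right-skewed, negative = left-skewed)
summarize satisfaction, detail
```

With a large number of variables, both commands can be wrapped in a foreach loop over the variable list rather than run one item at a time.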

Categories: Blog Post, Graduate Fellow, RCT, Research

Grad Fellow Notes: The Impact of “No Impact” Evaluations

With the steady rise in the number of impact evaluations (IEs) conducted each year, it should come as no surprise that not every IE will show a positive impact. The authors of “no impact” evaluations will understandably worry that their work will not be academically published or used for public policy. There is, however, still value in such information: evidence that a particular program does not work paves the way for alternative interventions. Licona (2017) provides several examples where null results in Mexican education programs encouraged tweaks such as revised selection criteria, consolidation of redundant programs, and budget optimization.