Tuesday, September 24, 2013

Evidence in Evaluation -- beyond a [faith-based?] attachment to the dominant RCT-for-all approach?

As we try to build capacity, sustainability, ownership, and results, the question of evaluation comes up frequently. We are regularly faced with the "what's the evidence?" question - and behind it, the frequent, not-so-silent assumption that without an RCT (Randomized Controlled Trial), or some quasi-experimental version of the same concept, you really don't have anything to show.

RCTs are great when they are applied to questions, and in contexts, where they are appropriate and feasible. In my view, the "gold standard" image they carry has more to do with our dominant professional cultural belief system than with serious methodological discussion of the issues at hand. Talk long enough with people who actually conduct RCTs, and you'll find out that (1) they don't work all the time, (2) interpretation can be very sensitive to small tweaks in the model, and (3) on the burning, complex development questions of the day, they often require that parts of an intervention be parceled out for evaluation, or that the intervention be modified to fit the RCT design. In other words, it's like going for your physical and having the nurse say: "please bend your knees, as my measuring rod doesn't reach that high!" So, gold standard? Not always.

RCT implementers themselves are usually careful about their claims (except, of course, when responding to calls for proposals; we're only human). The problem lies with the mass of non-specialists who may have skipped the subtleties and developed a nearly faith-based attachment to RCTs as the answer to every question. Disciples are always the most problematic...

Let me stop venting. I just wanted to point out some useful and fairly recent publications, which shed an interesting and balanced light on the topic. Not surprisingly, the complexity of the evaluation question at hand comes into play regularly.

Let's promote a new Gold Standard: the method most appropriate to build evidence should be determined by careful consideration of the question at hand and, equally important, of who wants to know. And yes, that is harder than having a one-response-fits-all solution.

So, let's keep at it, and stand straight, even when we're asked to bend our knees. Fit the evaluation method to the program, not the other way around*.

Enjoy the readings, and if you have other suggested references, please use the comment box to add them.
DfID now recommends Stern et al. (2012), Broadening the Range of Designs and Methods for Impact Evaluations, in its solicitations for proposals.
The American Evaluation Association (AEA) has a recent special issue about Mixed Methods and Credibility of Evidence in Evaluation. It's a rich publication (which I'm still going through) deserving some serious attention. 
Finally, Michael Quinn Patton, former president of the AEA, offers a course based on his book Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. It's not about impact evaluation per se, but it's certainly relevant to the questions we face at CEDARS about sustainability evaluation. The book, and Michael himself, pack a punch.

Thanks,


Eric

* Thanks to Florence Nyangara for inspiring this entry.