Wednesday, January 25, 2017

Designing programs that work better in complex adaptive systems

Ann Larson
Social Dimensions
ann.larson@socialdimensions.com.au

I became interested in complex adaptive systems (CAS) in 2013, when I led a team identifying lessons from 18 programs scaling up women’s and children’s health innovations. It became clear that a critical success factor was how effectively the implementation team recognised and responded to challenges. With colleagues, I learned about the properties of CAS, examined whether they were present in several case studies of national scale-ups, and uncovered effective and ineffective responses to unexpected turns of events (Larson, McPherson, Posner, LaFond, & Ricca, 2015). As a result of this work, I see properties of CAS operating everywhere.

A consensus on how to create change within a CAS is emerging, based on experience and backed by a growing body of research. This presentation briefly describes some of the most commonly stated principles, and then asks, ‘why are these practices not informing the design of programs, especially in international development?’ It appears that this is not due to a lack of knowledge or interest. Instead, it arises from the nature of donor organizations and the power relations between those who commission, conduct and assess the designs, on the one hand, and the government officials, local NGO staff, front-line staff and community members, on the other.

Next, the presentation gives an overview of complexity-sensitive design approaches that could improve projects’ implementation and impact. Many promising methods are being trialled. However, they should come with a large yellow sticker: USE WITH CAUTION. Recent reviews suggest that they are difficult for stakeholders to understand and conduct (Carey et al., 2015) and are not congruent with donor requirements (Ramalingam, Laric, & Primrose, 2014). Importantly, pilots using systems thinking to design programs have not been validated; we do not know whether they actually produce accurate descriptions of how systems work or contribute to improved outcomes (Carey et al., 2015).

This presentation, originally given at an evaluation conference, argues that evaluators and the process of evaluation should be central to complexity-sensitive design. First, the information used to inform designs needs to unite rigorous, generalizable evidence and nuanced experience of working within the specific context. Evaluators regularly draw on both sources of knowledge. Second, these design approaches need evaluating. What value do they offer over conventional methods and are they really as appropriate, effective and efficient as their proponents promise?

Presentation is available here


Wednesday, April 27, 2016

A review of "Systems science and systems thinking for public health: a systematic review of the field"

By Eric Sarriot

A recent publication in BMJ Open, Systems science and systems thinking for public health: a systematic review of the field by Gemma Carey et al., describes findings from a systematic review of the current literature on systems science research in public health, with a focus on specific “hard” and “soft” systems thinking tools currently in use. A look at the literature sub-selected for analysis reveals the absence of some pertinent articles that might have enriched the discussion, but as the authors acknowledge, quoting Williams and Hummelbrunner, holism is “somewhat of an ideal. In reality all situations, all inquiries are bounded in some way.”

An interesting application of systems thinking can be found in David Peters and Ligia Paina’s paper on the Develop-Distort model. That paper does not reference the great thinkers of Soft Systems Methodology or System Dynamics, which may be why it did not qualify for inclusion in this systematic review, yet it is also of great interest. With this model, and other emerging ones, the question becomes whether new tools and methods that abide by key systems principles should, and could, fit into the constantly evolving field of systems thinking. Of course, this question, in and of itself, carries some bias.

The review by Carey et al. goes on to assign the sub-selected literature to four categories of systems thinking:
  • Position pieces: the literature in this category mostly advocates for greater use of systems thinking in public health;
  • Papers with an analytic lens: most articles here note the caveat that once an analysis using a systems thinking approach is complete, many researchers revert to previously used analytic tools, likely due to a lack of practice and training in systems methodologies;
  • Benchmarking of best practices: where systems thinking is used to evaluate public health practice, with some articles judging a best practice by whether it abides by systems thinking principles rather than by whether the application of systems thinking advanced understanding and performance; and
  • Systems modelling: modelling of real-life or dynamic processes using systems thinking.

While the discussion is fairly long, it makes several good points: systems thinking is not a panacea and should not be approached as one; there is a need for greater verifiability of models; and, last but not least, public health researchers need better skills in systems methods and thinking. The authors then move to a discussion of the value of soft systems methodologies, emphasizing how metaphors can serve as a useful heuristic. They describe this evolution in thinking as a challenge to how health policy makers define “evidence,” and conclude that systems thinking in health will improve if and as we learn to ask the right questions of systems science and play down some of the accompanying rhetoric.

Thursday, December 3, 2015

Scenario Planning for Development

CEDARS' Sustainability Framework has long emphasized the importance of planning for various scenarios by taking the long view in project planning, management, and evaluation activities. This point was reiterated in a recent paper by Eric Sarriot et al., A causal loop analysis of the sustainability of integrated community case management in Rwanda, which also examined scenarios.
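
Causal loop diagrams are qualitative, but the loops they describe can be simulated. For readers curious about the mechanics, here is a minimal sketch of one reinforcing-plus-balancing loop stepped forward in time; the variables, links, and rates below are invented for illustration and are not taken from the Rwanda analysis.

# Minimal causal-loop sketch (hypothetical, not from the Rwanda paper):
# a reinforcing loop between CHW motivation and service utilization,
# balanced by workload. Euler integration over monthly time steps.

def simulate(months=60, dt=1.0):
    motivation, utilization = 0.5, 0.2   # initial levels, scaled 0..1
    history = []
    for _ in range(months):
        workload = utilization              # more use -> more work (balancing link)
        # Utilization raises motivation (recognition); workload erodes it.
        d_motivation = 0.10 * utilization - 0.08 * workload - 0.02 * motivation
        # Motivated CHWs deliver better services, raising utilization (reinforcing link).
        d_utilization = 0.15 * motivation - 0.05 * utilization
        motivation = min(1.0, max(0.0, motivation + dt * d_motivation))
        utilization = min(1.0, max(0.0, utilization + dt * d_utilization))
        history.append((motivation, utilization))
    return history

if __name__ == "__main__":
    for month, (m, u) in enumerate(simulate(), start=1):
        if month % 12 == 0:
            print(f"year {month // 12}: motivation={m:.2f} utilization={u:.2f}")

Even this toy version shows the characteristic behaviour of coupled loops: the long-run outcome depends on the balance of link strengths rather than on any single input, which is precisely what makes sustainability hard to predict from one variable at a time.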

Source: http://neomer.us/wp-content/uploads/2012/11/page_scenarioplanning.png
As Wilson Center's New Security Beat writes in its article Scenario Planning for Development: It's About Time, "scenario planning systematically looks at existing and emerging trends and their plausible - though sometimes unlikely - combinations in order to reduce risk. It's an exercise that does not produce single point predictions, but examines a range of possible situations to help prepare for the unexpected."
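
As a toy illustration of that combinatorial step, the sketch below enumerates every combination of a handful of trends and ranks the resulting scenarios by rough likelihood. The trends and probabilities are invented for the example, and treating the trends as independent is a deliberate simplification.

# Minimal sketch of the combinatorial step in scenario planning.
# Trends and likelihoods are hypothetical, not from the New Security Beat piece.
from itertools import product

trends = {
    "rainfall":   [("normal", 0.7), ("prolonged drought", 0.3)],
    "governance": [("stable", 0.8), ("contested transition", 0.2)],
    "disease":    [("endemic baseline", 0.9), ("major outbreak", 0.1)],
}

scenarios = []
for combo in product(*trends.values()):
    states = tuple(state for state, _ in combo)
    likelihood = 1.0
    for _, p in combo:
        likelihood *= p  # assumes independent trends, a simplification
    scenarios.append((states, likelihood))

# Review every scenario, least likely first: the rare combinations are
# often the ones worth preparing for.
for states, p in sorted(scenarios, key=lambda s: s[1]):
    print(f"p={p:.3f}  " + " / ".join(states))

The point is not the numbers but the habit: the low-probability rows at the top of the output are exactly the "unlikely combinations" the quote asks planners to take seriously, rather than the single most likely future.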

It's interesting, and heartening, to see USAID change its approach to development over the last few years: incorporating longer-term goals, "adaptive programming" to better respond to external shocks such as natural disasters, disease outbreaks, or shifts in governance structures, and an increasing focus on "exit pathways". New Security Beat notes that these adaptive approaches are increasingly embraced in response to rising uncertainty, shortfalls in preparedness and response, and growing complexity. This seems likely to shape development work for years to come.

Read more about scenario planning and its increasing use by USAID here.

Tuesday, December 1, 2015

Social accountability - review of existing literature and learning

Social accountability is essential to improving health outcomes and facilitating health sector reform. The following links provide two important summaries of the literature on the topic.
CORE Group

Source: http://www.coregroup.org/storage/documents/Resources/Tools/Social_Accountability_Final_online.pdf
This review discusses three social accountability models used in various sectors at community, district, and national levels to increase accountability and improve health outcomes. The approaches reviewed, analyzed, and described are: (1) Citizen Voice and Action, implemented by World Vision; (2) Partnership Defined Quality, implemented by Save the Children; and (3) the Community Score Card, implemented by CARE.


Voice and Accountability in the Health Sector
Health & Education Advice & Resource Team (HEART)

This resource by HEART is a concise review of key peer-reviewed publications on voice and accountability in the health sector. It assesses specific initiatives in the health sector, uses Bangladesh as a country example, and presents available models for increasing social accountability.


Wednesday, November 25, 2015

Training Opportunity

CEDARS tends to concentrate on health-in-development and related topics, and we have paid a lot of attention to design, management and evaluation processes that enhance sustainability, making use of quantitative data as much as possible while remaining firmly anchored in implementation.

Here, however, is a major training opportunity for people who are interested in the hard science, the quantitative underpinnings of some of the major issues in sustainability. There is no better place to do it than the Santa Fe Institute. The focus is on urban sustainability, but it's probably a good opportunity to learn about methodologies we need to pay more attention to.

Here's to the young innovators ready to learn new things. And if you're not that young, you just qualified by virtue of thirsting to learn.

Check out the course information here.

Eric

Friday, January 30, 2015

General Relativity Comes to Global Health—thinking about sustain-scale in health systems interventions

ICF International Center for Design and Research in Sustainability | Sustainable Health & Human Development (CEDARS), January 2015


A metaphor for sustain-scale?
I want to offer a metaphor to question some of the ways that we, as the global health community, have been talking about and dealing with two concepts: sustainability and scale.

I need to start with a little bit of recent history…


Tuesday, November 4, 2014

Resources for adaptive management practices and cost-effectiveness in development

We recently added two new documents to our CEDARS Center resource repository to help development professionals think about adaptive management practices and program cost-effectiveness during implementation, planning, or evaluation.

Have a read through below and click through the links. As always, should you have comments or questions (or additional resources we can share with the sustainable health and human development community), do not hesitate to reach out to us.

Navigating Complexity: Adaptive Management at the Northern Karamoja Growth, Health, and Governance Program [document available here]

This paper, by Engineers Without Borders Canada under contract with Mercy Corps, is a case study of adaptive management practices within Mercy Corps' USAID-funded Growth, Health & Governance (GHG) program. The paper covers building the culture necessary for learning and adaptation, some tools and processes that support adaptation, and implications for funders and practitioners. Throughout, the document emphasizes culture as the most important factor in successful adaptive management and offers strategies and attitudes deemed necessary for building that culture. The tools and processes are presented as ways of reinforcing the described culture. One of the tools, the Results Chain, is an interesting way of conceptualizing the path to the project's goals and is similar to a results framework, as the sketch below illustrates.
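
For readers who think in code, a results chain is easy to picture as a linked structure in which each causal step carries the assumptions that must hold for the next step to follow. The sketch below is hypothetical; the links and labels are invented and are not taken from the GHG program's actual chains.

# Minimal sketch of a results chain as a data structure (labels are
# hypothetical, not taken from the GHG program's actual chains).
from dataclasses import dataclass, field

@dataclass
class Link:
    level: str        # e.g. "activity", "output", "outcome", "goal"
    statement: str
    assumptions: list[str] = field(default_factory=list)  # what must hold for this link

chain = [
    Link("activity", "Train agro-dealers in input handling"),
    Link("output", "Agro-dealers stock quality inputs",
         assumptions=["trained dealers stay in business"]),
    Link("outcome", "Farmers adopt improved inputs",
         assumptions=["inputs are affordable", "farmers trust dealers"]),
    Link("goal", "Household incomes rise"),
]

# Walking the chain surfaces the assumptions between each causal step,
# which is where adaptive management focuses its monitoring.
for prev, nxt in zip(chain, chain[1:]):
    print(f"{prev.level} -> {nxt.level}: {nxt.statement}")
    for a in nxt.assumptions:
        print(f"    assumes: {a}")

Making the assumptions explicit at each link is what turns the chain from a planning diagram into a monitoring tool: when an assumption fails, the team knows exactly where to adapt.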

A blog post with a summary is available here: http://usaidlearninglab.org/lab-notes/navigating-complexity-adaptive-management-northern-karamoja-growth-health-governance

Cost-Effectiveness Measurement in Development: Accounting for Local Costs and Noisy Impacts [document available here]

This policy research working paper from the World Bank Group, Africa Region, can help individuals think about cost-effectiveness within their programs from implementation, planning, or evaluation perspectives. As evidence from rigorous impact evaluations grows in development, there have been more calls to complement impact evaluation analysis with cost analysis, so that policy makers can make investment decisions based on costs as well as impacts. This paper discusses important considerations for implementing cost-effectiveness analysis in the policy making process. The analysis is applied in the context of education interventions, although the findings generalize to other areas. First, the paper demonstrates a systematic method for characterizing the sensitivity of impact estimates. Second, the concept of context-specificity is applied to cost measurement: program costs vary greatly across contexts -- both within and across countries -- and with program complexity. The paper shows how adapting a single cost ingredient across settings dramatically shifts cost-effectiveness measures. Third, the paper provides evidence that interventions with fewer beneficiaries tend to have higher per-beneficiary costs, resulting in potential cost overestimates when extrapolating to large-scale applications. At the same time, recall bias may result in cost underestimates. The paper also discusses other challenges in measuring and extrapolating cost-effectiveness measures. For cost-effectiveness analysis to be useful, policy makers will require detailed, comparable, and timely cost reporting, as well as significant effort to ensure costs are relevant to the local environment.
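
The paper's core quantity is a simple ratio, which makes its findings easy to demonstrate with toy numbers. The sketch below is illustrative only; the figures are invented, not drawn from the paper's education data.

# Illustrative sketch of a cost-effectiveness ratio (invented numbers):
# how one local cost ingredient, program scale, and a noisy impact
# estimate each move the ratio.

def cost_effectiveness(fixed_cost, unit_cost, beneficiaries, impact):
    """Cost per unit of total impact (per-beneficiary effect x reach)."""
    total_cost = fixed_cost + unit_cost * beneficiaries
    return total_cost / (impact * beneficiaries)

# Same intervention, one swapped cost ingredient (e.g., local wage rates).
for wage in (2.0, 4.0, 8.0):
    ce = cost_effectiveness(fixed_cost=50_000, unit_cost=wage,
                            beneficiaries=10_000, impact=0.15)
    print(f"unit cost {wage:>4}: ${ce:,.2f} per unit of impact")

# Small pilots spread the fixed cost over few beneficiaries, which can
# overstate per-beneficiary cost when extrapolating to scale.
for n in (500, 10_000):
    ce = cost_effectiveness(50_000, 4.0, n, 0.15)
    print(f"{n:>6} beneficiaries: ${ce:,.2f} per unit of impact")

# A noisy impact estimate (point estimate +/- standard error) translates
# directly into a wide band on the cost-effectiveness ratio.
impact, se = 0.15, 0.05
low, high = (cost_effectiveness(50_000, 4.0, 10_000, i) for i in (impact + se, impact - se))
print(f"CE band: ${low:,.2f} to ${high:,.2f}")

Each printout mirrors one of the paper's points: a single local cost ingredient, the scale of the beneficiary base, and the noise in the impact estimate can each shift the ratio severalfold, which is why comparable, locally grounded cost reporting matters so much.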

You can find additional information on this paper here: http://documents.worldbank.org/curated/en/2014/09/20196499/cost-effectiveness-measurement-development-accounting-local-costs-noisy-impacts-cost-effectiveness-measurement-development-accounting-local-costs-noisy-impacts