Have you ever wondered how offering family planning to communities in Madagascar might be affecting the size of fish in the Mozambique Channel? Or how working with octopus gleaners may be impacting women’s use of contraception? Or how seaweed farming might be improving children’s access to education? Well, I certainly have!
Blue Ventures’ award-winning Population-Health-Environment (PHE) approach, with its biodiversity conservation, livelihoods, health and education initiatives, certainly appears to be making an impact, but how? And how would we find out?
Part of the challenge lies in the fact that our PHE programme is a “complex intervention” with several interacting components. Often these generate non-linear chain reactions. (For example, a woman who uses family planning to space her children has more time to earn money from growing seaweed, which she uses to send her children to school, increasing their opportunities to pursue livelihoods beyond fishing, such as midwifery or teaching, and thus reducing pressure on marine ecosystems.) These kinds of interactions and feedback loops can be difficult to predict, and may lead to unintended consequences. In addition, context clearly matters. I doubt that this approach would have worked in exactly the same way in a different part of Madagascar.
All of this means that the traditional scientific approach to determining the effectiveness of an intervention (comparing outcomes between two similar groups where one group has been exposed to an intervention and the other hasn’t) may not tell us what we want to know. It definitely won’t tell us anything about how any changes have been brought about, nor will it tell us about the mediating effect of context on the generation of the outcomes that we see. We need a different approach to understanding our programme.
Thanks to my friends and colleagues at the University of Exeter Medical School, Professors Paul Dieppe and Rob Anderson, I have been introduced to the realist approach to evaluation, and have recently returned from the first International Conference on Realist Approaches to Evaluation and Synthesis in Liverpool. I was there with Paul and Rob to present our preliminary work on evaluating our PHE programme, to seek feedback on it, and to learn more about this approach in general.
The realist approach to evaluation, which is particularly suitable for looking at complex programmes, seeks to understand “what works, for whom, in what circumstances and why” (Pawson and Tilley, 1997). Using this approach offers us the opportunity to understand how our programme works, as well as to measure its impact. This approach, with its emphasis on outcomes being contextually dependent (and on “mechanisms of change” interacting with the context to generate the outcomes that we observe), will help us to replicate our programme effectively, or more precisely, to replicate the results we wish to see in different contexts.
The approach has clearly proven its value in helping others to understand how their programmes work. I came across many examples of it being used to evaluate interventions in the NHS, learned that DFID has funded research using the realist approach, and saw it applied to development programmes that are every bit as holistic as anything we would label as PHE.
Professor Dieppe presented our PHE programme and his initial findings (having interviewed nearly 30 programme staff and community representatives), and a useful discussion ensued about how we could build on this work to conduct a comprehensive realist evaluation. I was reassured by the fact that all of those present demonstrated some understanding of complexity and open systems (the idea, for example, that it simply isn’t possible to “close” the Velondriake area off to all influences other than Blue Ventures’ activities). Everyone also seemed to understand multi-sector working, which was very different to my experience of presenting PHE at health or conservation conferences, where few organisations stray from a single-sector lens.
I was reminded of the importance of engaging our partner communities in our evaluation, and of allowing their perspectives to drive the process of building the theory of how our programme works. It would be all too easy for us to focus on and prioritise the ideas generated through discussion with a limited number of colleagues here in the UK. Not only does this risk completely excluding some valuable insights, it misses the opportunity for greater community engagement and participation.
Many of the qualitative, “value-added” benefits of the PHE approach and possible causal mechanisms have been described before, yet getting support or funding for PHE programmes remains a real challenge for many of us. Why is this? Are we not doing a good enough job of communicating the value of PHE? Do we need to tell a different story? Are our results not convincing enough? As well as helping us to understand how our programme works, I would like our evaluation to have credibility with funders, policy makers and implementing organisations. Being no expert in realist evaluation, I’m deeply grateful to Professor Anderson for his ongoing guidance and support as we think about designing and implementing our evaluation.
I return from Liverpool with greater confidence that realist evaluation will help us to deepen our understanding of how our programme works, as well as help us and others to replicate our results. I believe that it will provide us with more of the evidence that we need to demonstrate the value of programmes such as ours (and I would welcome anyone’s thoughts on what else we can do to support greater adoption of PHE). I also come away reassured that minds significantly greater than mine have grappled with the issue of how to evaluate complex interventions, and I look forward to harnessing this wisdom. If we get this right, we stand to make an important contribution to our collective understanding of how and why PHE programmes work.