Rachel Aston, Ruth Aston, Timoci O’Connor
How often do we really use research to inform our evaluation practice? Many of us tend to use research and evidence to help us understand what we are evaluating, what outcomes we might expect to see and in what time frame, but we don’t often use research to inform how we do evaluation.
At both the Australian Evaluation Society’s International Conference in September and the Aotearoa New Zealand Evaluation Association conference in July, Rachel, Ruth, Timoci and Robbie Francis presented different perspectives on this discussion: Tim and Rachel as evaluators, Ruth as a researcher, and Robbie as a program practitioner.
In Ruth’s study of 7,123 complex interventions to reduce exposure to modifiable risks of cardiovascular disease, she found that two program-specific factors can influence the magnitude of an intervention’s impact: the quality of its design and the quality of its implementation.
Eleven specific indicators make up these two factors, but Ruth found that 80 per cent of the reviewed interventions did not monitor many of them. She concluded that we often fail to monitor the very indicators that could give us critical information about how to improve the effectiveness of complex interventions.
Research can help us address this practice gap. Evaluative thinking, along with design thinking and implementation science, can help us operationalise and embed the processes, principles and decision-making structures that facilitate progressive impact measurement and continuous improvement.
An evaluation practice example
Rachel is currently working on a three-year impact evaluation of a stepped care model for primary mental healthcare services. One of the challenges in this project is that mental health outcomes are unlikely to shift over the course of the evaluation. Further, the stepped care model itself is dynamic: it is being rolled out in three stages over two years, but progress towards impact needs to be monitored from day one.
By incorporating the research on the importance of monitoring design and implementation, we are able to look at the quality, fidelity and reach of the implementation of the stepped care model. One of the tools we’re using to do this is the Consolidated Framework for Implementation Research (CFIR), a validated framework of constructs developed by Laura Damschroder and her colleagues: https://cfirguide.org/.
The constructs and the overall framework can be used to build data collection tools, such as surveys, interview schedules and observation protocols, and to develop coding frameworks for analysis. By using the CFIR and focusing on how, how well and how much of the stepped care model has been implemented, we can develop actionable feedback to improve implementation and, consequently, the effectiveness of the model.
A program practitioner’s perspective
Robbie Francis, Director of The Lucy Foundation, described how the Foundation has used information gained from monitoring and evaluating the design and implementation of its Coffee Project in Pluma Hidalgo, Mexico. Her reflections reinforce how adaptations can be made to program design and implementation to improve the potential for impact. Robbie also offers an important practical message about the place of principles in evaluating the impact of the Coffee Project.
Video: Robbie Francis, The Lucy Foundation
We have a role and a duty as evaluators to use the evidence we have at hand to inform and enhance our practice. This includes traditional research, evaluation practice experience, and program practitioner insights.
While this is important for any evaluation, it is arguably more important when evaluating complex interventions aiming to achieve social change. If we are going to continue to invest in and evaluate complex interventions, which seems likely given the challenging nature of the social problems we face today, then we need to think critically about our role as evaluators in advocating for the monitoring of design and implementation quality.
Above all, we need to accept, review and use all forms of evidence we have at our disposal. This will enable us to continually learn, become evidence-informed practitioners and use evaluative thinking in our work for the purposes of improving our practice, and generating useful, accurate and timely evaluation.
Damschroder, L., Aron, D., Keith, R., Kirsh, S., Alexander, J. and Lowery, J. (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science, 4(1).
Dr Ruth Aston
Research Fellow, University of Melbourne
Ruth has nine years’ experience in research and project evaluation. Ruth has managed several large-scale evaluations across Australia and internationally. She recently completed her PhD on 'Creating Indicators for Social Change in Public Health'. She also has a strong interest in interdisciplinary research with diverse cultural groups.
Rachel Aston
Senior Consultant, ARTD Consultants
Rachel is an experienced social researcher and evaluator who joined ARTD in 2018. She brings over six years’ experience conducting research and evaluation for government, NGOs and in the higher education sector. Rachel’s academic background is in anthropology and social research.
Timoci O’Connor
Lecturer, University of Melbourne
Timoci has over ten years’ experience in conducting research and evaluation projects in the public health, education, international development and community sectors. He holds a Master of Public Health and is currently doing his PhD, exploring the nature of feedback in community-based health interventions that utilise mobile technologies and describing its influence on program outcomes. He is I-Kiribati/Fijian.
Robbie Francis
Director, The Lucy Foundation
Robbie Francis is a young woman who has packed a lot into 29 years. Having lived with a physical disability since birth, she has worked in the disability sector for over a decade as a support worker, documentary maker, human rights intern, researcher, consultant and as an advisor. In 2014 Robbie co-founded The Lucy Foundation, a social enterprise committed to empowering people with disabilities by working with local communities to promote education, employment and a culture of disability inclusiveness through sustainable trade.
By Liz Smith
At the 2018 AES conference, Ignite presentations were introduced to light some fire in our evaluation bellies. Ignite presentations follow a set formula: five minutes and 20 slides, with each slide advancing automatically after 15 seconds. Presenters have to pitch their idea quickly and concisely.
My thoughts in 2017 when submitting an Ignite conference abstract were: ‘Great idea, let’s get a piece of this action. Let’s push my boundaries and try something new. Woohoo!’ In contrast, my thoughts one week out from #aes18LST were: ‘WTF have I got myself into this time!’
Let me share my lessons from doing my first Ignite presentation.
Effective Ignite presentations have one central theme about which you are passionate
I was arguing for short, plain English evaluation reports and wanted to offer tips for creating readable reports. Over the last two years, Litmus has implemented a company-wide plain English strategy and was a finalist in New Zealand’s Plain English Awards, so this is a topic I am very passionate about and have much (probably too much) to say on.
Work on content first to create a compelling and interesting story
I followed the advice from Ignite gurus and developed my story first. I worked out that 15 seconds equals about 35 words a slide, and I stuck to this rule of thumb. My first writing attempt fit the formula, but it was a shopping list of tips for writing readable reports. Pretty boring, as my critical friends agreed!
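The arithmetic behind the rule of thumb can be sketched in a few lines. This is only an illustration: the post gives the 35-words figure directly, and the speaking pace of roughly 140 words per minute is my assumption, chosen because it reproduces that figure.

```python
# Ignite format, as described in the post: 20 slides, each auto-advancing
# after 15 seconds.
SLIDES = 20
SECONDS_PER_SLIDE = 15

# Assumed comfortable speaking pace (not stated in the post): ~140 words/min.
WORDS_PER_MINUTE = 140

total_seconds = SLIDES * SECONDS_PER_SLIDE
words_per_slide = WORDS_PER_MINUTE * SECONDS_PER_SLIDE / 60

print(f"Total talk time: {total_seconds // 60} minutes")              # 5 minutes
print(f"Word budget per slide: {words_per_slide:.0f} words")          # 35 words
print(f"Whole-script budget: {SLIDES * words_per_slide:.0f} words")   # 700 words
```

The useful by-product is the whole-script budget: at this pace, the entire Ignite script has to fit in roughly 700 words, which explains why so much content ends up on the cutting room floor.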
I decided to use an analogy to build a more compelling presentation around plain English reporting. I was presenting in the week of Suffrage 125, the celebration of New Zealand women winning the right to vote in 1893. I set myself the challenge of using women’s suffrage to spark interest in my presentation, and found that using it as a backdrop resulted in a story that caught and held attention.
As a feminist, I also wanted to shine a light on women’s suffrage and women’s achievements at #aes18LST.
Three key ideas and two critical friends are a winning formula
Getting to a compelling Ignite presentation that captured the audience’s attention was challenging. Two colleagues, Phoebe Balle and Sam Abbato, advised and cajoled me through the development phases: the phase when many an idea hits the cutting room floor. At times a painful, but very necessary, process. The key tip they both constantly reiterated was that I needed three points to support the central idea. Dump the rest!
Practice, then practice some more, and if needed cheat!
You have no excuse not to practice. In one hour, you can run through your Ignite presentation at least ten times. Practice does pay off: you get a sense of the flow between your script and your slides. And again, more Ignite content hits the cutting room floor to burn to ashes.
On the big day, you are supposed to present your Ignite eloquently, without reference to your carefully crafted script (really!). The argument goes that the story will flow better and be less stilted. But be warned: the time constraints do not allow for off-piste, off-the-cuff ideas.
I, like some others at #aes18LST, cheated: we had our scripts (our comfort blankets) to keep us on track. Not being a purist, I’m okay with this. I believe it is better to give things a go, in whatever way works for you.
You need to breathe slowly and prepare for the worst
In preparing for my Ignite, I watched others present at #aes18LST and refined my presentation. The AES audience is definitely on the presenter’s side. However, I observed that the audience’s anxiety levels mirrored the presenter’s. The trick, I found, was to breathe and pause to create an environment conducive to the audience listening.
You also need to prepare for the worst: technology failure. It happened! Kudos to Jade Maloney, Katherine Rich and Joanna Farmer, who presented their Ignites slideless. Their ability to create visual images through words and actions was admirable and entertaining. Missing out on Joanna’s cat-in-a-box picture was a conference low.
#aes19 is your chance to give Ignite a go
The Ignite presentations at #aes18LST were informative and entertaining. I was amazed by how much you can learn from a carefully structured five-minute Ignite presentation.
I am hoping #aes19SYD has the option for this dynamic presentation format. AES conferences offer evaluators a safe environment to present and test their boundaries. So what is your big Ignite theme for 2019? Go on, light some evaluation fires!
You can find more technical tips in the great resources I used to develop my Ignite presentation:
Liz Smith is a Partner at Litmus Limited, a New Zealand-based research and evaluation agency specialising in the health and justice sectors, and was Vice President of the AES from 2013 to 2018.
By Denika Blacklock
I have been working in development for 15 years and have specialised in M&E for the past 10. In all that time, I have never been asked to design an M&E framework for, or undertake an evaluation of, a project that did not focus entirely on a logframe. This is understandable: a logframe is a practical tool for measuring results – particularly quantitative results – in development projects.
However, as the drive for increased development effectiveness and, thankfully, more accountability to stakeholders has progressed, measuring what we have successfully changed or improved (versus simply what we have done) requires more than just numbers. More concerning is the fact that logframes measure linear progression toward preset targets. Any development practitioner worth their degree can tell you that development – and development projects – are never linear, and that at our best we guess at what our output targets could conceivably be under ideal conditions, with the resources (money, time) available to us.
I have lately found myself faced with the challenging scenario of developing M&E frameworks for development projects in which ‘innovation’ is the objective, but I am required to design frameworks with old tools like logframes and results frameworks (organisational/donor requirements) which cannot accommodate actual innovation in development.
The primary problem: logframes require targets. If we set output targets, the results of activities will be preconceived, not innovative. Target setting moulds how we design and implement activities. How can a project be true to the idea of fostering innovation in local development with only a logframe at hand to measure progress and success?
My argument was that if the project truly wanted to foster innovation, we needed to ‘see what happens, not decide beforehand what will happen with targets’. I also argued that a target of ‘x number of new ideas for local development’ was a truly ineffective (if not irresponsible) way of going about being ‘open-minded about measuring innovation’. There could be 15 innovative ideas that could be implemented, or one or two truly excellent ones. It was not the number of ideas, or how big their pilot activities were, that would determine how successful ‘innovation in local development’ would be, but what those ideas could do. The project team was quick to understand that as soon as we set a specific numerical or policy target, the result would no longer be innovative. It would no longer be driven by ideas from government and civil society, but by international good practice and the development requirement that we measure everything.
There was also the issue of how innovation would be defined. An innovation does not necessarily need to be ‘shiny and new’, but it does need to be effective and workable. And whether the ideas ended up being scalable or not, the entire process needed to be something we could learn from. Working out how to measure all of this with a logframe felt like one gigantic web of complications and headaches.
My approach was to look at all the methods of development monitoring ‘out there’ (i.e. Google). When it came to tracking policy dialogue (and how policy ideas could be piloted to improve local development), outcome mapping seemed the most appropriate way forward. I created a step-by-step tool that the project team could use on an annual basis to map the results of policy dialogue in support of local development. The tool was based on the type of information the project team had access to, the people the team would be allowed to speak to, and the capacity within the team to implement it (context is key). Everyone was very happy with the tool: it was user-friendly and adaptable between urban and rural governments. The big question was how to link it to the logframe.
In the end, we opted to set targets on learning, such as the number of lessons-learned reports the project team would produce during the life of the project (at the mid-term and at the end). At its core, innovation is about learning: what works, what does not, and why. Surprisingly, there was not a lot of pushback on having targets that were not a direct reflection of ‘what had been done’ by the project. Personally, I found the entire process refreshing!
I completed the assignment even more convinced than I already was that, despite the push to change what we measure in development, we will never be effective at it unless those driving the development process (donors, big organisations) really commit to moving beyond the ‘safe’ logframe (which allows them to account for every cent spent). As long as we continue to stifle innovation by needing to know, in advance, what the outcome will be, we will only be accountable to those holding the money and not to those who are supposed to benefit from development. Until this change in mindset happens at the top of the development pyramid, we will remain ‘log-framed’ into a corner that we cannot escape, because we have conditioned ourselves to think that the only success that counts is the success we have predicted.
Denika is a development and conflict analyst, and independent M&E consultant based in Bangkok.
Personal blog: http://theoryinpracticejournal.blogspot.com/
By the AES blog team
The Launceston conference certainly set us some challenges as evaluators. The corridors of the Hotel Grand Chancellor were abuzz with ideas about how we can transform our practice to make a difference on a global scale, harness the power of co-design on a local level, take up the opportunities presented by gaming, and ensure cultural safety and respect. Since then, the conversations have continued in blogland. Here’s what some of our members had to say.
Elizabeth Smith, Litmus: The shock and awe of transformations: Reflections from AES2018 Conference – on the two challenges that struck a chord: the need to transform evaluation in Indigenous settings and support Indigenous evaluators and the need to focus globally and act locally to transform the world https://www.linkedin.com/pulse/shock-awe-transformations-reflections-from-aes2018-conference-smith/
Charlie Tulloch, Policy Performance: Australian Evaluation Society Conference: Lessons from Lonnie – on the evolution of AES conferences, from presentations about projects to sharing insights, including from failures and challenges https://www.linkedin.com/pulse/australian-evaluation-society-conference-lessons-from-charlie-tulloch/
Fran Demetriou, Lirata Consulting: AES 2018 conference reflections: power, values, and food – on the experience of an emerging evaluator and all those great food metaphors https://www.aes.asn.au/blog/1474-aes-2018-conference-reflections.html
ARTD team: Transforming evaluation: what we’re taking from #aes18LST – on the very different things that spoke to each of us, from the challenge to ensure cultural safety and respect to leveraging big data and Gill Westhorp’s realist axiology https://artd.com.au/transforming-evaluation-what-we-re-taking-from-aes18lst/16:216/
Natalie Fisher, NSF Consulting: Australasian Evaluation Conference 2018 – Transformations – on measuring transformation (relevance, depth of change, scale of change and sustainability), transforming our mindsets and capabilities, the power balance and how we write reports http://nsfconsulting.com.au/aes-conference-2018/
Joanna Farmer, beyondblue: Evaluating with a mental health lived experience – on the strengths and challenges this brings, and breaking the dichotomy between evaluator and person with lived experience by being explicit about values and tackling power dynamics https://www.linkedin.com/pulse/evaluating-mental-health-lived-experience-joanna-farmer
Byron Pakula, Clear Horizon: The blue marble flying through the universe is not so small... – on Michael Quinn Patton’s take-outs: transformation should hit you between the eyes, and we should assess whether an intervention contributed to the transformation https://www.clearhorizon.com.au/all-blog-posts/the-blue-marble-flying-through-the-universe-is-not-so-small.aspx
David Wakelin, ARTD: AES18 Day 1: How can we transform evaluation? – on how big data may help us transform evaluation and tackle the questions we need to answer, without losing sight of ethics and the people whose voice we need to hear https://artd.com.au/aes18-day-1-how-can-we-transform-evaluation/16:215/
Jade Maloney, ARTD: How will #aes18LST transform you? – on Michael Quinn Patton’s call to action – evaluating transformations requires us to transform evaluation – the take-outs from Patton and Kate McKegg’s Principles-Focused Evaluation workshop https://www.aes.asn.au/blog/1466-how-will-aes18lst-transform-you.html
Jess Dart, Clear Horizon: Values-based co-design with a generous portion of developmental evaluation – on Penny Hagen’s tools that integrate design and evaluation, including the rubric and card pack developed for assessing co-design capability and conditions https://www.clearhorizon.com.au/all-blog-posts/values-based-co-design-with-a-generous-portion-of-developmental-evaluation.aspx
AES Blog Working Group: Eunice Sotelo, Jade Maloney, Joanna Farmer and Matt Healy