
Thinking outside the logframe: M&E frameworks for ‘innovative’ development projects


By Denika Blacklock

I have been working in development for 15 years and have specialised in M&E for the past 10. In all that time, I have never been asked to design an M&E framework for, or undertake an evaluation of, a project that did not focus entirely on a logframe. Understandably so: it is a practical tool for measuring results – particularly quantitative results – in development projects.

However, as the drive for greater development effectiveness and, thankfully, more accountability to stakeholders has progressed, measuring what we have successfully done (versus what we have successfully changed or improved) requires more than just numbers. More concerning is the fact that logframes measure linear progression toward preset targets. Any development practitioner worth their degree can tell you that development – and development projects – are never linear, and that at our best we are guessing at what our output targets could conceivably be under ideal conditions, with the resources (money and time) available to us.

I have lately found myself faced with the challenging scenario of developing M&E frameworks for development projects in which ‘innovation’ is the objective, but I am required to design frameworks with old tools like logframes and results frameworks (organisational/donor requirements) which cannot accommodate actual innovation in development.

The primary problem: logframes require targets. If we set output targets, then the results of activities are preconceived, not innovative. Target setting moulds how we design and implement activities. How can a project be true to the idea of fostering innovation in local development with only a logframe at hand to measure progress and success?

My argument was that if the project truly wanted to foster innovation, we needed to 'see what happens', not decide beforehand what would happen by setting targets. I also argued that a target of 'x number of new ideas for local development' was an ineffective (if not irresponsible) way of being 'open-minded about measuring innovation'. There could be 15 innovative ideas worth implementing, or one or two truly excellent ones. It was not the number of ideas, or the size of their pilot activities, that would determine how successful 'innovation in local development' would be, but what those pilots could achieve. The project team was quick to understand that as soon as we set a specific numerical or policy target, the results would no longer be innovative. The process would no longer be driven by ideas from government and civil society, but by international good practice and the development-sector requirement that we measure everything.

There was also the issue of how innovation would be defined. It does not necessarily need to be 'shiny and new', but it does need to be effective and workable. And whether or not the ideas ended up being scalable, the entire process needed to be something we could learn from. Working out how to measure this with a logframe felt like one gigantic web of complication and headaches.

My approach was to look at all the methods of development monitoring 'out there' (i.e. Google). When it came to tracking policy dialogue (and how policy ideas could be piloted to improve local development), outcome mapping seemed the most appropriate way forward. I created a tool (Step 1, Step 2, etc.) that the project team could use annually to map the results of policy dialogue in support of local development. The tool was based on the type of information the project team had access to, the people they would be allowed to speak to, and the capacity within the team to implement it (context is key). Everyone was very happy with the tool: it was user-friendly and adaptable between urban and rural governments. The big question was how to link it to the logframe.

In the end, we opted to set targets on learning, such as the number of lessons-learned reports the project team would undertake during the life of the project (at the mid-term and at the end). At its core, innovation is about learning: what works, what does not, and why. Surprisingly, there was not a lot of pushback on having targets that were not a direct reflection of 'what had been done' by the project. Personally, I found the entire process refreshing!

I completed the assignment even more convinced than I already was that, despite the push to change what we measure in development, we will never be effective at it unless those driving the development process (donors, big organisations) truly commit to moving beyond the 'safe' logframe (which allows them to account for every cent spent). As long as we continue to stifle innovation by needing to know – in advance – what the outcome will be, we will only be accountable to those holding the money, not to those who are supposed to benefit from development. Until this change in mindset happens at the top of the development pyramid, we will remain 'log-framed' into a corner we cannot escape, because we have conditioned ourselves to think that the only success that counts is the success we predicted.

Denika is a development and conflict analyst, and independent M&E consultant based in Bangkok.
Personal blog: http://theoryinpracticejournal.blogspot.com/
Twitter: @DenikaKarim 

Post image: Word cloud image sourced from Google


