This is the AES Blog, where we regularly post articles by the Australasian evaluation community on the subjects that matter to us. If you have an idea, please contact us by email. Blog guidelines can be found here.



September 2018
By Jade Maloney


Our world is transforming at a dizzying rate. What does this mean for evaluation and, by extension, evaluators?

That’s the question posed by the 2018 Australasian Evaluation Society conference in Launceston this week. So what do our keynotes think?

Kate McKegg – well known for her work advancing developmental evaluation practice – asks us to think deeply about what we really mean when we say transformation. What might the dimensions be? What exactly is it we are trying to transform: people, places, practices, structures, systems, technologies or something else? Does it have to be global? Or does what occurs at the national, regional, local, family or individual level count? Will we recognise transformation for what it is as it happens and be able to capture it? Can we really deliver transformation or does it have to be experienced?

McKegg’s co-presenter, Michael Quinn Patton (of utilisation-focused, developmental, and now principles-focused evaluation fame), tells us that evaluating transformation means transforming evaluation and lays down a challenge. Is evaluation going to be part of the problem (maintaining the status quo) or part of the solution (supporting and enabling transformation)?

The pair’s pre-conference workshop had everyone buzzing, both those who had read Principles-Focused Evaluation from cover to cover and those who were new to the concept. Participants learned the distinctions between rules – where the focus is on compliance and there is no need for interpretation – and principles – which provide guidance and direction, but need to be interpreted within specific contexts. They also learned about layering principles and that less is more in both number and description.

For Lee-Anne Molony, Managing Director at Clear Horizon, who chaired the session, a quote from William Easterly (The Tyranny of Experts) neatly summed up the value of taking a principles-based approach: ‘It is critical to get the principles of acting right before acting’. This plays out most in good ‘design’, but as evaluators our role is to support the process of ensuring those ‘right principles’ are clarified well enough that they are meaningful and relevant (provide sufficient guidance for decision makers); can be adhered to (at least in theory); and that the results they would produce if adhered to are clear (or can be determined).

For Keryn Hassall, one of the participants, principles-focused evaluation offers an opportunity for transforming evaluation practices and for supporting more sophisticated program management. Principles are the best way to guide decisions in complex, adaptive contexts where there are no easy answers to how to solve problems. Programs where the journey is just as important as the destination can look like failure when evaluated against government evaluation guidelines that focus on reporting on specified outcomes. Learning about principles-focused evaluation helps evaluators deepen their role, supporting program managers to deliver meaningful programs.

But principles-focused evaluation is only one of the ideas on the table. Penny Hagen is strengthening the relationship between co-design and evaluation, Karol Olejniczak is getting us to gamify, and Sharon Gollan and Kathleen Stacey are asking us to apply the lens of cultural accountability to ensure evaluation is culturally safe.

With all of this on offer, you’d be hard-pressed not to find a way to transform your practice by the end of the week.

Thanks to aes18 conference convenor Jess Dart for coordinating input from keynotes and Eunice Sotelo for curating the questions.

Jade is a partner at ARTD Consultants.

A practitioner’s take on developmental evaluation

September 2018
By Zazie Tolmer


Late last year an opportunity came up for a Clear Horizon consultant to work full time as an embedded evaluator in a Collective Impact initiative. I jumped at the opportunity and have been part of the backbone team for the last eight months.

Over that time, the way I approach my practice has changed considerably and I finally feel that I am getting a handle on what it feels like to be a Developmental Evaluator. I have learnt:

To go where the positive energy is – Rather than trying to situate evaluation through planning, I focus on the pockets of people where there is real current interest in drawing on evaluative thinking, for example, wrapping evaluation around a small prototype or developing a theory of change. This provides a place to work from that is immediately useful and creates demand for evaluation. I tried initially to drive the scoping and development of an M&E framework and plan for the initiative, but I did not get very far, fast! Collective Impact initiatives operate deliberately in the complex, and the rationalisation that is required in more standard M&E planning goes against the grain.

To do Russian Doll evaluation – When you don’t have a plan but you want to start, it can help to do a small discrete evaluation project first. For example, an exploratory qualitative study looking at partners’ perceptions and experience of the impacts of the work. Once you have one piece completed, you can start to spring off it into other evaluative projects. It can be really hard to reconcile all the different evaluation needs and tensions on a Collective Impact initiative. I have found that if you produce one evaluative output, the rest of the backbone and partners: a) understand concretely what evaluation can do; and b) are better able to articulate what they might need next from evaluation. In my mind, this approach to evaluation looks like a Russian doll set where you start with the smallest doll and keep building on it and wrapping more evaluation layers around it, until you have built up your evaluation to cover the full initiative.

To listen, keep listening and never stop listening – I have learnt to leave the consultant in me at the door and to be guided by the group rather than taking the lead in my area of ‘expertise’. The group have a much better understanding than I do of what is needed next. My job is really to listen out for where there might be an opportunity for an evaluative piece and to translate this into something that can be delivered within the time and resourcing constraints. I’m also learning to leave ‘me’ out of it. For example, I have stopped thinking of the evaluation pieces as my work and am emphasising quality (honestly the best that can be done with the available resources and time) over rigour (bulletproof evidence).

Listening also means anticipating. The evaluation work I do for the initiative I am working on includes evaluation pieces that have been identified together and others that are bubbling along in the background ready to be shared when the timing is right. These pieces are more like hunches that sometimes work out and sometimes don’t. When they do, they create good momentum!

At this year’s AES conference in Launceston, the team and I will be presenting on the transformative power of Developmental Evaluation.

Zazie is a principal consultant at Clear Horizon.

September 2018
By Ruby Fischer


Evaluations are like diets – you know they’re good for you, you always start off with good intentions and desperate optimism, but eventually you slip back into your old habits.

So how do you stick to them?

Here are 5 tips from AES NSW’s latest seminar on how NGOs can stick with evaluation in our do-more-with-less world.

1. Evaluation is a lifestyle, not a quick fix
Evaluation shouldn’t be an afterthought, and you shouldn’t do it just because everyone else is doing it. Evaluation needs to be embedded in your culture. That means everyone needs to be 100% committed.

And that means, as the evaluation champion, you need to tell the right story. Effective storytelling embeds the evaluation change in your organisation’s DNA. The word evaluation is often met with the ‘I just ate a sour lemon’ look. Positioning your evaluation as continuous improvement rather than judgement of the team can help ensure they take the little steps for long-lasting change, like encouraging feedback during service delivery.

It definitely means answering the question, ‘why does this matter?’

2. Find your purpose and motivation
So why does the evaluation matter? Is the evaluation for accountability? Is it for learning and development? Is it answering the questions: how much did we do, how well did we do it, what differences did we make?
Take some time to really get your head around your purpose. This makes it easier to keep your motivation high and helps you to design your evaluation by focusing you on what you need to know.


3. Be realistic
You are probably not going to become a spinach-smoothie-drinking, marathon-running yogi master overnight. Similarly, you are not going to solve Australia’s most wicked social issues, no matter how awesome your program may be. You need to have realistic evaluation expectations given your limited time and budget, and you need to manage funders’ expectations as well.

Ask yourself: what can we reasonably expect to achieve? What outcomes reflect those reasonable expectations, and how do we measure them? Your evaluation needs to be fit for purpose, for community and for resources. Remember, it is better to measure a few things well than many poorly.

4. Phone a friend
Friends are super helpful. Use them. Leverage your network. Do you know experts? What online resources can you use? Hepatitis NSW, who presented a fantastic case study at the NSW AES seminar, checked in with an academic about their survey structure, which improved readability and the questions. Their response rate doubled from 10% to 20%.

Remember partnerships are a two-way street. You’ve got a lot of value to offer too. Don’t forget it!

5. Incentivise
A little incentive goes a long way. It’s no surprise that incentives increase response rates. Used tactically, they’re also really cost effective.
It’s always important to say thank you for the time your client took to provide feedback. The incentive could be a chance to win a voucher or a little present. Know your target group, and you will know an appropriate incentive.

What lessons have you learnt on your evaluation journey? Get in touch and let us know.

Ruby is a consultant at Nexus Management Consulting.

 

 

August 2018
By Jade Maloney


There’s still a chill in the air, but the days are starting to lengthen, and you can sense the promise of spring. Must nearly be time for another AES conference.

I remember my first one: Canberra, 2009. I was still ‘green’, 18 months after falling out of publishing and into a role in evaluation. Andrew Leigh had just come out with his proposal for a hierarchy of evidence to inform Australian policy making, and there was an afternoon panel, including Leigh himself, to discuss it. The proposal in itself was nothing new (it drew on models from medical research in the US and social policy in the UK), but it added fuel to the still-burning embers of the fire that was (is?) the methodology wars.

I didn’t yet know enough to unpack the arguments for and against the primacy of Randomised Controlled Trials (RCTs), but I couldn’t help thinking that there must be a more nuanced question and answer than the heated audience commentary suggested.

Fast forward to Canberra 2017. I’ve now got my own views about the kinds of questions that RCTs can and cannot answer, and I nearly choke on my chicken as economist Nicholas Gruen says RCTs are not the panacea they’re made out to be. We need to ask the right questions at the right times and choose appropriate methods to answer them. I agree.

The purpose of this trip down memory lane is not to ignite a methodological debate. It’s to say that the conference is a window into what matters to evaluation at the time. It seems to me that the discussion has also matured and that meaningful conversations at the conference – including candid discussions about failures – have an important role in developing the discipline. (I suppose discipline is the right term since we’re not technically a ‘profession’ – although the pathways to professionalisation project will lead us there).

So I’m excited about how the interactive sessions planned for Launceston 2018 will help us reflect on how we are transforming evaluation and what comes next.

As someone who spans the roles of design, implementation and evaluation, I’ll be jumping into sessions on design and discussions about what the rise of co-design means for the evaluator role and required competencies.

As someone who works in disability policy and has lived experience of mental health issues, I’m also keen to find out about how others are implementing participatory and empowerment approaches in practice, like Joanna Farmer’s session on the challenges of managing values and power in evaluating with a lived experience.

And as conference co-convenor for Sydney 2019, I’m keen to hear from other AES members what they love about the conference and what else might be possible. Because, after all, while keynotes and panellists can strike the match, only the participants can carry the torch through conversations.

Jade is a partner at ARTD Consultants.