
This is the AES Blog, where we regularly post articles by the Australasian evaluation community on the subjects that matter to us. If you have an idea, please contact us by email. Blog guidelines can be found here.



October 2018
By Fran Demetriou


The theme of transformations resonated with me. I’m relatively new to evaluation and it’s been an intense journey over the last two years in learning about what evaluation is and how to go about it well. This conference (my first ever evaluation conference) was a pivotal point in that journey.

As an ‘emerging evaluator’, my first question was… ‘what does that mean?’ I participated in one of the emerging evaluator panels, where one of the facilitators, Eunice Sotelo, did some excellent miming of the concept (I can’t do it justice in text, so you’ll have to ask her nicely to demonstrate it). An audience member in the session called us caterpillars, following on from butterfly references in Michael Quinn Patton’s inspiring opening plenary. I’m not sure we have a working definition of transformation yet, but I’ve got some good imagery.

This caterpillar came to the conference with a good grounding in evaluation, but with a lot more to understand, including where I was at and what I needed to do to develop.

Here’s what I’ve taken away from my first AES conference:

Community spirit and failing forwards
I was struck by the diversity of content in the sessions. There is so much to learn about and so much innovation underway to enable us to better address complex social problems. This felt overwhelming as a newcomer, but I was comforted to find a community of evaluators at the conference who wanted to share, collaborate and learn from one another.

It was great to have so many interactive sessions to enable those connections. As an emerging evaluator, I also appreciated the effort the conference made to welcome us into the community, focus on our development, and provide platforms for our perspectives on opportunities to develop the sector.

The emphasis on learning from failure was valuable. One of my conference highlights was Matt Healey’s interactive session (Learning from failure: A Safe Space Session) where, under the Chatham House Rule, evaluators with various backgrounds, specialisations and levels of experience shared some of those facepalm moments. It was comforting to know others had made similar mistakes to me, but even more beneficial to learn from others’ mistakes so I can avoid them in my own practice.

I learned that, as we continue to transform our practice to tackle complex problems, there are going to be failures along the way – and that’s ok, so long as we recognise them, learn and adapt. I went along to the panel session Umbrellas and rain drops: Evaluating systems change lessons and insights from Tasmania and listened as a highly experienced team shared the challenges they have encountered implementing systems change through the Beacon Foundation in Tasmanian schools. For me, it helped surface the importance of having strong relationships with partners and funders who are willing to fail forwards with us. 

We have power! Let’s share it, empower others and be ready to let go
The conference reiterated for me the power that we hold as evaluators. We have the power to influence who is included in evaluations, and how – and we need to push back to make sure those who are affected by decisions are involved meaningfully in the process.

Through some enlightening role play, the session We are Women! We are Ready! Amplifying our voice through Participatory Action Research (Tracy McDiarmid and Alejandra Pineda from the International Women’s Development Agency) helped me to reflect on the ever-present power dynamics between evaluation stakeholders, and how to critically assess and address them to ensure stakeholders are included.

I learned that power isn’t just about how you include stakeholders, but what you bring to each evaluation through your own identity, and the often unstated cultural values you hold. A challenge I will be taking back to my practice is to be more critically aware of my own identity and the impact it has on evaluations I work on.

These conversations and discussions were summed up for me in Sharon Gollan’s and Kathleen Stacey’s plenary with the galvanising question: “When will YOU challenge the power when it is denying inclusion?”

It’s all about values
Very much connected to power is whose values are heard and counted in an evaluation. I went to several sessions dedicated explicitly to values in evaluations. It was exciting to see both the development of theory and the sharing of practical tools for eliciting values in evaluations.

In their plenary, Sharon Gollan and Kathleen Stacey provided a reminder that the benchmark for doing evaluation has been defined by the dominant culture. This was a powerful insight for me – it seems obvious, but it’s something easily overlooked. The way we undertake evaluation has cultural values embedded deep within it, and we must take care to think about the suitability of our approaches, especially with Indigenous communities.

Being able to elicit values at each stage of an evaluation is a separate challenge altogether from understanding that they are important. It was great, then, to have several sessions focused on identifying different types of values, articulating values approaches, specifying where values fit into an evaluation (at the start, and then they permeate everything), and how to work with these values, especially in culturally appropriate ways.

We like food metaphors
And finally, we must be a hungry bunch, because the sessions were peppered with food references. 

Some savoury metaphors included policy being described as spaghetti, with evaluation making it a bento box (Jen Thompson in Traps for young players: a panel session by new evaluators for new evaluators), and a key takeaways slide with a pizza image (Joanna Farmer in When an evaluator benefits: the challenges of managing values and power in evaluating with lived experience).

Pudding was offered up by Jenny Riley and Clare Davies’ appetisingly named Outcomes, Dashboards and Cupcakes, and by Matt Healey’s ignite session on evaluators as cake, Just add water: The ingredients of an evaluator.

My favourite food reference, reflecting the importance of power and values, was from one of the panellists in Developmental evaluation in Indigenous contexts: transforming power relations at the interface of different knowledge systems: “If you’re not at the table, you’re on the menu”.

What’s next?
I don’t know about you, but I certainly feel well nourished! 

I’ll be transforming my work to better address values, power and inclusion, and I look forward to the Emerging Evaluators Special Interest Group kicking off soon so I can continue learning with and from others.

Thanks for a great first conference, and I look forward to seeing you in Sydney next year!

Fran Demetriou works at Lirata Consulting as an Evaluator, and volunteers as an M&E advisor for the Asylum Seeker Resource Centre’s Mentoring Program. 
LinkedIn: https://www.linkedin.com/in/francesca-demetriou-975345a5/
Twitter: @Fran_Demetriou

September 2018
By Ruby Fischer


Evaluations are like diets – you know they’re good for you, you always start off with good intentions and desperate optimism, but eventually you slip back into your old habits.

So how do you stick to them?

Here are 5 tips from AES NSW’s latest seminar on how NGOs can stick with evaluation in our do-more-with-less world.

1. Evaluation is a lifestyle, not a quick fix
Evaluation shouldn’t be an afterthought, and you shouldn’t do it just because everyone else is doing it. Evaluation needs to be embedded in your culture. That means everyone needs to be 100% committed.

And that means, as the evaluation champion, you need to tell the right story. Effective storytelling embeds the evaluation change in your organisation’s DNA. The word evaluation is often met with the ‘I just ate a sour lemon’ look. Positioning your evaluation as being about continuous improvement rather than judgement of the team can help ensure they take the little steps for long-lasting change, like encouraging feedback during service delivery.

It definitely means answering the question, ‘why does this matter?’

2. Find your purpose and motivation
So why does the evaluation matter? Is the evaluation for accountability? Is it for learning and development? Is it answering the questions: how much did we do, how well did we do it, what difference did we make?

Take some time to really get your head around your purpose. This makes it easier to keep your motivation high and helps you design your evaluation by focusing on what you need to know.


3. Be realistic
You are probably not going to become a spinach-smoothie-drinking, marathon-running yogi master overnight. Similarly, you are not going to solve Australia’s most wicked social issues, no matter how awesome your program may be. You need to have realistic evaluation expectations given your limited time and budget, and you need to manage funders’ expectations as well.

Ask yourself: what can we reasonably expect to achieve? What outcomes reflect those reasonable expectations, and how do we measure them? Your evaluation needs to be fit for purpose, for community and for resources. Remember, it is better to measure a few things well than to measure many poorly.

4. Phone a friend
Friends are super helpful. Use them. Leverage your network. Do you know experts? What online resources can you use? Hepatitis NSW, who presented a fantastic case study at the NSW AES seminar, checked in with an academic about their survey structure, which improved its readability and questions. Their response rate doubled from 10% to 20%.

Remember partnerships are a two-way street. You’ve got a lot of value to offer too. Don’t forget it!

5. Incentivise
A little incentive goes a long way. It’s no surprise that incentives increase response rates. Used tactically, they’re also really cost-effective.

It’s always important to say thank you for the time your client took to provide feedback. The incentive could be a chance to win a voucher or a little present. Know your target group, and you will know an appropriate incentive.

What lessons have you learnt on your evaluation journey? Let us know by email.

Ruby is a consultant at Nexus Management Consulting.

September 2018
By Jade Maloney


Our world is transforming at a dizzying rate. What does this mean for evaluation and, by extension, evaluators?

That’s the question posed by the 2018 Australasian Evaluation Society conference in Launceston this week. So what do our keynotes think?

Kate McKegg – well known for her work advancing developmental evaluation practice – asks us to think deeply about what we really mean when we say transformation. What might the dimensions be? What exactly is it we are trying to transform: people, places, practices, structures, systems, technologies or something else? Does it have to be global? Or does what occurs at the national, regional, local, family or individual level count? Will we recognise transformation for what it is as it happens and be able to capture it? Can we really deliver transformation or does it have to be experienced?

McKegg’s co-presenter, Michael Quinn Patton (of utilisation-focused, developmental, and now principles-focused evaluation fame), tells us that evaluating transformation means transforming evaluation and lays down a challenge. Is evaluation going to be part of the problem (maintaining the status quo) or part of the solution (supporting and enabling transformation)?

The pair’s pre-conference workshop had everyone buzzing, both those who had read Principles-Focused Evaluation from cover to cover and those who were new to the concept. Participants learned the distinction between rules, where the focus is on compliance and there is no need for interpretation, and principles, which provide guidance and direction but need to be interpreted within specific contexts. They also learned about layering principles, and that less is more in both number and description.

For Lee-Anne Molony, Managing Director at Clear Horizon, who chaired the session, a quote from William Easterly (The Tyranny of Experts) neatly summed up the value of taking a principles-based approach: ‘It is critical to get the principles of acting right before acting’. This plays out most in good ‘design’, but as evaluators our role is to support the process of ensuring those ‘right principles’ are clarified well enough that they are meaningful and relevant (providing sufficient guidance for decision makers); can be adhered to (at least in theory); and that the results they would produce if adhered to are clear (or can be determined).

For Keryn Hassall, one of the participants, principles-focused evaluation offers an opportunity for transforming evaluation practices and for supporting more sophisticated program management. Principles are the best way to guide decisions in complex, adaptive contexts where there are no easy answers to how to solve problems. Programs where the journey is just as important as the destination can look like failures when evaluated using government evaluation guidelines that focus on reporting on specified outcomes. Learning about principles-focused evaluation helps evaluators deepen their role in helping program managers deliver meaningful programs.

But principles-focused evaluation is only one of the ideas on the table. Penny Hagen is strengthening the relationship between co-design and evaluation, Karol Olejniczak is getting us to gamify, and Sharon Gollan and Kathleen Stacey are asking us to apply the lens of cultural accountability to ensure evaluation is culturally safe.

With all of this on offer, you’d be hard-pressed not to find a way to transform your practice by the end of the week.

Thanks to aes18 conference convenor Jess Dart for coordinating input from keynotes and Eunice Sotelo for curating the questions.

Jade is a partner at ARTD Consultants.

A practitioner’s take on developmental evaluation

September 2018
By Zazie Tolmer


Late last year an opportunity came up for a Clear Horizon consultant to work full time as an embedded evaluator in a Collective Impact initiative. I jumped at the opportunity and have been part of the backbone team for the last eight months.

Over that time, the way I approach my practice has changed considerably and I finally feel that I am getting a handle on what it feels like to be a Developmental Evaluator. I have learnt:

To go where the positive energy is – Rather than trying to situate evaluation through planning, I focus on the pockets of people where there is real current interest in drawing on evaluative thinking, for example, wrapping evaluation around a small prototype or developing a theory of change. This provides a place to work from that is immediately useful and creates demand for evaluation. I tried initially to drive the scoping and development of an M&E framework and plan for the initiative, but I did not get very far, fast! Collective Impact initiatives operate deliberately in complexity, and the rationalisation that is required in more standard M&E planning goes against the grain.

To do Russian Doll evaluation – When you don’t have a plan but you want to start, it can help to do a small discrete evaluation project first. For example, an exploratory qualitative study looking at partners’ perceptions and experience of the impacts of the work. Once you have one piece completed, you can start to spring off it into other evaluative projects. It can be really hard to reconcile all the different evaluation needs and tensions on a Collective Impact initiative. I have found that if you produce one evaluative output, the rest of the backbone and partners: a) understand concretely what evaluation can do; and b) are better able to articulate what they might need next from evaluation. In my mind, this approach to evaluation looks like a Russian doll set where you start with the smallest doll and keep building on it and wrapping more evaluation layers around it, until you have built up your evaluation to cover the full initiative.

To listen, keep listening and never stop listening – I have learnt to leave the consultant in me at the door and to be guided by the group rather than taking the lead in my area of ‘expertise’. The group have a much better understanding than I do of what is needed next. My job is really to listen out for where there might be an opportunity for an evaluative piece and to translate this into something that can be delivered within the time and resourcing constraints. I’m also learning to leave ‘me’ out of it. For example, I have stopped thinking of the evaluation pieces as my work and am emphasising quality (honestly, the best that can be done with the available resources and time) over rigour (bullet-proof evidence).

Listening also means anticipating. The evaluation work I do for the initiative I am working on includes evaluation pieces that have been identified together and others that are bubbling along in the background ready to be shared when the timing is right. These pieces are more like hunches that sometimes work out and sometimes don’t. When they do, they create good momentum!

At this year’s AES conference in Launceston, the team and I will be presenting on the transformative power of Developmental Evaluation.

Zazie is a principal consultant at Clear Horizon.