
Patricia Rogers, in conversation

by Anthea Rutter

While Patricia Rogers is one of the most recently named Fellows, many of you will be familiar with her work from AES conference keynotes, BetterEvaluation, and her report (with Greet Peersman) on pathways to advance professionalisation within the context of the AES. She is Professor of Public Sector Evaluation at RMIT University and an award-winning evaluator, well known around the world.

While she is one busy lady, I managed to catch her at the last conference in Launceston, which was apt because conferences were a key thread in her reflections. 


Patricia talked to me about her interest in different models of evaluation and her passion for finding ideas that would make a difference. One of those ideas was Michael Scriven’s goal-free evaluation.

In 1986 I was working in Sydney but about to move back to Melbourne to work in local government.  The AES conference was on in Sydney – I hadn’t heard about it, but I went to meet up with some people after Michael Scriven’s opening keynote and saw people in uproar over the notion that you could and perhaps should evaluate without reference to the stated goals.

That was my first introduction to the AES.  The following year I went to the AES conference in Canberra and was introduced to program logic, as being done by Brian Lenne, Sue Funnell and others in NSW.


Patricia went on to write a book on program logic with Sue Funnell, Purposeful Program Theory.


What are your main interests now?

I’m interested in all sorts of ways that evaluation, and evaluative thinking, can be more useful. I guess I’m particularly interested in how to develop, represent and use theories of change. At first, I was interested in theories of change in terms of developing performance indicators, but then I learned how useful they could be for helping people have a coherent vision of what they are trying to do, for bringing together diverse evidence, and for supporting the adaptation of learning from successful pilots to other contexts.

Another area of ongoing interest for me is how to address complexity.  Again this stemmed from an AES conference – I can see a common thread here!  I was puzzling over how to make sense of an evaluation involving hundreds of projects with diverse types of evidence.  Michael Patton gave a keynote drawing on Brenda Zimmerman’s ideas about simple, complicated and complex. It gave me a way to distinguish between different types of challenges in evaluation and different strategies to address them.

Who has influenced your wide-ranging interests?

The AES has been pivotal. I was reading down the list of Fellows, and I really felt pleased that I know them all, have worked with a lot of them and respect them. I have learnt from conference sessions and had helpful feedback, mentoring and peer support – that sort of generosity and friendship. In terms of individual people, Jerry Winston’s insights into evaluation have been amazing. I met him 30 years ago when I started teaching at Phillip Institute (now RMIT University). His thinking about systems, his view of evaluation as scientific enquiry, and his use of adult learning principles for evaluation and evaluation capacity building were way ahead of everyone else. In many ways I’m still catching up to and understanding his thinking.

In terms of practice and theory, Michael Patton’s work has also resonated with me. I value his consistent focus on making sure evaluation is useful, especially through actively engaging intended users in evaluation processes, his use of both quantitative and qualitative data, and his incorporation of new ideas from management and public administration into his practice.

Evaluation has changed a lot over the 30 years Patricia has been in the field. What has she noticed most?

One of the problems is that while the field of evaluation has changed, the common understanding of evaluation has not always kept up. So misconceptions persist, such as the idea that evaluation is only about measuring whether goals have been achieved. There is also a perception of evaluation as low-quality research – that is, if you can’t make it as a serious researcher, you do low-quality research and call it evaluation. In fact, good-quality evaluation, which needs to be useful and valid and ethical and feasible all at the same time, is enormously challenging and also potentially enormously useful to society – not just in terms of producing findings, but in supporting people to think and work together to identify what is of value and how it might be improved.


I agree that evaluation is never an easy endeavour, so it is reassuring to hear from others that it doesn’t always go smoothly – and that you can recover.


What has been one of your biggest challenges?

One of my biggest disappointments was when I was working with a government department which had commissioned a big evaluation of a new initiative, but the people who had asked for it had moved on. The department was still obliged to do the evaluation to meet Treasury requirements, but they were not at all keen on it. I asked to meet with the senior management and tried to use an appreciative inquiry approach to identify what a good evaluation might be for them, and how we might achieve that. I asked them, ‘Tell me about an evaluation which has really worked for you.’ There was a long silence, and then they said they couldn’t think of any. It’s hard when people have had such a negative experience of evaluation that they can’t imagine how it could be useful. In hindsight, I should have called out the issue – and either got commitment to the evaluation or walked away.

Patricia and I talked about the skills and competencies evaluators need today so that they can keep up with emerging trends. This led us to Ikigai – finding the intersection of what you like doing, what you are good at, what the world needs and what you can get paid for.

Getting this right, we agreed, would help you jump out of bed in the morning.


What do evaluators need today?

Evaluators all need to keep learning about new methods, new processes and new technologies. It’s not just about summarising surveys and interviews any more. We need to take the leap into digital technology and crowd-sourced data and learning. For most people, it would be useful to learn more about how to use new technologies, including digital tools, to gather, analyse and report data, and to support evaluative thinking.

Another important skill and competency is managing uncertainty, for yourself and your team, as situations and requirements will change over time.

Most of us also need to learn more about culturally responsive evaluation and inclusive practice, including being more aware of power dynamics and having strategies for addressing them.

We need to be engaged in ongoing learning about new ways of doing evaluation and new ideas about how to make it work better. That’s why my work is now focused on the BetterEvaluation project, an international collaboration which creates and shares information on ways to do evaluation better.

Beyond continuous learning, what do evaluators need to focus on over the next decade? What issues do they need to resolve?

It’s about democracy. It’s about being inclusive and respectful, supporting deliberative democracy, and working out what that means in practice. We should be ensuring that the voices of those less powerful – for example, Indigenous groups and migrants – are heard, and that they are part of the decision-making.

The last question I asked Patricia – and can I say that this was mainly answered on the run, literally as I walked with her to the session she was chairing! – was about the AES’s role in the change process.

The AES has an important role to play in improving professional practice in evaluation (including by evaluators and those managing evaluations). My colleague Greet Peersman and I have just produced a report for the AES on Pathways to Professionalisation, which includes a discussion of the positioning of the AES. We need more people to know about the AES, more people engaged in AES events like the conference, and more AES people engaged in public discussions.

How can we make the conference more accessible – for example, through more subsidised places or lower-cost options? How can the AES be more involved in discussions about public policy and service delivery?


-------------------------------------------------------------

Patricia Rogers is Professor of Public Sector Evaluation at RMIT University, currently on three years’ leave to lead the evidence and evaluation hub at the Australia and New Zealand School of Government.
