by Anthea Rutter
Interviewing John was a pleasure for me. He was my teacher at the Centre for Program Evaluation back in the 90s. Indeed, John and his colleagues have taught a large number of the members of the AES over the years. John and I have also worked on projects together. Even though we have a shared history, I was curious to find out what brought him into the field of evaluation in the first place.
I was at the Australian Council of Educational Research in 1975. I was asked to be the officer in charge of a national evaluation of a science education curriculum in secondary schools. However, I had no knowledge of evaluation, so in order to do the project, I started reading books about evaluation and how I could translate some of these ideas into a framework, so I could undertake the study. My background was in science, in particular physics, and in my last couple of years at the Melbourne College of Advanced Education, I got interested in science education and taught courses for aspiring teachers.
John’s knowledge in the field of evaluation is vast and I was keen to find out what he regarded as his main area of interest.
Theories of evaluation. I was concerned that traditional evaluation did not make much of a difference to the planning and delivery of interventions, so I became interested in the utilisation of evaluation findings, and how those findings could be used to improve policies and programs. More generally, I was engaged in research and knowledge utilisation, including the factors that affected take-up of this kind of knowledge.
I felt strongly that someone who has been in a field for a number of years must have had challenges to his practice, and John was no exception.
I guess even though I had done this project at ACER I wasn’t aware of the breadth of the thinking about evaluation that was emerging in the 1980s, so when I came back to Melbourne College of Advanced Education, the Principal asked Gerry Ellsworth and me to set up the Centre for Program Evaluation around that time. The challenge for us was to actually decide how a Centre would work and how we could incorporate all of the emerging theories into a coherent package for a teaching course. We knew that we had an opportunity to offer something that was not offered in Australasia. There was a lot of new learning going on. The challenge was to put it together and make it make sense to people about to work in the field of evaluation. The other challenge was political. The course was not just for teachers. We tried to protect ourselves in the institution. My challenge was to actually see myself as a teacher of evaluation, which was different from being a teacher of science education.
When I first came back from ACER, they asked me to be the coordinator of a graduate course in curriculum. I had worked in innovation and change. I managed to integrate my work in innovation and change into the evaluation program.
Apart from challenges, a career as broad as John’s must have had a number of highlights and so I asked John about the major ones.
I guess we are talking about post PhD – for me getting a doctorate was a highlight. After that, I guess, when I became Director of the Centre [for Program Evaluation at The University of Melbourne]. One highlight was working to develop a distance education course in evaluation. Once again, this was something new: we had a new Centre that had been operating for a while, which was innovative, and now we were thinking of an innovative offering in teaching. Actually, I learnt a lot about the evaluation field in Australia from the AES. Getting that course up and running was a highlight. Another highlight was being made a Fellow of the Society – I was very thrilled by that acknowledgement – and the consequent involvement in the Society. I have really enjoyed the Society, which has been effective in promoting and maintaining the profession.
For most of us, we are not lone operators and there are a number of influences – individuals as well as different evaluation or research models that have influenced our practice. I wanted to find out from John what he considers were the major influences that really helped to define his practice.
The notion of evaluation for decision making underlies my practice. In terms of people and models, I do remember coming across Dan Stufflebeam’s CIPP model. If I was looking for a conceptual influence, it may be that one. He had the notion of context, input, process and product. Another influence came when I was concerned about how to evaluate a complex problem and then suddenly came across program logic ideas. Program logic was not heavily used until the 90s. Possibly Joe Wholey had referred to it. Now I understood how to unpack the intervention. Possibly having a scientific background helped me to understand the logic approach.
For those of us who have been in the evaluation field for a long time we are aware that changes occur in practice and I was keen to get John to reflect on them.
When I first started reading about evaluation it was about measuring impact, using the rigid methods of determining impacts implied by quantitative approaches to evaluation. Since then the field has expanded and been influenced by thinkers such as Michael Patton, and by the emphasis on utilisation from Marv Alkin – so in a sense evaluative inquiry could be used to influence programs as they were being delivered rather than as an assessment at the end. My book [Program Evaluation: Forms and Approaches] summarises my view of these things.
The notion of skills and competencies is very important to John’s role as a teacher, so I wanted to find out what he saw as the main skills or competencies the evaluator of today needs to keep pace with emerging trends in practice.
First of all, they need skills and competencies. There seems to be a view among certain organisations that anyone can do evaluation. There are two sets of skills. One relates to epistemology, which gets to what knowledge is needed, and to the different models which could be used. The second set is methodological skills: at least an understanding of data management, and being able to be creative in designing methodologies which help you compile the information from which you can draw findings and conclusions. Evaluators also need the attitude that they can refine their methodology when they need to. I am sure there are methodologies associated with technology which need to be learned. But there is a basic underlying rationale.
During John’s time as an evaluator and teacher I felt that he must have reflected on some of the social and political issues which we as a profession ought to be thinking about and trying to resolve.
Perhaps if I was going to put some energy into something: I think that evaluation in government is still at a basic level. I think in the helping professions, education and social interventions, we have a pretty good track record, but I don’t think that we have tackled the big problem of how government departments deal with evaluation, i.e. the feedback loops around collecting data, producing findings and using them. Perhaps it is because these organisations are larger and more complex, but I don’t see much accountability. There is general research around which shows that there is little effort to use evaluation in designing and delivering programs. So, this is a major issue for the AES and for leaders in government to be looking at.
The AES has been an important part of John’s life and so I felt he would have views on how the Society can best position itself to still be relevant in the future. I was not wrong!
I have a strong view about this. To position ourselves, we should make more links with societies which have cognate interests, so we can have more influence on the work of evaluation and applied research – by talking to people like auditors and market researchers, for instance. There are groups out there who could have an indirect influence on the Society. We need to work on making links and partnerships, and on policies which acknowledge that evaluation could be an umbrella approach useful to other professional organisations. I have long held that position. You hear about the huge conferences which auditors have. We should be in there talking to these people, and telling them that our knowledge could benefit them.
What do you wish you had known before starting out as an evaluator?
I would have benefitted from a graduate course or subject in sociology, particularly one that dealt with the sociology of knowledge. Unfortunately such courses were not readily available at university, and even if they had been, they would not have readily meshed with my science studies. Perhaps a course on the philosophy of science would also have been good, so that I could have come to grips with giants like Popper and Russell.
John Owen has 40 years of experience in evaluation and is currently a private consultant. His major roles in evaluation include Director at the Centre for Program Evaluation, as a teacher, and presenter at workshops.