
This is the AES Blog, where we regularly post articles by the Australasian evaluation community on the subjects that matter to us. If you have an idea, please contact us. Blog guidelines can be found here.



Fellows: John Owen
 

May 2019
by Anthea Rutter

Interviewing John was a pleasure for me. He was my teacher at the Centre for Program Evaluation back in the 90s. Indeed, John and his colleagues have taught a large number of the members of the AES over the years. John and I have also worked on projects together. Even though we have a shared history, I was curious to find out what brought him into the field of evaluation in the first place.

I was at the Australian Council for Educational Research (ACER) in 1975. I was asked to be the officer in charge of a national evaluation of a science education curriculum in secondary schools. However, I had no knowledge of evaluation, so in order to do the project I started reading books about evaluation and working out how I could translate some of these ideas into a framework so I could undertake the study. My background was in science, in particular physics, and in my last couple of years at the Melbourne College of Advanced Education I got interested in science education and taught courses for aspiring teachers.

John’s knowledge in the field of evaluation is vast and I was keen to find out what he regarded as his main area of interest.

Theories of evaluation. I was concerned that traditional evaluation did not make much of a difference to the planning and delivery of interventions, so I became interested in the utilisation of evaluation findings, and how policies and programs can be improved in terms of utilisation. More generally, I was engaged in research and knowledge utilisation, including factors that affected take-up of this kind of knowledge.

I felt strongly that someone who has been in a field for a number of years must have had challenges to his practice, and John was no exception.

I guess even though I had done this project at ACER I wasn’t aware of the breadth of the thinking about evaluation that was emerging in the 1980s, so when I came back to Melbourne College of Advanced Education, the Principal asked us to set up the Centre for Program Evaluation with Gerry Ellsworth around that time. The challenge for us was to actually decide how a Centre would work and how we could incorporate all of the emerging theories into a coherent package for a teaching course. We knew that we had an opportunity to offer something that was not offered in Australasia. There was a lot of new learning going on. The challenge was to put it together and make sense of it for people about to work in the field of evaluation. The other challenge was political. The course was not just for teachers. We tried to protect ourselves in the institution. My challenge was to actually see myself as a teacher of evaluation, which was different to being a teacher of science education.

When I first came back from ACER, they asked me to be the coordinator of a graduate course in curriculum. I had worked in innovation and change. I managed to integrate my work in innovation and change into the evaluation program.

Apart from challenges, a career as broad as John’s must have had a number of highlights and so I asked John about the major ones.

I guess we are talking about post-PhD – for me getting a doctorate was a highlight. After that, I guess, when I became Director of the Centre [for Program Evaluation at The University of Melbourne]. One highlight was working to develop a distance education course in evaluation. Once again, this was something new: we had a new Centre that had been operating for a while, which was innovative, and now we were thinking of an innovative offering in teaching. Actually, I learnt a lot about the evaluation field in Australia from the AES. Getting that course up and running was a highlight. Another highlight was being made a Fellow of the Society – very thrilled, an acknowledgement – and the consequent involvement in the Society. I really enjoyed the Society, which has been effective in promoting and maintaining the profession.

For most of us, we are not lone operators: there are a number of influences – individuals as well as different evaluation and research models – that shape our practice. I wanted to find out from John what he considered to be the major influences that really helped to define his practice.

The notion of evaluation for decision making underlies my practice. In terms of people and models, I do remember coming across Dan Stufflebeam’s CIPP model. If I was looking at a conceptual influence it may be that one. He had the notion of context, input, process and product. Another one is that I used to get concerned about evaluating a complex problem, then suddenly I came across program logic ideas. It was not heavily used until the 90s. Possibly Joe Wholey had referred to it. Now I understood how to unpack the intervention. Possibly having a scientific background helped me to understand the logic approach.

For those of us who have been in the evaluation field for a long time we are aware that changes occur in practice and I was keen to get John to reflect on them.

When I first started reading about evaluation it was about measuring impact and using rigid methods of determining impacts implied by quantitative methods of evaluation. Since then the field has expanded and been influenced by thinkers such as Michael Patton, the emphasis on utilisation by Marv Alkin and the expansion of the field – so in a sense evaluative inquiry could be used to influence programs as they were being delivered rather than as an assessment at the end. My book [Program Evaluation: Forms and Approaches] summarises my view of these things.

Skills and competencies are very important to John’s role as a teacher, so I wanted to find out what he saw as the main skills or competencies the evaluator of today needs to keep pace with emerging trends in practice.

First of all, they need skills and competencies. There seems to be a view among certain organisations that anyone can do evaluation. There are two sets of skills. One relates to epistemology, which gets to what knowledge is needed and the different models which could be used. The second set is methodological skills: at least an understanding of data management, and being able to be creative in designing methodologies which help you with the compilation of information from which you can make findings and conclusions. Evaluators also need the attitude that they can refine methodology if they need to. I am sure there are methodologies associated with technology which need to be learned. But there is a basic underlying rationale.

During John’s time as an evaluator and teacher I felt that he must have reflected on some of the social and political issues which we as a profession ought to be thinking about and trying to resolve.

Perhaps if I was going to put some energy into something, I think that evaluation in government is still at a basic level. I think in the helping professions, education and social interventions, we have a pretty good track record, but I don’t think that we have tackled the big problem of how government departments deal with evaluation, i.e. feedback loops around collecting data, producing findings and using them. Perhaps these organisations are larger and more complex, but I don’t see much accountability. There is general research around which shows that there is little effort in using evaluation in designing and delivering programs. So this is a major issue for the AES and for leaders in government to be looking at.

The AES has been an important part of John’s life and so I felt he would have views on how the Society can best position itself to still be relevant in the future. I was not wrong!

I have a strong view about this. To position ourselves, we should make more links with societies which have cognate interests, so we can influence the work of evaluation and applied research more – by talking to people like auditors and market researchers. There are groups out there who could have an indirect influence on the Society. We need some work to make links and partnerships, and policies which acknowledge that evaluation could be an umbrella approach useful to other professional organisations. I have long held that position. You hear about the huge conferences which auditors have. We should be in there, talking to these people about the fact that our knowledge could benefit them.

What do you wish you had known before starting out as an evaluator?

I would have benefitted from a graduate course/subject in sociology, particularly one that dealt with the sociology of knowledge. Unfortunately such courses were not readily available at university, and even if they had been, they would not have readily meshed with my science studies. Perhaps a course on the philosophy of science would have also been good, so that I could have come to grips with giants like Popper and Russell.

-------------------------------------------------------------

John Owen has 40 years of experience in evaluation and is currently a private consultant. His major roles in evaluation have included Director of the Centre for Program Evaluation, teacher, and workshop presenter.


 

Fellows: Patricia Rogers
 

April 2019
by Anthea Rutter

While Patricia Rogers is one of the most recently named Fellows, many of you will be familiar with her work from AES conference keynotes, BetterEvaluation and her report on Pathways to advance professionalisation within the context of the AES (with Greet Peersman). She is Professor of Public Sector Evaluation at RMIT University, and an award-winning evaluator, well known around the world.

While she is one busy lady, I managed to catch her at the last conference in Launceston, which was apt because conferences were a key thread in her reflections. 

Patricia talked to me about her interest in different models of evaluation and her passion for looking for ideas that would make a difference.  One of those ideas was Michael Scriven’s goal-free evaluation.

In 1986 I was working in Sydney but about to move back to Melbourne to work in local government.  The AES conference was on in Sydney – I hadn’t heard about it, but I went to meet up with some people after Michael Scriven’s opening keynote and saw people in uproar over the notion that you could and perhaps should evaluate without reference to the stated goals.

That was my first introduction to the AES.  The following year I went to the AES conference in Canberra and was introduced to program logic, as being done by Brian Lenne, Sue Funnell and others in NSW.

Patricia went on to write a book with Sue Funnell, Purposeful Program Theory.

What are your main interests now?

I’m interested in all sorts of ways that evaluation, and evaluative thinking, can be more useful. I guess I’m particularly interested in how to develop, represent and use theories of change. At first, I was interested in theories of change in terms of developing performance indicators, but then I learned how useful they could be for helping people have a coherent vision of what they are trying to do, for bringing together diverse evidence, and for supporting the adaptation of learning from successful pilots to other contexts.

Another area of ongoing interest for me is how to address complexity.  Again this stemmed from an AES conference – I can see a common thread here!  I was puzzling over how to make sense of an evaluation involving hundreds of projects with diverse types of evidence.  Michael Patton gave a keynote drawing on Brenda Zimmerman’s ideas about simple, complicated and complex. It gave me a way to distinguish between different types of challenges in evaluation and different strategies to address them.

Who has influenced your wide-ranging interests?

The AES has been pivotal. I was reading down the list of fellows, and I really felt pleased that I know them all and I have worked with a lot of them and respect them. I have learnt from conference sessions, had helpful feedback, plus mentoring and peer support – that sort of generosity and friendship. In terms of individual people, Jerry Winston’s insights into evaluation have been amazing. I met him 30 years ago when I started teaching at Phillip Institute (now RMIT University). His approach around systems, seeing evaluation as a scientific enquiry, and using adult learning principles for evaluation and evaluation capacity building were way ahead of everyone else. In many ways I’m still catching up to and understanding his thinking.

In terms of practice and theory Michael Patton has also resonated with me. I value his consistent focus on making sure evaluation is useful, especially through actively engaging intended users in evaluation processes, his use of both quantitative and qualitative data, and his incorporation of new ideas from management and public administration into his practice.

Evaluation has changed a lot over the 30 years Patricia has been in the field. What has she noticed most?

One of the problems is that while the field of evaluation has changed, the common understanding of evaluation has not always kept up. So there continue to be misconceptions, such as that evaluation is only about measuring whether goals have been achieved. There is also a perception of evaluation as being low-quality research, i.e. that if you can’t make it as a serious researcher then you do low-quality research which is called evaluation. Whereas good quality evaluation, which needs to be useful and valid and ethical and feasible all at the same time, is enormously challenging and also potentially enormously socially useful – not just in terms of producing findings but in supporting the thinking and working together to identify what is of value and how it might be improved.

I agree, evaluation is never an easy endeavour, so it is reassuring to hear from others that it doesn’t always go smoothly, but you can recover. What has been one of your biggest challenges?

One of my biggest disappointments was when I was working with a government department which had commissioned a big evaluation of a new initiative, but the people who had asked for it had moved on. The department was still obliged to do the evaluation to meet Treasury requirements, but they were not at all keen on it. I asked to meet with the senior management and tried to use an appreciative inquiry approach to identify what a good evaluation might be for them, and how we might achieve that.  I asked them, ‘Tell me about an evaluation which has really worked for you.’  There was a long silence, and then they said they couldn’t think of any.  It’s hard when people have had such a negative experience of evaluation that they can’t imagine how it could be useful. In hindsight, I should have called the issue – and either got commitment to the evaluation or walked away.

Patricia and I talked about the skills and competencies evaluators need today so that they can keep up with emerging trends. This led us to Ikigai – finding the intersection of what you like doing, what you are good at, what the world needs and what you can get paid for.

[Image: Patricia’s Ikigai diagram]

Getting this right, we agreed, would help you jump out of bed in the morning.

What do evaluators need today?

Evaluators all need to keep learning about new methods, new processes, and new technologies. It’s not just about summarising surveys and interviews any more. We need to take the leap into digital technology and crowd-sourced data and learning. For most people, it would be useful to learn more about how to use new technologies, including digital tools, to gather, analyse and report data and to support evaluative thinking.

Another important competency is managing uncertainty for yourself and your team, as situations and requirements will change over time.

Most of us also need to learn more about culturally responsive evaluation and inclusive practice, including being more aware of power dynamics and having strategies to address them.

We need to be engaged in ongoing learning about new ways of doing evaluation and new ideas about how to make it work better. That’s why my work is now focused on the BetterEvaluation project, an international collaboration which creates and shares information on ways to do evaluation better.

Beyond continuous learning what do evaluators need to be focused on over the next decade? What issues do they need to resolve?

It’s about democracy. It’s about being inclusive, being respectful, and supporting deliberative democracy and what that means. We should be ensuring that the voices of those less powerful, for example Indigenous groups and migrants, are heard, and that they are part of the decision-making.

The last question I asked Patricia, and can I say that this was mainly answered on the run – literally as I walked down with her to the session she was chairing! – was about the AES’s role in the change process.

The AES has an important role to play in improving professional practice in evaluation (including by evaluators and those managing evaluations). My colleague Greet Peersman and I have just produced a report for the AES on Pathways to Professionalisation which includes discussing the positioning of the AES. We need more people to know about the AES, we need more people to be more engaged in AES events like the conference, and we need more AES people engaged in public discussions.

How can we make the conference more accessible – for example, through more subsidised places or lower-cost options? How can the AES be more involved in discussions about public policy and service delivery?

-------------------------------------------------------------

Patricia Rogers is Professor of Public Sector Evaluation at RMIT University, and is currently on three years’ leave to lead the evidence and evaluation hub at the Australia and New Zealand School of Government.


 

May 2019
by Eunice Sotelo & Victoria Pilbeam


Many evaluators are familiar with realist evaluation, and have come across the realist question “what works for whom, in what circumstances and how?” The book Doing Realist Research (2018) offers a deep dive into key concepts, with insights and examples from specialists in the field.

We caught up with Brad Astbury from ARTD Consultants about his book chapter. Before diving in, we quickly toured his industrial chic coworking office on Melbourne’s Collins Street – brick walls, lounges and endless fresh coffee. As we sipped on our fruit water, he began his story with a language lesson.

Doing Realist Research (2018) was originally intended to be a Festschrift, German for ‘celebration text’, in honour of recently retired Ray Pawson of Realistic Evaluation fame. Although the book is titled ‘research’, many of the essays in the book, like Brad’s, are in fact about evaluation.

The book’s remit is the practice and how-to of realist evaluation and research. Our conversation went wide and deep, from the business of evaluation to the nature of reality.

His first take-home message was to be your own person when applying evaluation ideas.

You don’t go about evaluation like you bought something from Ikea – with a set of rules saying screw here, screw there. I understand why people struggle because there’s a deep philosophy that underpins the realist approach. Evaluators are often time poor, and they’re looking for practical stuff. At least in the book there are some examples, and it’s a good accompaniment to the realist book [by Pawson and Tilley, 1997].

Naturally, we segued into what makes realist evaluation realist.

The signature argument is about context-mechanism-outcome, the logic of inquiry, and the way of thinking informed by philosophy and the realist school of thought. That philosophy is an approach to causal explanation that pulls apart a program and goes beyond a simple description of how bits and pieces come together, which is what most logic models provide.

[The realist lens] focuses on generative mechanisms that bring about the outcome, and looks beneath the empirical, observable realm, like pulling apart a watch. I like the approach because as a kid I used to like pulling things apart.

Don’t forget realist evaluation is only 25 years old; there’s room for development and innovation. I get annoyed when people apply it in a prescriptive way – it’s not what Ray or Nick would want. [They would probably say] here’s a set of intellectual resources to support your evaluation and research; go forth and innovate as long as it adheres to principles.

Brad admits it’s not appropriate in every evaluation to go that deep or use an explanatory lens. True to form (Brad previously taught an impact evaluation course at the Centre for Program Evaluation), he cheekily countered the argument that realist evaluation isn’t evaluation but a form of social science research.

Some argue you don’t need to understand how programs work. You just need to make a judgment about whether it’s good or bad, or from an experimental perspective, whether it has produced effects, not how those effects are produced. Evaluation is a broad church; it’s open for debate.

If it’s how and why, it’s realist. If it’s ‘whether’ then that’s less explicitly realist because it’s not asking how effects were produced but whether there were effects and if you can safely attribute those to the program in a classic experimental way. Because of the approach’s flexibility and broadness, you can apply it in different aspects of evaluation.

Brad mused on his book chapter title, “Making claims using realist methods”. He preferred the original, “Will it work elsewhere? Social programming in open systems”. So did we.

The chapter is about external validity, and realist evaluation is good at answering the question of whether you can get something that worked in some place with certain people to work elsewhere.

Like any theory-driven approach, realist evaluation can answer multiple questions. Most evaluations start with program logics, so we can do a better job at program logics if we insert a realist lens to help support evaluation planning, and develop monitoring and evaluation plans, the whole kit and caboodle.

Where realist approaches don’t work well is estimating the magnitude of the effect of a program.

As well as a broad overview of where realist evaluation fits in evaluation practice, Brad provided us with the following snappy tips for doing realist research:

Don’t get stuck on Context-Mechanism-Outcome (CMO)

When learning about realist evaluation, people can get stuck on having a context, mechanism and outcome. The danger of the CMO is using it like a generic program logic template (activities, outputs and outcomes), and listing Cs, Ms and Os, which encourages linear thinking. We need to thoughtfully consider how they’re overlaid to produce an explanation of how outcomes emerge.

A way to overcome this is through ‘bracketing’: set aside the CMO framework, build a program logic and elaborate on the model by introducing mechanisms and context.

Integrate prior research into program theory

Most program theory is built only on the understanding of stakeholders and the experience of the evaluator. This means we’re not being critical of our own and stakeholders’ assumptions about how something works.

A way to overcome this is through ‘abstraction’: through research, we can bring in wider understandings of what family of interventions is involved and use this information to strengthen the program theory. We need to get away from ‘this is a very special and unique program’ to ‘what’s this a case of? Are we looking at incentives? Regulation? Learning?’ As part of this work, realist evaluation requires evaluators to spend a bit more time in the library than other approaches.

Focus on key causal links

Brad looks to the causal links with greatest uncertainty or where there are the biggest opportunities for leveraging what could help improve the program.

When you look at a realist program theory, you can’t explore every causal link. It’s important to focus your fire, and target evaluation and resources on things that matter most.

When asked for his advice to people interested in realist evaluation, Brad’s response was classic:

Just read the book ‘Realistic Evaluation’ from front to back, multiple times.

As a parting tip, he reminded us to aspire to be a theoretical agnostic. He feels labels can constrain how we do the work.

To a kid with a hammer, every problem can seem like a nail. Sometimes, people just go to the theory and methods that they know best. Rather than just sticking to one approach or looking for a neat theoretical label, just do a good evaluation that is informed by the theory that makes sense for the particular context.

-------------------------------------------------------------

Brad Astbury is a Director at ARTD Consultants. He specialises in evaluation design, methodology, mixed methods and impact evaluation.

Eunice Sotelo, research analyst, and Victoria Pilbeam, consultant, work at Clear Horizon Consulting. They also volunteer as mentors for the Asylum Seeker Resource Centre’s Lived Experience Evaluators Project (LEEP).


 

 

Fellows: Anona Armstrong

March 2019
by Anthea Rutter

Although a number of AES members have founded consultancies to channel their evaluation work, it is another thing to think about – and actually achieve – the founding of a professional society. This is exactly what Emeritus Professor Anona Armstrong did. Through her company, Evaluation Training & Services, the fledgling society was born in the early 80s. Not only did Anona found the AES, she also had the honour and distinction of having a piece of music written for her and performed at the AES International Conference in 1992.

Unfortunately, I wasn’t able to interview Anona in person, but I am very aware of her achievements – Anona and I go back a long way! Though I know some of the earlier history of the society, it was important to get an accurate record, so the first question on our string of emails was:

Forming a society is no mean feat…how did it happen? I guess I sort of anticipated her answer (something along the lines of ‘well…Rome wasn’t built in a day’).

Like all good things the society was formed by slow steps. It started with the first ‘National Evaluation Conference’ which was held in 1982 by the Department of Psychology and the Program in Public Policy Studies at the University of Melbourne.

I compiled a Directory of Australian Evaluators in 1983, and 99 people responded, identifying 52 different categories in which they were conducting evaluation. This Directory became the foundation for a mailing list, and the building blocks for the development of the Australasian Evaluation Society (AES).

Anona also acknowledged that the growth in the membership of the AES was due “in no small way to formal teaching but also to evaluation training provided by AES members”.

Like many evaluators, I was curious about how others began their careers, particularly in the ’80s when evaluation was still a fledgling field in Australia.

So how did you get into evaluation?

In the 80s most people had never heard of evaluation. I was regularly asked ‘What do you mean?’ I remember receiving phone calls from people asking if I could do an evaluation of their property. At the time, evaluation was a novelty. Many government program managers thought that setting performance objectives for government programs could not be done.

Anona was referring to her foray into impact assessment, when she realised that there were no accepted measures. So she looked into the ‘new’ research on social indicators, which gave her the “foundation for measuring impact as well as performance management in the public sector.” It’s no less than what I would expect from someone who founded the AES.

As an evaluator I was also curious about Anona’s thoughts on some of the early writers who influenced the field of evaluation in Australia. I was not disappointed – I got a valuable history lesson on her take on the ascent of evaluation in this country.

Evaluation in Australia owes its origins to the US. Madaus, Stufflebeam and Scriven (1983) traced evaluation in the US back to the educational reforms of the 19th century, but modern evaluation is usually associated with the 1960s when the US Government’s General Accounting Office (GAO) sponsored evaluation of the large-scale curriculum development projects introduced under President Lyndon Johnson’s ‘War on Poverty’ (Madaus et al., 1983).

By the 70s, new conceptual frameworks were introduced that were specific to evaluation (e.g. goal-free evaluation, Scriven, 1974; naturalistic evaluation, Guba and Lincoln, 1981; needs assessment, Stufflebeam, 1977; systematic evaluation, Rossi, Freeman and Wright, 1979; utilisation-focused and qualitative evaluation, Patton, 1978, 1980).

As a practitioner who has been in the evaluation game for a long time, I was very keen to get Anona’s thoughts on the changes she had seen in the field of evaluation.

A lot has changed! The discipline of evaluation in Australia in those early years was owned by academia and generally regarded as a form of research. The main purposes for evaluation in those days were to improve government programs or to monitor performance.

Evaluation has expanded: from a focus on government programs to all areas of endeavour; from a small field to a core competency of professional roles; and from academia to consulting. There’s also an increased focus on internal organisation evaluation and performance measurement.

Another area I was keen to get Anona’s input on was the skills that evaluators need now.

The basic evaluation skills have not changed very much, but new skills are required for the use of technology, implementing agile organisations and data mining.

In her speech to the Australasian Evaluation Society International Conference in 2015 on the future of evaluation, Anona expanded on this theme.

Governments are coping with much more complex international and local environments. They face new challenges: globalisation of trade, global conflict, climate change, the gap between rich and poor, rapid advances in information and communications technology, and generational shifts in values. Locally, Australian governments are experiencing a combination of slower economic growth, an aging population, shrinking PAYE taxes, and growth in global companies that can manipulate their finances to minimise their tax. Society, too, is experiencing massive changes evident in global migration, and the influence of social media. At the same time, a more informed electorate is emerging, composed of a diversity of communities with many different values, but all with rising expectations.

Anona then moved on from the original question of what skills are needed to what the role of evaluators could be in the present day.

In this environment, what is the role of evaluators? Well, we still have a role within organisations as designers of programs, determining needs and evaluating performance.

I then took this question further and asked about the areas in which evaluators and the field of evaluation could make a difference.

Anona suggested five directions in which evaluators could have a major impact:

  • Addressing social issues

Now is the time to extend the focus of our activities and use our skills to address some of the larger social issues. Evaluators have a role in establishing the value to society, and to organisations, of actions that count as corporate social responsibility and sustainability. While sustainability has different meanings in different contexts, there is a need for measures that allow comparability at least between entities engaged in the same industry or service.

  • Flexibility of methods

Evaluators have traditionally used social science methods to determine the worth of programs. The debate over qualitative versus quantitative methodologies is surely over. We need the different data provided by both methodologies. The new debate is probably about social science methods versus financial methods. Value for money is becoming a standard component of evaluation. Whether in health, education or social services, it is time that evaluators addressed the reality that financial decisions drive governments as much as political ones and answering financial questions requires financial analysis.

  • New technology

Evaluators need to take advantage of new data mining and other IT systems that are powerful tools for analysis and communication. This means adding financial analysis, modelling and data mining to the evaluator’s competency frameworks.

  • Growing importance of governance

Across the world, there is a greater emphasis on best practice governance. Corporate governance is defined as the overarching framework of rules, relationships and processes within and by which authority is exercised and controlled in order to achieve objectives. Standards for governance are now issued by governments and professional associations. In the higher education system, universities must meet the governance standards set by TEQSA, called the Threshold Standards.

  • Growing need for evaluation

Trade agreements with New Zealand, several of the 10 ASEAN countries, India, South Korea and China, etc. are opening up new avenues for Australian trade and services. Services generate 72.7% of Australia’s GDP. The services that will be required are financial, education, business, transport, health, and government services. Every new government program will require an evaluation.

As a final point, I asked Anona what role the AES should have in ensuring the future of evaluation. Her response was very much Anona – visionary and straight to the point.

The AES must address some of the big issues facing society, not only Indigenous and failing organisational issues; not only failures, but how to achieve success.

The AES may need to fund some significant projects to market itself. There’s an opportunity to tie the image of AES to the use of agile technology and to the investigation of the new organisation structures, modes of employment, threats to democracy and of unstable government, and cross-border cultural conflicts.

-------------------------------------------------------------
Emeritus Professor Anona Armstrong is the Director of the Centre for Corporate Governance Research at Victoria University of Technology, and Chair of the Board of the Southern Cross Institute of Higher Education. She was made a Member of the Order of Australia (AM) in recognition of her contribution to community and education.

The material for this article is taken from interview questions completed by Anona as well as extracts from her welcome address to the Australasian Evaluation Society International Conference, Reaching Across Borders, Melbourne, 5–9 September 2015.