AES Blog

Welcome to the AES Blog

Australasia has some excellent evaluators. More than that, we have an evaluation community full of ideas and a willingness to share. The AES has long provided a place for us to come together, at regional events and the annual conference, to develop our community. Now we're taking it online! The new AES blog will be a space for AES members – both new and experienced – to share their perspectives and reflect on their theory and practice. If you have an idea, please contact us. Please also view our blog guidelines.

The ten success factors for building evaluation capabilities in the public sector

by Andrew Benoy and Kale Dyer

With government finances tight, it is more important than ever for agencies to demonstrate that every dollar being spent is generating value.

So it makes sense that monitoring, evaluation and learning (MEL) is in the spotlight, and that many agencies are looking to build their internal MEL capabilities.

But building that MEL capability is no easy task. Drawing on our experience working with government agencies across Australia, the United Kingdom and Canada, we are pleased to share some key things you should consider when investing in and growing MEL capabilities in the public sector. 

The appetite for evaluation capacity building is growing but faces challenges

There is renewed interest among Australian government agencies in building evaluation capability.

As Katy Gallagher, the Minister for the Public Service, said last October: "Evaluation is [a] priority for this government. It helps us see if we're actually doing what we said we would. To understand what is working and what isn't. And being accountable to all Australians."

In the past 18 months, the Federal Department of Finance has released a new Commonwealth Evaluation Policy, a new Evaluation in the Commonwealth Resource Management Guide, and updated guidance around embedding evaluation planning into new policy proposals (NPPs). These complement and build on previous work, such as the Productivity Commission's Indigenous Evaluation Strategy.

This reflects a broader trend, with similar pushes in states and territories to outline requirements, recommend practices and update guidance (for example, NSW recently updated its evaluation policy), as well as in countries including Canada and the United Kingdom.

Government agencies are thinking more about how to get better at designing, conducting and commissioning evaluations of policies and programs. Agencies are also focusing on how they can embed cultures of reflection, learning and continuous improvement.

But agencies looking to scale up their internal evaluation capabilities face several challenges:
  • making the case for long-term investment in an environment of heightened fiscal pressure
  • balancing quick wins that demonstrate the value of evaluations with laying the foundation for long-term success
  • competition when recruiting for specialist evaluation skills in a tight labour market.

Meeting these challenges can require some difficult strategic decisions and deft management. 

There is a spectrum of evaluation operating models

A fundamental decision for agencies is the size and operating model of their evaluation capabilities. Government agencies that commission and conduct regular evaluations typically employ one of three operating models: a centralised evaluation function, a decentralised model with evaluation staff embedded in line areas, or a hybrid of the two.

While there are benefits and drawbacks to each approach, in our experience best practice is typically realised through a more centralised operating model. This can be achieved through either the fully centralised approach or a hybrid approach.

There are several benefits to this model. A centralised evaluation function can:

  • provide economies of scale relative to having discrete evaluation staff located across an agency
  • more easily drive capability development and the application of consistent evaluation practice across an agency
  • be more easily located alongside complementary functions, such as economic analysis, strategic policy, data and analytics, and performance measurement to increase the likelihood of value-adding cross-pollination
  • provide an additional level of independence by being separated from policy development and program delivery, which supports the credibility and objectivity of the evaluations
  • establish an identity and brand that can be used to drive change internally and attract external talent to the agency.

The challenge in implementing a centralised model is to retain strong links to, and knowledge of, the agency's activity areas, to ensure evaluation is practical and valuable from the front line through to strategic decision-making.

There are ten success factors

The task of establishing a central evaluation function can be daunting. There are myriad decisions to make, each of which will influence how effectively the function supports accountability, evidence-based decision making, learning, improvement and stakeholder feedback in the agency's policy and programs.

While each decision is important, in our experience ten factors are critical to the success of the evaluation function and how effectively it can support decision-making.

   – PLANNING (first three months)

  1. Get early agreement on the why. Engage early with your leadership team to clarify their expectations and get shared agreement on the business case for building evaluation capabilities.
  2. Be clear about the trade-offs. In a resource-constrained environment, you need to understand what you want your evaluation function to focus on. For example, do you want broad coverage across all your activities or deeper coverage of a few? Do you want to conduct more frequent evaluations of program delivery and quality, or do you want them to be less frequent and explore in depth what outcomes are occurring and why?
  3. Get the governance right. Establish internal governance arrangements that will ensure the evaluation unit's work is viewed as independent, credible and useful – by staff, ministers, program participants and the general public. This governance includes getting regular executive engagement to determine which projects the evaluation function should focus on, to increase the likelihood of sustained support from senior leaders.

    – BUILDING (first six months)

  4. Have a clear strategy. Establish an agency-wide strategy that sets clear and consistent expectations about how, when and why evaluations are conducted.
  5. Tailor the value proposition. Understand the diverse needs of business areas, tailor intellectual property and capability development to the needs of different parts of your agency, and strive to continually demonstrate the value of evaluations and evaluative thinking to staff across the agency.
  6. Identify champions in the senior executive. Leverage the knowledge and enthusiasm of senior leaders who understand the value of evaluations to champion the function across the agency. Identifying multiple champions is important given the rate of senior turnover in most agencies.

    – EMBEDDING (first twelve months)

  7. Develop accessible evaluation guidance. Develop easy-to-use templates and guidance materials that empower staff to build evaluative thinking into their business areas.
  8. Communicate and demonstrate value. Clearly communicate the evaluation function's value and how the evaluation process can contribute to agency-wide decision-making.
  9. Engage early. Engage with policy and program areas during the early stages of policy development or program implementation to ensure they are building monitoring and evaluation into their delivery plans.
  10. Communicate and clearly link the findings of evaluations. Make evaluation findings easily accessible across the agency, and, ideally, publicly. Get agreement during evaluation planning about the timing and nature of the decisions that evaluation findings will inform. It is also important to consider the implementation of recommendations and anticipate (where possible) the accountability, priority and levels of effort associated with each recommendation. 

Agencies need to source the key skills

Ultimately, to build internal evaluation expertise, agencies need staff with the right skills.

We have seen public sector agencies use a variety of strategies, including bringing together existing pockets of excellence from line areas, training staff in evaluation skills, developing capability as part of evaluation projects conducted by external providers, recruiting staff from agencies with known centres of excellence, and targeting lateral hires who have worked on evaluations. Most agencies will need to draw on several of these strategies.

Building and acquiring specialist capabilities and changing culture takes time. Agencies need to be ambitious, but also realistic about what is achievable in the short term.

As a first step, you can assess your current level of evaluation maturity and then chart a clear path forward.

As government and citizen expectations of evaluation grow, agencies need to build their capability. For leaders, the time to act is now.


Andrew Benoy (photo left) is a Canberra-based Principal and Kale Dyer (right) is a Perth-based Director at Nous Group, an Australian-founded international management consultancy. 


© 2023 Australian Evaluation Society Limited 
ABN 13 886 280 969 ACN 606 044 624
Address: 425 Smith St VIC 3065, Australia

We acknowledge the Australian Aboriginal and Torres Strait Islander peoples of this nation. We acknowledge the Traditional Custodians of the lands in which we conduct our business. We pay our respects to ancestors and Elders, past and present. We are committed to honouring Australian Aboriginal and Torres Strait Islander peoples’ unique cultural and spiritual relationships to the land, waters and seas and their rich contribution to society.