Workshop: Evaluand Analysis, or what is a policy or program? (Online 19 & 26 July 2023)
Date and time: Wednesday 19 July and Wednesday 26 July, 10.00am to 1.00pm AEST (registration from 9.15am). Registrants are to attend both sessions (full-day workshop, two sessions).
Venue: Via Zoom. Details will be emailed to registrants just prior to the workshop start time
Facilitator: Andrew Hawkins (Chief Evaluator, ARTD)
Register online by: 18 July 2023, unless sold out earlier; limited to 25 participants.
Fees (GST inclusive): Members $260, Organisational member staff $375, Non-members $425, Student member $125, Student non-member $210. *Students must send proof of their full-time student status to
All evaluation starts with questions. Unsurprisingly, asking the right questions lies at the heart of successful evaluation. What is this thing? What can be known about it? What information is most useful to know?
The purpose of this workshop is to provide evaluators with a range of lenses through which to view the ‘evaluands’ often encountered in social policy and program evaluation. Being explicit about what the evaluand ‘is’ will give evaluators greater confidence to navigate the dizzying array of approaches to evaluation. This is crucial for cost-effective and valuable evaluation that makes a difference.
The workshop will use the metaphors of the telescope, the microscope, the radar and the magnifying glass to compare and contrast four of the most fundamental and divergent perspectives from which any evaluation may commence: experimental, realist, systems, and propositional evaluation.
An important aspect of the training is learning how to adjust to the strengths, weaknesses, uncertainties, blind spots, and priorities that are apparent in any view and be able to apply different approaches to an evaluand at any stage in its lifecycle, including before the first participant is enrolled.
The workshop will suit those willing to question their fundamental assumptions about what a program ‘is’ and about the most realistic and powerful role the evaluator can play in contributing to positive change in the world.
Evaluation often appears to be a discipline that requires extensive training in social science research methods and methodology. While there is no doubting the importance of these methods, it is easy to miss that evaluation is something all people do all the time, and ways of doing it systematically should be within reach of any citizen or public servant.
This workshop will equip participants to apply their natural ability for evaluation to the problems of social life. It will focus on four visual aids that augment the naked eye, and will provide the ability to adopt a different perspective on the evaluand based on the circumstances, needs and constraints of any given evaluation. Evaluators, like any worker in the real world, need different tools for different jobs.
In workshop 1, participants will consider the fundamental nature of a social policy evaluand from the perspectives of social science, public sector accountability, complexity science, and praxeology or reasoned action – such as whether it is most usefully considered as:
- a theory or a hypothesis (social science, experimental and realist)
- a promise (accountability)
- an intervention into an existing set of relationships (systems science)
- a plan or a proposition for action (propositional evaluation)
We will then discuss some example evaluands and consider which perspective might be most useful and how these different perspectives on the evaluand lead to different approaches to the core role of evaluation. We will discuss how experimental, realist, systems, and propositional evaluation treat:
- The unit of analysis
- The role of context
- The relative focus on explaining the past, the present or the future
- The attitude towards the ‘validity’ and ‘shelf-life’ of the knowledge being created
- The dominant impulse for method selection and data analysis
In workshop 2 we will focus on the least well-known of the four approaches to evaluation – propositional evaluation. This approach casts the evaluator in the role of critical friend, concerned with generating evidence and insight to identify and manage the risk that an evaluand does not lead to its intended outcome(s).
Participants will be shown how to set out an evaluand as a proposition in the form of premises and conclusions. They will learn how to evaluate the proposition and test whether it makes sense ‘on paper’ (i.e., test if it is ‘valid’), and think about how they would test it ‘in reality’ (i.e., test if it is ‘well-grounded’). The evaluand will be ‘valid’ if the form of the argument provides confidence that if the premises hold (i.e., outputs are generated and assumptions hold), then the conclusion (i.e., the outcomes) will follow. It will be ‘well-grounded’ when we use social science research methods to determine whether, for whom and to what extent the premises did actually hold.
The focus will be on establishing the validity of the argument. Validity is established with the use of necessary and sufficient conditions. The idea is that each of the conditions or premises (i.e., outputs and assumptions) must be necessary for the program to work as intended (otherwise there are redundancies), and collectively these premises must be sufficient to bring about the outcome. Much of the rest of evaluation theory tends to focus on the well-groundedness of these premises in reality, and methods of establishing this are not featured heavily in the workshop.
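The necessity-and-sufficiency test described above can be sketched in code. The following toy Python sketch is not from the workshop materials – the premise names are invented for illustration – but it shows the two checks in miniature: premises are jointly sufficient if the conclusion follows when they all hold, and each is necessary if removing it breaks the conclusion.

```python
# Toy sketch of a propositional evaluand (hypothetical names, not from the
# workshop): premises are outputs and assumptions, the conclusion is the
# intended outcome.
premises = [
    "training sessions delivered",    # output
    "participants attend regularly",  # assumption
    "skills are applied on the job",  # assumption
]

def conclusion_follows(held):
    """Hypothetical causal claim in conjunctive form: the outcome
    follows only when every premise holds."""
    return all(held.values())

# Sufficiency: do the premises, taken together, guarantee the conclusion?
sufficient = conclusion_follows({p: True for p in premises})

# Necessity: is each premise doing real work? If the conclusion still
# follows with a premise withheld, that premise is redundant.
redundant = [
    p for p in premises
    if conclusion_follows({q: (q != p) for q in premises})
]

print("jointly sufficient:", sufficient)      # True
print("redundant premises:", redundant)       # [] - each premise is necessary
```

A premise that turns up in the `redundant` list is a candidate for removal from the argument; in practice, this is where an evaluator would revisit the program logic rather than delete the premise mechanically.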
The learning strategies will include short introductions to ideas (5-10 minutes) followed by short bursts of individual analysis in a large-group format – this allows participants to work (with their microphone muted and volume turned down) while others ask questions to clarify. Sparing use of small-group discussions will occur towards the end of the workshop to ensure participants have enough content to ‘stay on topic’ while in breakout rooms.
The objective of the workshop is for practitioners to feel confident they can set out the core aspects of any evaluand at any stage in its life cycle, prioritise aspects most in need of empirical evaluation and feel confident in knowing what they need to consider in developing an evaluation that will be useful to those intending to make use of its findings.
This workshop aligns with competencies in the AES Evaluator’s Professional Learning Competency Framework. The identified domains are:
- Domain 1 – Evaluative attitude and professional practice
- Domain 2 – Evaluation theory
- Domain 3 – Culture, stakeholders and context
- Domain 4 – Research methods and systematic inquiry
Who should attend?
This workshop could be of interest to evaluators from novice to advanced levels.
Novices might find the workshop content immediately useful to demystify evaluation jargon like ‘program logic’ or ‘theory of change’ and help them plan out a credible and cost-effective evaluation.
Intermediate evaluators might appreciate the way it helps them improve their program logic and prioritise evaluation questions for an evaluation. It might provide the pragmatic theoretical underpinning for the work they have always done but struggled to support with a ‘theory of evaluation’ suited to evaluating social policy and programs in complex adaptive systems.
Old hands may find the approach useful as a stimulus for reflecting on, critiquing, and advancing their own ideas about the nature of evaluation. They may find it challenges assumptions about the primacy of social science research in evaluation. It may also help some create their own answers to questions like ‘what makes a program logic “logical”?’ or ‘where is the “theory” in a theory of change?’
Theorists will be interested to critique the use of a configurational theory of causality based on establishing necessary and sufficient conditions for an intended change.
Workshop start times
- VIC, NSW, ACT, TAS, QLD: 10.00am
- SA, NT: 9.30am
- WA: 8.00am
- New Zealand: 12.00pm
- For other time zones please go to https://www.timeanddate.com/worldclock/converter.html
About the facilitator
Andrew is Chief Evaluator and a partner at ARTD Consultants, where he has worked for the last 16 years as a full-time evaluator. He has led or participated in hundreds of evaluation projects. Andrew has a deep interest in evaluation theory and the history and philosophy of science. He was at one time co-chair of the AES Realist Evaluation Special Interest Group and is currently co-chair of the AES Systems Evaluation Special Interest Group. He believes (apologies to Marx) that ‘social scientists have sought to understand how the world works; the point of evaluation is to understand how to change it’. A fuller treatment of these ideas can be found at www.propositionalevaluation.org
Event Date: 19 Jul 2023 10:00am
Event End Date: 26 Jul 2023 1:00pm
Cut Off Date: 18 Jul 2023 12:00pm