AES submission to the Joint Committee of Public Accounts and Audit's (JCPAA) inquiry into the development of the Commonwealth Performance Framework.
ANZSOG and AES Joint Submission on the Enhanced Commonwealth Performance Framework
Download as PDF
Technology for Evaluation in Fragile and Conflict Affected States: An Introduction for the Digital Immigrant Evaluator
Authors: Cheyanne Scharbatke-Church & Aditi G. Patel
Description: Are you an old hand at evaluation but new to technology? This new guide helps the seasoned evaluator engage effectively with technology. Whether one works in humanitarian response to conflict, good governance, anti-corruption or peacebuilding, technology is being incorporated in ever-increasing and innovative ways. But where does technology fit into the field of evaluation in fragile and conflict-affected contexts? What are the opportunities and pitfalls an evaluator needs to be aware of? How does an evaluator who has seldom used technology in an evaluation learn where to begin? This paper addresses these questions specifically with the “digital immigrant” in mind. It targets experienced evaluation managers and seasoned evaluators who work in fragile and conflict-affected contexts and are not fluent in the application of technology to their evaluation practice.
The first magazine exclusively dedicated to gender-responsive evaluation, focusing on gender mainstreaming in the United Nations.
Why does gender-responsive evaluation matter for the SDGs?
As we enter the era of the 2030 Agenda and its Sustainable Development Goals, Transform issue #6 showcases the important role gender-responsive evaluation plays in ensuring that "no one is left behind".
DIY M&E: A step-by-step guide to building a monitoring and evaluation framework
A monitoring and evaluation framework (M&E framework) is an integral part of understanding how well your program (policy or system) has performed. It helps you plan and record the activities you need to assess your program’s performance and whether it is on the right track.
This guide is an easy-to-read starting point for developing an M&E framework. A fictional program used throughout the guide provides a helpful, practical example of the guidance in action. A Word template is also provided, so users can start drafting their own M&E framework straight away. Sections covered include: Introduction; About the program; Approach to evaluation; Monitoring approach; Data collection; and Other content possibilities.
The guide is useful for anyone who needs to draft an M&E framework. It is intended for program and policy managers who want to set their program up for success by considering their monitoring and evaluation needs. Its approachable style also makes it a useful educational resource for those involved in evaluation capacity building.
Download your copy here: http://resources.grosvenor.com.au/building-a-monitoring-and-evaluation-framework
Developing Monitoring and Evaluation Frameworks
by Anne Markiewicz and Ian Patrick
Developing Monitoring and Evaluation Frameworks is a practical book that provides clear, step-by-step guidance on how to develop a monitoring and evaluation framework in a participatory, logical, systematic, and integrated way. Authors Anne Markiewicz and Ian Patrick outline the key stages and steps involved, including: scoping the framework; identifying planned results; using program theory and program logic; developing evaluation questions; identifying processes for ongoing data collection and analysis; determining means to promote learning; reporting; and dissemination of results. A final chapter focuses on planning for implementation of the framework, with reference to the broader program and organizational context. The authors draw on their extensive experience in developing monitoring and evaluation frameworks to provide examples of good practice that inform organizational learning and decision making, while offering tips and guidelines that can be used to address common pitfalls.
Program Evaluation: a Plain English Guide
By Dana Cross, Grosvenor Management Consulting
For some, the art of program evaluation can seem a bit of a mystery. What is it? More importantly, how do I do it well?
Introducing Grosvenor's 11-step guide to program evaluation. This educational resource is intended for program managers and others who do not consider themselves experts but need a general understanding of what an evaluation might involve. Topics include:
• the what and why of program evaluation
• how to articulate the workings of your program using program theory and program logic
• the tools available for planning your program evaluation
• how to monitor program performance
• the best ways to communicate your findings
• and more.
Download your copy here.
American Evaluation Association Statement On Cultural Competence In Evaluation
A statement on Cultural Competence in Evaluation was drafted by the Cultural Competence in Evaluation Task Force of the American Evaluation Association's Diversity Committee, reviewed by the AEA Board of Directors, and approved by a vote of the full AEA membership.
The statement is the result of a recommendation made by the Building Diversity Initiative, an effort of the AEA and the W.K. Kellogg Foundation that began in 1999 to address the complexity of needs and expectations concerning evaluators working across cultures and in diverse communities.
To read more about this statement click here.
National M&E Guidelines established by the National Planning Commission of Nepal
Published in July 2013.
To improve the living standards of common people and to achieve the goals set forth by development plans, it is essential that the implementation of plans, policies, programmes and projects be effective. For this purpose, the role of monitoring and evaluation systems at different levels is critical. The National Planning Commission has accomplished a momentous task by preparing these National Monitoring and Evaluation Guidelines with the objective of improving and systematizing the monitoring and evaluation process.
Khil Raj Regmi, Chairman, Council of Ministers and Chairman, National Planning Commission
Available for download here.
Evaluating Communication for Development. A Framework for Social Change
by June Lennie and Jo Tacchi, RMIT University, Australia
Published by Routledge in 2013 www.routledge.com/books/details/9780415522595
This book presents a comprehensive framework for critically thinking about and understanding development, social change, and the evaluation of communication for development (C4D). This framework combines the latest thinking from a number of fields in new ways. It has been designed as a way to focus on achieving sustainable social change and to continually improve and develop C4D and other social change initiatives. The authors critique dominant measurement-oriented, upward accountability approaches to evaluation and offer an alternative holistic, participatory, learning-based approach based on systems and complexity thinking and other key concepts. The benefits and rigour of this approach are supported by numerous examples from projects undertaken by the authors over the past fifteen years.
The authors consider ways of increasing the effectiveness of evaluation capacity development from grassroots to management level in the development context, an issue of growing importance to improving the quality, effectiveness and utilisation of evaluation studies in this field. The book includes a critical review of the key approaches, methodologies and methods that are considered effective for planning evaluations, assessing the outcomes of C4D, and engaging in continuous learning. It also includes practical ideas and processes for implementing the framework and strategies for overcoming the many challenges associated with evaluating C4D. The authors highlight the need to take a long-term view of the value of this approach, which can be cost effective when its many benefits are considered.
This rigorous book will be of immense theoretical and practical value to students, scholars, professionals, and practitioners researching or working in development, communication and media, evaluation and program planning, and applied anthropology.
Research integration using dialogue methods
Published August 2009 as an e-book
Research and evaluation addressing real-world, complex problems—like restoration of wetlands, the needs of the elderly, effective disaster response and the future of the airline industry—require expert knowledge from a range of disciplines, as well as from stakeholders affected by the problem and those in a position to do something about it. This book charts new territory in taking a systematic approach to research integration using dialogue methods to bring together multiple perspectives. It links specific dialogue methods to particular research integration tasks.
Fourteen dialogue methods for research integration are classified into two groups:
- Dialogue methods for understanding a problem broadly: integrating judgements
- Dialogue methods for understanding particular aspects of a problem: integrating visions, world views, interests and values.
The methods are illustrated by case studies from four areas of research and evaluation: the environment, public health, security and technological innovation.
The e-book is available as a free download here and also as print-on-demand from the same site.
Suggested citation: McDonald, D, Bammer, G & Deane, P 2009, Research integration using dialogue methods, ANU E Press, Canberra.
World Bank Independent Evaluation Group
- Improving effectiveness & outcomes for the poor in health, nutrition and population
- Institutionalizing impact evaluation within the framework of a monitoring and evaluation system
- Insider insights: Building a results-based management and evaluation system in Colombia
- Egypt: Positive results from knowledge sharing and modest lending
Country-led monitoring and evaluation systems
Published by UNICEF, in partnership with the World Bank, UN Economic Commission for Europe, IDEAS (International Development Evaluation Association), IOCE (International Organisation for Cooperation in Evaluation), DevInfo and MICS, this book brings together the vision, lessons learned and good practices from twenty-one stakeholders on how country-led monitoring and evaluation systems can enhance evidence-based policy making. The book, as well as a PowerPoint presentation summarising the major issues, can be downloaded free of charge at http://www.unicef.org/ceecis/resources_10597.html
The book builds on the previous publication "Bridging the gap: The role of M&E in evidence-based policy making" [download], published in 2008 and presented and distributed at major M&E conferences worldwide.
Facilitating evaluation around the world: A description and assessment of the International Organisation for Cooperation in Evaluation
This paper was kindly coordinated by the IOCE Immediate Past President, Ross Conner, with a number of colleagues currently serving on the IOCE Board of Trustees, as well as former AES Board members Penny Hawkins and Gloria Esperanza Vela.
This paper revisits the spirit and actions of IOCE since its inception, attempting to capture its history through changing leadership and the growth of the organisation. We would like to share these important memories with you and stimulate your thoughts on how we can further move our global organisation forward, improve service to its members, and strengthen the role of evaluation in making this world an increasingly better place to live.
Books on evaluation
The Handbook of environmental policy evaluation
Earthscan Publishers, London
This book aims to be a practical tool for both academics and professionals interested in (environmental) policy evaluation. The handbook covers the general principles of policy evaluation, describes the specific characteristics of environmental policy evaluation, and gives ample treatment to a variety of evaluation approaches, illustrated with examples from all over the world. While the book focuses on environmental policy evaluation, it is useful for anyone interested in sound, up-to-standard policy evaluation methods.
The book is co-authored by Ann Crabbé (University of Antwerp) and Pieter Leroy (Radboud University Nijmegen). Download Flyer and order form (775 Kb.)
Journal of MultiDisciplinary Evaluation
Executive Editor: Chris L. S. Coryn
Editors: E. Jane Davidson, Kristin A. Hobson, Daniela C. Schröter, and Michael Scriven
Evaluations that Make a Difference: Stories from around the world
This project is supported by EvalPartners, the African Development Bank and the Inter-American Development Bank
Evaluations that Make a Difference: Stories from around the world is one of the first pieces of systematic research into the factors that contribute to high-quality evaluations that stakeholders use to improve programs and improve people's lives. This initiative collected stories about evaluations that made a difference, not only from the perspective of the evaluators but also from that of the commissioners and users. The stories in this collection powerfully convey the evaluations' findings and the ways the evaluations contributed to the impact of the programs.
The report can be accessed at: https://evaluationstories.wordpress.com/evaluation-story-publications/
The outstanding news is that this research supports what many of the wiser evaluators already knew. Evaluations can make a difference if they:
- Focus on making an impact
- Give voice to the voiceless
- Provide credible evidence
- Use a positive approach
- Ensure users and intended beneficiaries are actively engaged
- Embed evaluation within the programme
- Are recognized as important
- Have a champion within the program
This is just the beginning. Evaluators need to take a systematic approach to collecting, analyzing and using information to learn more about whether evaluations are making a difference and what factors are the most important. Yes, we need to evaluate our own work more often. Hopefully, this will inspire others to think more critically about evaluation design and implementation and to do more research into what works.