
Evaluators as brokers of tech: How to vet and integrate tools without 'drinking the Kool-Aid'

By Dr Val Malika Reagon

Technology in evaluation is no longer a future concept; it's a daily reality. We're navigating a tech tsunami: AI dashboards, digital storytelling platforms, and mobile data tools all promise to make our work faster, more inclusive, or magically insightful. The marketing is seductive: "faster insights," "real-time feedback," and "inclusive engagement." But here's the truth: more technology doesn't always mean better evaluation. 
I discovered this through experience. A supposedly revolutionary AI tool was brought into a project I was involved with. It promised impressive automated theme extraction, appealing visual outputs, and instant analysis. In practice, it overlooked context, misinterpreted community stories, and distorted nuance. Ultimately, it squandered weeks and damaged trust with partners who felt neglected and misrepresented.

This experience changed my perspective. Evaluators aren't merely users of technology. We're gatekeepers. We are increasingly seen as brokers of technology: guiding its selection, ensuring ethical use, and assessing its long-term value. However, these decisions are frequently made without clear roadmaps, consultation, or consideration of what happens after the pilot concludes.

As evaluation adapts to rapid technological change, this article examines how evaluators can embrace their emerging role as tech brokers, navigating complexity with discernment and grounding decisions in equity, context, and evidence. 

Why evaluators are being pulled into the tech conversation

Evaluation today demands faster, more visual insights, pushing tools like AI summarizers, mobile data collection platforms, and automated dashboards into our workflows. Funders and program teams want real-time feedback, while communities want to see their stories meaningfully reflected in the data. The pressure is on evaluators to deliver it all.

But here's where it gets tricky: these tools often land in our laps with a mandate to "use them," yet without a parallel conversation about what it takes to use them responsibly. 

Before implementation, we seldom pause to ask: 
  • Are we technically ready? 
  • Is this culturally relevant? 
  • What are the ethical risks? 
  • Can we sustain it? 

Too often, these questions go unasked and unbudgeted.

Evaluators are being positioned as frontline tech decision-makers, yet we are not always trained, resourced, or supported in that role. It's not about rejecting innovation; it's about stepping into a new kind of leadership as strategic intermediaries, bridging community needs, tool limits, and evaluation goals. 

Shiny tool syndrome: a cautionary tale

In a recent evaluation project, our team was encouraged to implement an AI tool to "speed up data analysis." The promises were bold: automatic theme extraction, sentiment analysis, and sleek, interactive dashboards. But no one asked the critical questions: 
  • Who trained the model? 
  • Does it reflect the population we serve? 
  • Can we audit how it makes decisions? 
  • Do we even have the right data volume or quality to justify this? 

The tool couldn't parse the community's Creole dialect, flagged sarcastic comments as "hostile," and missed nuanced themes like youth resilience that mattered most to stakeholders. What we received was impressive output with tenuous substance. Weeks later, the tool was shelved. 

The real cost? It's not just wasted time and resources; it also erodes trust with community partners who feel sidelined and misrepresented.

A poor tech decision isn't merely a budget hit. It can skew findings, disrupt relationships, and delay action.

What does a tech broker actually do?

If evaluators are to assume the role of tech brokers, we need clarity on what that actually entails. At a minimum, it includes: 

Context translator
You interpret community needs, organizational values, and cultural dynamics. For instance, you ensure that a mobile survey platform accommodates local languages and that the technology serves those priorities rather than vice versa. 

Tool vetting specialist
You ask the tough questions:
  • Is this accessible to users with disabilities? 
  • Will it work offline in rural or low-bandwidth settings? 
  • Can it be used with minimal training? 

Hype filter
You resist being dazzled by features and demand proof. That means pushing vendors to demonstrate real-world outcomes like improved decision-making, not just slick interfaces.

Sustainability assessor
You think beyond the pilot:
  • Can the tool be maintained or updated?
  • Who will train new staff?
  • Is there funding or internal capacity to support it long-term?
A tool that cannot survive beyond the pilot is a liability, not a solution.

Ethics champion
You advocate for informed consent and equitable data ownership, and you ensure community members have a voice in how tools are used and how results are shared.

A tool vetting framework for evaluators

Whether it's AI-powered software, a mobile survey platform, or a digital storytelling tool, every piece of technology should earn its place in your evaluation toolkit. This straightforward framework can assist you in asking the right questions before implementation.  

DIMENSION: KEY QUESTIONS
Purpose Fit: Does this solve a real problem? Is there a meaningful gap this tool addresses?
Context Fit: Does it fit our cultural, logistical, and digital reality?
User Inclusion: Is it accessible and inclusive for all stakeholders, including those with disabilities or low-tech access?
Capacity Match: Can our team realistically use, train on, and maintain this tool over time?
Evidence of Value: Has this worked in similar contexts? Are the outcomes measurable and relevant?
Ethical Soundness: Who owns the data? What are the risks of misuse, bias, or harm?

Pro tip: Asking even two or three of these questions early can save months of backtracking and rebuilding. I learned this while implementing electronic health records in the early 2000s. 
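
If your team wants to make the checklist harder to skip, it can even be encoded as a lightweight pre-implementation gate. Below is a minimal Python sketch: the six dimensions come straight from the table above, but the yes/no/unknown answer scale, the `vet_tool` function, and the flagging logic are illustrative assumptions rather than a prescribed AES rubric.

```python
# A minimal sketch of the vetting framework as a pre-implementation checklist.
# The six dimensions come from the table above; the answer scale and the
# "flag" logic are illustrative assumptions, not a prescribed rubric.

DIMENSIONS = {
    "Purpose Fit": "Does this solve a real problem? Is there a meaningful gap this tool addresses?",
    "Context Fit": "Does it fit our cultural, logistical, and digital reality?",
    "User Inclusion": "Is it accessible and inclusive for all stakeholders?",
    "Capacity Match": "Can our team realistically use, train on, and maintain this tool over time?",
    "Evidence of Value": "Has this worked in similar contexts? Are the outcomes measurable and relevant?",
    "Ethical Soundness": "Who owns the data? What are the risks of misuse, bias, or harm?",
}

def vet_tool(tool_name: str, answers: dict[str, str]) -> list[str]:
    """Return the dimensions that should block or delay implementation.

    `answers` maps each dimension to "yes", "no", or "unknown".
    Anything other than a clear "yes" is treated as an open question.
    """
    flags = []
    for dimension, question in DIMENSIONS.items():
        answer = answers.get(dimension, "unknown").lower()
        if answer != "yes":
            flags.append(f"{tool_name} - {dimension} ({answer}): {question}")
    return flags

# Example: an AI analysis tool that looks impressive but has open questions.
open_questions = vet_tool(
    "AI theme-extraction tool",
    {
        "Purpose Fit": "yes",
        "Context Fit": "no",        # e.g. it cannot parse the community's dialect
        "Evidence of Value": "unknown",
        "Ethical Soundness": "unknown",
    },
)
for flag in open_questions:
    print(flag)
```

Run against the cautionary tale above, a tool like the shelved AI analyser would be flagged on Context Fit and Ethical Soundness before any contract was signed, which is exactly the point: the questions get asked while saying no is still cheap.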

We don't need to be technologists. But we do need to be intentional

The role of the evaluator is evolving, but that doesn't mean we need to become coders or UX designers. What we need is discernment: the ability to choose tools with intention, push back against trends that are unproven, and speak up when technology threatens to undermine trust, inclusion, or rigor. 

Being a tech broker isn't merely about the tool itself; it's about the thoughtfulness behind how and why we use it. 

Final thoughts

As we lean into innovation, let's stay grounded in what matters most: evaluation that reflects the communities we serve, data that drives meaningful change, and tools that add clarity, not confusion.

We don't need to drink the Kool-Aid—just learn to read the label before passing it around. Asking hard questions isn't resistance—it's responsibility. That's how we protect the integrity of our work and ensure technology supports, rather than steers, our values. 


Disclaimer: This article was authored by Dr. Val Malika Reagon. The ideas presented were developed and structured in collaboration with ChatGPT, an AI tool by OpenAI, which supported content organization and refinement. 

-------------------------------


ABOUT THE AUTHOR

Dr Val Malika Reagon is a former CDC health scientist and epidemiologist and the Founder of Smith Hill Global Consulting. With 20+ years spanning hospital administration and global health systems, she now focuses on AI integration and ethical tech use in evaluation. Her work has shaped outcomes across the U.S. and PEPFAR countries.

Dr Reagon on LinkedIn


