
Evaluators as brokers of tech: How to vet and integrate tools without 'drinking the Kool-Aid'

By Val Malika Reagon

Technology in evaluation is no longer a future concept; it's a daily reality. We're navigating a tech tsunami: AI dashboards, digital storytelling platforms, and mobile data tools all promise to make our work faster, more inclusive, or magically insightful. The marketing is seductive: "faster insights," "real-time feedback," and "inclusive engagement." But here's the truth: more technology doesn't always mean better evaluation. 
I learned this through experience. A supposedly revolutionary AI tool was brought into a project I was involved with. It promised impressive automated theme extraction, appealing visual outputs, and instant analysis. In practice, it overlooked context, misinterpreted community stories, and flattened nuance. Ultimately, it squandered weeks and damaged trust with partners who felt neglected and misrepresented.

This experience changed my perspective. Evaluators aren't merely users of technology. We're gatekeepers. We are increasingly seen as brokers of technology: guiding its selection, ensuring ethical use, and assessing its long-term value. However, these decisions are frequently made without clear roadmaps, consultation, or considering the implications after the pilot concludes.

As evaluation adapts to rapid technological change, this article examines how evaluators can embrace their emerging role as tech brokers, navigating complexity with discernment and grounding decisions in equity, context, and evidence. 

Why evaluators are being pulled into the tech conversation

Evaluation today demands faster, more visual insights, pushing tools like AI summarizers, mobile data collection platforms, and automated dashboards into our workflows. Funders and program teams want real-time feedback, while communities want to see their stories meaningfully reflected in the data. The pressure is on evaluators to deliver it all.

But here's where it gets tricky: these tools often land in our laps with a mandate to "use them," yet without a parallel conversation about what it takes to use them responsibly. 

Before implementation, we seldom pause to ask: Who will train staff and maintain the tool? What does responsible use cost after the pilot ends? Does it fit the community's languages, culture, and access to technology?

Too often, these questions go unasked and unbudgeted.

Evaluators are being positioned as frontline tech decision-makers, yet we are not always trained, resourced, or supported in that role. It's not about rejecting innovation; it's about stepping into a new kind of leadership as strategic intermediaries, bridging community needs, tool limits, and evaluation goals. 

Shiny tool syndrome: a cautionary tale

In a recent evaluation project, our team was encouraged to implement an AI tool to "speed up data analysis." The promises were bold: automatic theme extraction, sentiment analysis, and sleek, interactive dashboards. But no one asked the critical questions: Could it handle the community's language? Would it recognize local tone and context? Would it surface the themes that mattered most to stakeholders?

The tool couldn't parse the community's Creole dialect, flagged sarcastic comments as "hostile," and missed nuanced themes like youth resilience that mattered most to stakeholders. What we received was impressive output with tenuous substance. Weeks later, the tool was shelved. 

The real cost? Not just wasted time and resources, but eroded trust with community partners who felt sidelined and misrepresented.

A poor tech decision isn't merely a budget hit. It can skew findings, disrupt relationships, and delay actions. 

What does a tech broker actually do?

If evaluators are to assume the role of tech brokers, we need clarity on what that actually entails. At a minimum, it includes: 

Context translator
You interpret community needs, organizational values, and cultural dynamics. For instance, you ensure that a mobile survey platform accommodates local languages and that the technology serves those priorities rather than the other way around (a point made concrete in the sketch after this list of roles).

Tool vetting specialist
You ask the tough questions: Who owns the data? What happens when the pilot ends? Can our team realistically maintain it over time?

Hype filter
You resist being dazzled by features and demand proof. That means pushing vendors to demonstrate real-world outcomes like improved decision-making, not just slick interfaces.

Sustainability assessor
You think beyond the pilot. A tool that cannot survive beyond the pilot is a liability, not a solution.

Ethics champion
You advocate for informed consent and equitable data ownership, and you ensure community members have a voice in how tools are used and how results are shared.
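
To make the context-translator role concrete, here is a minimal sketch of what "accommodating local languages" can look like under the hood: a survey item stored with translated labels and a renderer that falls back to a default rather than failing silently. The structure, field names, and the Creole translation are illustrative assumptions, not tied to any particular survey platform.

```python
# Hypothetical multi-language survey item; the field names and the Creole
# translation are illustrative, not tied to any specific survey platform.
SURVEY_ITEM = {
    "id": "q1_wellbeing",
    "labels": {
        "en": "How would you describe your community's wellbeing this year?",
        "ht": "Kijan ou ta dekri byennèt kominote w la ane sa a?",
    },
    "default_language": "en",
}

def render_label(item: dict, language: str) -> str:
    """Return the question in the respondent's language, falling back
    to the default language instead of failing silently."""
    labels = item["labels"]
    return labels.get(language, labels[item["default_language"]])

print(render_label(SURVEY_ITEM, "ht"))  # Creole version
print(render_label(SURVEY_ITEM, "fr"))  # no French label -> English fallback
```

The design point is the fallback: a broker's job is to notice when "supports local languages" really means "defaults to English," and to surface that gap before deployment, not after.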

A tool vetting framework for evaluators

Whether it's AI-powered software, a mobile survey platform, or a digital storytelling tool, every piece of technology should earn its place in your evaluation toolkit. This straightforward framework can assist you in asking the right questions before implementation.  

Purpose fit: Does this solve a real problem? Is there a meaningful gap this tool addresses?
Context fit: Does it fit our cultural, logistical, and digital reality?
User inclusion: Is it accessible and inclusive for all stakeholders, including those with disabilities or low-tech access?
Capacity match: Can our team realistically use, train on, and maintain this tool over time?
Evidence of value: Has this worked in similar contexts? Are the outcomes measurable and relevant?
Ethical soundness: Who owns the data? What are the risks of misuse, bias, or harm?

Pro tip: Asking even two or three of these questions early can save months of backtracking and rebuilding. I learned this while implementing electronic health records in the early 2000s. 
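
For teams that want to operationalize the framework, it also translates naturally into a lightweight checklist script. The sketch below is my illustration, not part of the original framework: the 0-2 scale, the hard-fail rule, and the adoption threshold are assumptions to calibrate with your own stakeholders.

```python
# A sketch of the vetting framework as a scorable checklist. The dimensions
# mirror the table above; the 0-2 scale, the hard-fail rule, and the
# adoption threshold are illustrative assumptions, not fixed standards.

DIMENSIONS = [
    "Purpose fit",        # Does this solve a real problem?
    "Context fit",        # Cultural, logistical, and digital reality?
    "User inclusion",     # Accessible to all, including low-tech access?
    "Capacity match",     # Can we use, train on, and maintain it over time?
    "Evidence of value",  # Has it worked in similar contexts?
    "Ethical soundness",  # Who owns the data? Risks of misuse, bias, harm?
]

def vet_tool(name: str, scores: dict[str, int]) -> str:
    """Score each dimension 0 (fails), 1 (unclear), or 2 (clearly met)."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"Unscored dimensions: {missing}")
    # A zero on purpose or ethics is disqualifying, no matter the total.
    if scores["Purpose fit"] == 0 or scores["Ethical soundness"] == 0:
        return f"{name}: do not adopt"
    total = sum(scores.values())
    verdict = "pilot with monitoring" if total >= 10 else "needs more vetting"
    return f"{name}: {verdict} ({total}/12)"

print(vet_tool("AI theme extractor", {
    "Purpose fit": 2, "Context fit": 0, "User inclusion": 1,
    "Capacity match": 1, "Evidence of value": 0, "Ethical soundness": 1,
}))
```

Even this toy version enforces the spirit of the pro tip: a tool that scores zero on purpose or ethics never reaches the total-score debate.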

We don't need to be technologists. But we do need to be intentional

The role of the evaluator is evolving, but that doesn't mean we need to become coders or UX designers. What we need is discernment: the ability to choose tools with intention, push back against unproven trends, and speak up when technology threatens to undermine trust, inclusion, or rigor.

Being a tech broker isn't merely about the tool itself; it's about the thoughtfulness behind how and why we use it. 

Final thoughts

As we lean into innovation, let's stay grounded in what matters most: evaluation that reflects the communities we serve, data that drives meaningful change, and tools that add clarity, not confusion.

We don't need to drink the Kool-Aid—just learn to read the label before passing it around. Asking hard questions isn't resistance—it's responsibility. That's how we protect the integrity of our work and ensure technology supports, rather than steers, our values. 

Disclaimer: This article was authored by Dr. Val Malika Reagon. The ideas presented were developed and structured in collaboration with ChatGPT, an AI tool by OpenAI, which supported content organization and refinement. 

-------------------------------

ABOUT THE AUTHOR

Dr Val Malika Reagon is a former CDC health scientist and epidemiologist and the Founder of Smith Hill Global Consulting. With 20+ years spanning hospital administration and global health systems, she now focuses on AI integration and ethical tech use in evaluation. Her work has shaped outcomes across the U.S. and PEPFAR countries.

Dr Reagon on LinkedIn