May 19, 2025
Ask Amina: Can a Synthetic Persona Speak for the Displaced?

Ali Madad
Author
Proofs of concept aren't mere tech demos—they're provocations.
They work like design fiction: tangible enough to engage us, yet incomplete enough to provoke necessary discomfort. Done right, they help us ask better questions before the future becomes policy.
A Workshop on AI Agents in Humanitarian Action
The “Ask Amina” demo was part of a workshop hosted by the United Nations University Centre for Policy Research titled AI Agents in Humanitarian Action: Exploring New Frontiers. Held in May 2025, the event convened policy makers, technologists, humanitarian workers, and academics to examine the implications of synthetic personas in peacekeeping, refugee support, and development contexts.
The agenda included live demonstrations of two AI-generated personas—Amina, a Sudanese refugee, and Abdalla, a paramilitary commander—followed by use case presentations from agencies like UNDP and a moderated open discussion.
Speakers included:
- Eduardo Albrecht (UNU-CPR, Mercy University)
- Liselott Hitz (formerly Doctors Without Borders)
- Eleonore Fournier-Tombs (UNU-CPR)
- Col. Matt Dawson (Columbia University)
- Joseph Aylett-Bullock (UNDPO)
- Sun-joo Lee and Shouryadipta Sarkar (UNDP)
Participants came from across the humanitarian spectrum—UN agencies, universities, tech firms, peacekeeping institutions—each bringing a different lens to the question: Should AI personas have a role in humanitarian action? And if so, how?
What’s Missing: No Anthropologist in the Loop
The working paper behind the Amina experiment introduces a compelling provocation: for synthetic personas to be truly representative, they must act more like anthropologists than chatbots.
That means conducting autonomous cultural research, organizing relevant artifacts, interpreting context through an ethical lens, and updating their knowledge in real time based on user interaction and lived conditions.
These so-called “anthropologist agents” aren’t just retrieval systems. They’re curators of living, evolving cultural portraits—tasked with understanding not just what people say, but what they mean within a social and historical context.
Amina, as demonstrated, lacked these deeper layers. Her responses were grounded in data, yes—but not autonomously gathered, ethically framed, or dynamically evolving. The promise of such an anthropologist agent system remains a frontier, not a feature.
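For illustration, here is a rough, hypothetical skeleton of the loop such an "anthropologist agent" might run. Every name in it is an assumption for this sketch; the working paper describes capabilities, not an implementation.

```python
# Hypothetical skeleton of an "anthropologist agent" loop.
# Nothing like this exists in the Amina demo; names are illustrative only.

class AnthropologistAgent:
    def __init__(self) -> None:
        # An evolving cultural portrait, as opposed to a fixed knowledge base.
        self.portrait: dict = {}

    def research(self, topic: str) -> list[str]:
        """Autonomously gather cultural material (field reports, oral histories).
        This autonomous-research step is the frontier the paper points to."""
        raise NotImplementedError

    def interpret(self, artifacts: list[str]) -> dict:
        """Organize artifacts and read them through social, historical,
        and ethical context rather than returning them verbatim."""
        raise NotImplementedError

    def update(self, interaction: str, lived_conditions: dict) -> None:
        """Revise the portrait in real time from user interaction
        and changing conditions on the ground."""
        self.portrait["last_interaction"] = interaction
        self.portrait.update(lived_conditions)
```

The point of the skeleton is the loop itself: research, interpret, update, and repeat against changing conditions.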
The demo featured two personas:
- Amina, a refugee mother in Chad
- Abdalla, a paramilitary commander in Sudan
Visitors prompted them through a browser interface. Amina’s answers were grounded in a curated knowledge base and retrieval-augmented LLM setup—she scored 80% accuracy against real survey data. Abdalla’s responses, by contrast, were unvalidated and more speculative.
But let’s be clear: these weren’t agents in the technical sense. No goals, no planning, no memory. They were talking avatars built with HeyGen with visages lent by collaborators. The language of autonomy was borrowed; the underlying systems weren’t agentic.
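For readers who want a concrete picture, here is a minimal, hypothetical sketch of a retrieval-grounded persona of the kind described above. It is not the workshop team's code: the snippets, the keyword-overlap scoring, and the prompt format are all illustrative assumptions, and the language-model call itself is deliberately left out.

```python
# Minimal sketch of a retrieval-grounded persona, NOT the workshop team's code.
# Each turn is retrieve -> prompt -> respond, with no goals, planning, or memory.

from dataclasses import dataclass

@dataclass
class Snippet:
    source: str   # e.g. a survey report or interview transcript (illustrative)
    text: str

KNOWLEDGE_BASE = [
    Snippet("survey_2024", "Households in the camp report walking long distances for water."),
    Snippet("interview_03", "Many mothers describe interrupted schooling for their children."),
]

def retrieve(question: str, k: int = 2) -> list[Snippet]:
    """Toy keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda s: len(q_words & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt. In the demo an LLM would complete it;
    the call is left out here to keep the sketch self-contained."""
    context = "\n".join(f"- ({s.source}) {s.text}" for s in retrieve(question))
    return (
        "You are Amina, answering only from the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("How do children in the camp get to school?"))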
Yet the impact wasn’t in their architecture—it was in the conversations they sparked.
One of the researchers who helped shape Abdalla's persona offered a particularly moving reflection during the workshop. Originally from Cameroon, he emphasized how personal this project was to him—rooted in the realities of the Sahel:
This project to me is very, very important because I’m originally from Cameroon. And the Sahel region is a hotspot for lots of terrorist activity. One of the reasons why young men might be attracted to terrorist groups is because they offer them a sense of solidarity, protection, and even cultural solidarity. […] In my studies, I researched and discussed something called the Fulani question. A Muslim person in Mali might identify more with a Muslim person from Niger. So a lot of issues here also stem from cultural and ethnic disputes.
His words underscored the complexity of the underlying social fabric that personas like Abdalla attempt to simulate—and why any effort toward authentic representation must begin with deep cultural context. But as Daniel Naujoks (Lecturer of International and Public Affairs, Director of International Organization and UN Studies Specialization, Columbia University) cautioned, “Having one voice stand in for millions is not participation—it’s misrepresentation.” The tension between nuance and legibility, between voice and flattening, was central to the debate.
What was really asked
This wasn’t a product showcase. It was a stress test.
The personas spoke with practiced fluency; the room answered back with sharpness.
🧠 What the chat actually interrogated:
1. Who is this for?
“These personas could be useful for training—especially for young officers preparing to negotiate in crisis zones,”
said Col. Matt Dawson (U.S. Army, Columbia University). “But how accurate do they need to be before they’re valid?”
2. How is bias being replicated?
“If these avatars reflect reports we wrote, are we just talking to ourselves?”
Asked anonymously in chat, the question went straight to the echo-chamber risk of institutional AI.
3. Where’s the agency?
“These aren’t agents, they’re stylized retrieval interfaces.”
Multiple voices called out the semantic drift—“agent” carries weight that these tools don’t bear.
4. What about cultural nuance?
“You’ve captured language. But have you captured culture?”
Peacekeeping and crisis response folks flagged dialect, trust dynamics, and contextual emotion as missing.
5. Are these tools for simulation—or reflection?
“These agents might be best as training mirrors—not replacements.”
Many attendees pointed to their value as internal tools for checking practitioner bias.
6. What do we owe the people these personas portray?
“Who gets to approve Amina’s voice? What if the community disagrees?”
UNHCR staff and disability advocates raised deep concerns around consent, voice, and erasure.
The analog premise under fire
The framing leaned on a familiar design logic: personas help us humanize data.
But this wasn’t a design sprint. This was humanitarian response, identity politics, and AI ethics—colliding.
“Why build a synthetic refugee, when refugees are right here—capable of speaking for themselves?”
asked Grainne O’Hara, former UNHCR Director of International Protection.
It wasn't merely critique—it was a clear line drawn in the sand.
The value of the provocation
Credit where it’s due: Eduardo Albrecht and Liselott Hitz didn’t claim to solve anything. They offered a live proof-of-concept precisely so it could be challenged.
“This was built quickly, with very limited resources,” Hitz said.
“It’s not a product. It’s a demonstration—meant to surface these questions before they get bypassed.”
“If we don’t talk about this now,” said Eleonore Fournier-Tombs (UNU-CPR), “someone else will build it and deploy it without asking.”
That framing—a provocation before deployment—is what made the event important.
My position: It's complicated
I first met Eduardo at an AI Agents event in February at Betaworks, ahead of the release of his book Political Automation: An Introduction to AI in Government and its Impact on Citizens. We connected over the cultural and political implications of agentic systems. I later corresponded with Liz just before they entered the build phase, so I’d been eagerly anticipating this event—not just as a curious observer, but as someone invested in the terrain.
Our correspondence continued before and after the IASEAI conference on AI ethics and safety in Paris, where I was reflecting on similar themes.
Because I’ve been building similar things.
Not just my caregiving agent, but also synthetic personas and AI agents for client work—tools designed to hold memory, simulate dialogue, and adapt over time. And I’ve seen, again and again, how easy it is to flatten the lived into the legible.
These tools don't merely process data—they encode worldviews.
They mediate access, imply authority, shape who gets heard—and how.
The tech will evolve. But the deeper questions about representation, consent, and use will remain.
Where this goes next
As Jay Chaudhary of Doctors Without Borders asked, “In many tech orgs, you have dedicated trust and safety teams. Would we think about a similar type of function for integrating AI into humanitarian workflows?” The need for governance is not just technical—it’s ethical, procedural, and systemic.
- Co-design with affected communities from the beginning
- Transparent data logic, not just natural language polish
- Support for ambiguity and silence, not resolution
- Stronger ethical and institutional guardrails
Maybe the next tool isn't simply Ask Amina.
Maybe it’s: Ask who built Amina. Ask who she excludes. Ask what she can’t say.
One last thing
It takes guts to show something unfinished—especially in a space as high-stakes and risk-averse as humanitarian innovation. Eduardo and Liz made space for critique, not just applause.
That courage alone deserves acknowledgment.
May this be the beginning of real participation, not just better simulations.