Humanitarian organizations are increasingly turning to artificial intelligence to generate images for fundraising campaigns, raising ethical questions about the authenticity of appeals and the potential for exploiting stereotypes. A recent study highlights a growing trend of NGOs using AI-created photos and videos depicting victims to solicit donations.
The practice has sparked debate, with some critics labeling it “poverty porn” – a term echoing “food porn,” the trend of glossy, visually appealing culinary imagery. The study, published in The Lancet, details how the use of AI-generated imagery is becoming increasingly widespread.
Examples cited by researcher Arsenii Alenichev include a Plan International campaign against forced child marriage, utilizing AI-generated videos of pregnant and abused young girls, and a United Nations campaign addressing sexual violence in conflict zones, featuring fabricated survivor portraits.
Seeking the “Perfect” Image
Organizations are drawn to AI-generated images for two primary reasons: cost-effectiveness and the ability to protect the identities of those who have experienced trauma. This allows them to bypass the complex ethical considerations and logistical challenges of working with real individuals.
“The idea is to be able to create the perfect image – one that incorporates the ideal way to represent a victim, in the position you want, with the characteristics you want,” explains Valérie Gorin, director of training at the Centre d’études humanitaires at the University of Geneva. “You can ‘dictate’ your idealized scenario to the AI.”
Gorin also notes that AI helps circumvent the need for informed consent, a significant hurdle in humanitarian photography. “Organizations are obliged to obtain the consent of all people who are pictured. We know that consent raises a very large number of problems.”
Perpetuating Misery Stereotypes
Despite the benefits, critics argue that artificial images reinforce harmful stereotypes and encourage a voyeuristic approach to suffering, a phenomenon Alenichev terms “poverty porn 2.0.” He has identified approximately 100 AI-generated images used in online humanitarian campaigns that perpetuate these damaging clichés.
These images are readily available on stock photo sites like Adobe and Freepik, and are quickly disseminated across social media platforms. The ease of access and affordability make them particularly appealing to smaller charitable organizations.
A Focus on Children
Maria Gabrielsen Jumbert, a researcher at the Peace Research Institute Oslo (PRIO) and the Centre d’études et de recherches internationales (CERI) in Paris, describes the images as “absolutely stereotyped,” often depicting “children surrounded by disastrous hygienic conditions, sitting in the mud.”
The figure of the child appears frequently in these AI-generated images, as children are seen as evoking “innocence” and are therefore considered the “ideal” representation of vulnerability, according to Gorin. Women and the elderly are also prominently featured, while men are depicted far less often.
A Trap for NGOs?
Beyond the amplification of stereotypes, the use of AI raises concerns about the credibility of humanitarian organizations. Documentary photographer Niels Ackermann believes it presents a significant risk.
“When we talk about humanitarian work, we’re talking about real problems on the ground that we want to solve. And when you talk about a real problem with a false image, it opens the door to great mistrust,” Ackermann argues. This is particularly concerning given the current climate, where humanitarian organizations are increasingly facing scrutiny and accusations of lacking credibility.
He warns that using fabricated images could ultimately undermine the trust that is essential for effective aid work. “If they do it themselves, they are sawing off the branch they are sitting on, and it will break the trust.”
Radio report: Malika Nedir
Web adaptation: Julie Liardet