How Generative AI Can Support The Humanitarian Sector

Mahmoud AlSwedy
9 min read · Sep 7, 2023


“AI could be the moon landing of our generation, an inspiring scientific leap forward that brings huge benefits to mankind.” From “Applying artificial intelligence for social good” by McKinsey

The recent boom in generative AI tools, from content generation to image and video creation, draws attention to the opportunities AI can offer in other fields, particularly in situations where resources are limited and where speed and accuracy in decision-making are critical to saving lives. Though still a nascent technology, it has the potential to make a significant impact and may revolutionize the way we tackle challenges in international development.

Generative AI is a type of artificial intelligence that can create new content, including images, text, music, video, code, and synthetic data. Generative models are typically trained on large datasets of existing material and then produce new examples that reflect the patterns in that training data.

As humanitarian organizations deal with mountains of data, generative AI models can sift through it and produce useful outputs for tasks such as report writing, data analysis, drafting routine letters, project planning and management, checking compliance with rules and regulations, and automating routine tasks in HR and finance.
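
To make this concrete, here is a minimal sketch of how a field office might condense a batch of situation reports into a short briefing. It assumes access to the OpenAI Python client; the model name, prompt, and the `field_reports` list are illustrative placeholders, not part of any existing UN or NGO system:

```python
# pip install openai
# A minimal sketch: summarizing field reports into a short briefing with a
# general-purpose generative model. Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

field_reports = [
    "Distribution point A served 1,200 households; fuel shortages delayed two trucks.",
    "Distribution point B reports rising malnutrition admissions among children under five.",
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice; any capable text model would do
    messages=[
        {"role": "system",
         "content": "You summarize humanitarian field reports into a concise briefing "
                    "with key risks and recommended follow-up actions."},
        {"role": "user", "content": "\n".join(field_reports)},
    ],
)

print(response.choices[0].message.content)
```

Any output from a sketch like this would still need human review before it reaches a donor or a decision-maker; the model drafts, the officer verifies.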

For large organizations, particularly those with interagency ties to other organizations or donors, generative AI can address many of the problems caused by the lack of API-level integration between their different ERP and legacy systems. That gap leads to duplicated work on the same task at both ends, unnecessary delays, and financial loss. Generative AI can mitigate such issues through purpose-built models designed for specific tasks, such as reconciling staff benefits and entitlements, or budget expenditure related to projects and field missions. Deployed among trusted partners, such a solution would eliminate unnecessary paperwork, lengthy email threads, and long wait times, and would streamline business processes and improve transparency.
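
As a rough illustration of that reconciliation idea, the sketch below compares two partners' expenditure records by entry ID and flags mismatches; a generative model could then draft the follow-up note to the counterpart. The record structure, field names, and tolerance are assumptions for the example, not a description of any agency's actual data:

```python
# A minimal sketch of cross-partner reconciliation: flag entries where the two
# organizations disagree, then hand the discrepancies to a generative model to
# draft a follow-up note. Field names and tolerance are illustrative assumptions.

agency_ledger = {"TRV-001": 1520.00, "TRV-002": 840.50, "TRV-003": 310.00}
partner_ledger = {"TRV-001": 1520.00, "TRV-002": 790.50}  # TRV-003 missing on partner side

TOLERANCE = 0.01  # acceptable rounding difference in the shared currency

def reconcile(ours: dict, theirs: dict) -> list[str]:
    """Return human-readable discrepancy descriptions between two ledgers."""
    issues = []
    for entry_id, amount in ours.items():
        if entry_id not in theirs:
            issues.append(f"{entry_id}: recorded {amount:.2f} on our side, missing on partner side")
        elif abs(amount - theirs[entry_id]) > TOLERANCE:
            issues.append(f"{entry_id}: we have {amount:.2f}, partner has {theirs[entry_id]:.2f}")
    return issues

for line in reconcile(agency_ledger, partner_ledger):
    print(line)
# The list of discrepancies could then be passed to a text model (as in the
# earlier summarization sketch) to draft the reconciliation email automatically.
```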

Source: Ten ways AI can be used for good. Deloitte AI Institute.

AI and social good, opportunities, risks, and bottlenecks

McKinsey published valuable research on how AI can support the humanitarian sector. Though the study appeared in 2018 (it’s funny how distant five years can feel!), it remains very relevant today. The paper mapped 160 AI social-impact use cases across 10 domains, noting that those use cases would “continue to evolve along with the capabilities of AI”, which is exactly what has happened recently.

Mapping AI use cases to domains of social good. From “Applying artificial intelligence for social good” research by McKinsey

The paper found that AI solutions are particularly relevant in four major domains: health and hunger, education, security and justice, and equality and inclusion. Solutions built for these domains usually target large populations, which increases how often they can be used and, in turn, magnifies the benefits society can draw from them.

“Four main categories of risk are particularly relevant when AI solutions are leveraged for social good: bias and fairness, privacy, safe use and security, and “explainability” (the ability to identify the feature or data set that leads to a particular decision or prediction).”

From “Applying artificial intelligence for social good” research by McKinsey

But on the road to getting the most out of AI tools, the research identified 18 potential bottlenecks, grouped into four categories by criticality. If not addressed carefully, these bottlenecks can obstruct an organization’s adoption of AI. The most significant ones were data accessibility, a shortage of talent to develop AI solutions, and last-mile implementation challenges. Data accessibility remains an issue today, as several barriers around data still hinder the development of in-house AI tools. These barriers range from external ones, such as privacy concerns and regulations on data use that vary (and sometimes conflict) from one country to another, to internal ones, such as the volume, quality, and types of datasets required to train a model.

Bottlenecks limiting the use of AI for societal good. From “Applying artificial intelligence for social good” research by McKinsey

UN interest in AI

The United Nations recognized the importance of AI early on and launched UN Global Pulse in 2009, an innovation initiative to promote the use of big data and artificial intelligence in the development and humanitarian sectors. UNGP now has four dedicated innovation labs around the world. One of its most recent projects, a collaboration between the UN Human Rights Office and Dataminr, developed an AI model for detecting threats and attacks against human rights defenders.

Artificial intelligence was also one of the eight key areas identified in the UN Secretary-General’s Roadmap for Digital Cooperation, announced in 2020. The Inter-Agency Working Group on Artificial Intelligence followed in 2021, created to facilitate knowledge sharing among UN agencies and support capacity-building activities for developing countries.

The UN, through the ITU, recently organized the AI for Good Global Summit. In his message to the summit, the UN Secretary-General stressed the pivotal role AI can play in solving the world’s chronic development problems.

“But AI also has the potential for enormous good. Its powerful tools could drive forward the 2030 Agenda and the Sustainable Development Goals:

By making a massive leap in healthcare and eradicating diseases that affect millions;

By transforming education and empowering people everywhere to build a better future.

The AI for Good Global Summit, convened by ITU, recognizes the joint responsibility of governments, the private sector, United Nations agencies, academia and others to ensure AI reaches its full potential — while preventing and mitigating harms.”

António Guterres, UN Secretary-General, message to the AI for Good Global Summit 2023

UN agencies and AI, vision is key

Since the announcement of the UN Secretary-General’s Roadmap for Digital Cooperation in 2020 (and a little earlier for a few agencies), several UN agencies have launched innovation initiatives to promote the adoption of AI-based solutions.

One of the UN agencies that has been experimenting with AI solutions is the WFP. Over the last few years, WFP has deployed several data-driven solutions to address the complex needs it faces while providing assistance to people around the world. Three of those solutions, Optimus, SKAI, and HungerMap LIVE, are prime examples of applying AI in humanitarian aid. Optimus is a system that uses AI to optimize the delivery of food aid to people in crisis-affected areas; it has been used in 44 of WFP’s operations since 2017. SKAI, the result of a collaboration between WFP and Google Research, uses AI and satellite imagery to provide real-time insights and actionable intelligence for effective decision-making during disaster response. SKAI has proved to be a very effective tool, including during natural disasters in South Africa, the USA, and Pakistan, and after the earthquake that struck Turkey and Syria last February.

The UN is also leading a data and AI initiative called “Data Insights for Social & Humanitarian Action” (Disha), which “is a multi-partner initiative that aims to accelerate ethical and responsible access to data and artificial intelligence (AI) solutions to unlock social impact at scale.” This promising project will use SKAI as a core technology to assess the damage caused by natural disasters and conflicts.

Source: Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks, research by Google
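
To give a sense of what this kind of model looks like, here is a minimal PyTorch sketch of a classifier that stacks pre- and post-disaster satellite patches as input channels and predicts whether a building is damaged. The architecture, shapes, and example data are illustrative assumptions only; they are not SKAI's actual design:

```python
# pip install torch
# A minimal sketch of CNN-based building damage classification from paired
# pre/post-disaster satellite patches. Architecture and shapes are illustrative
# assumptions, not the actual SKAI model.
import torch
import torch.nn as nn

class DamageClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # 6 input channels: RGB of the pre-disaster patch + RGB of the post-disaster patch
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # classes: undamaged, damaged

    def forward(self, pre: torch.Tensor, post: torch.Tensor) -> torch.Tensor:
        x = torch.cat([pre, post], dim=1)   # (batch, 6, H, W)
        x = self.features(x).flatten(1)     # (batch, 64)
        return self.classifier(x)           # (batch, 2) logits

# Example forward pass on random 64x64 patches standing in for real imagery.
model = DamageClassifier()
pre = torch.randn(4, 3, 64, 64)
post = torch.randn(4, 3, 64, 64)
print(model(pre, post).shape)  # torch.Size([4, 2])
```

In a real pipeline the patches would come from co-registered before/after imagery over building footprints, and the labels from human annotators, which is where most of the work lies.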

WFP has also developed HungerMap LIVE, an AI-powered, near real-time interactive world map that uses satellite imagery and machine learning to track the food security situation in more than 90 countries around the world.

HungerMap LIVE interactive map uses near real-time metrics to show the food security situation at both national and subnational levels. Photo: WFP
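
As a toy illustration of the kind of roll-up a map like this relies on, the sketch below combines subnational estimates of insufficient food consumption into a population-weighted national figure. The region names, populations, and prevalence values are invented for the example and do not come from WFP data:

```python
# A toy sketch: rolling subnational prevalence estimates up to a national,
# population-weighted figure. All numbers below are invented for illustration.
regions = [
    {"name": "Region A", "population": 2_000_000, "insufficient_food_prevalence": 0.18},
    {"name": "Region B", "population": 1_500_000, "insufficient_food_prevalence": 0.32},
    {"name": "Region C", "population": 500_000,  "insufficient_food_prevalence": 0.45},
]

total_population = sum(r["population"] for r in regions)
people_affected = sum(r["population"] * r["insufficient_food_prevalence"] for r in regions)
national_prevalence = people_affected / total_population

print(f"Estimated people with insufficient food consumption: {people_affected:,.0f}")
print(f"National prevalence: {national_prevalence:.1%}")
```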

To help governments evaluate their AI readiness, UNDP developed the AI Readiness Assessment (AIRA) as part of its “Making AI Work For Us” initiative. AIRA uses indicators such as vision, governance, ethics, innovation, infrastructure, data availability, human capital, inclusivity, transparency, and accountability. Whether a government is still considering adopting AI solutions or is already running a national AI program, AIRA can assess its readiness level and benchmark the progress of its program.
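
As a rough sketch of how an assessment like this might be tallied, the snippet below combines scores on those indicator dimensions into a single weighted readiness figure. The weights, scores, and scale are invented for illustration; AIRA's actual methodology may differ entirely:

```python
# A rough, invented sketch of aggregating readiness indicators into one score.
# Indicator names follow the article; weights and scores are illustrative only.
indicator_scores = {          # each scored 0.0 (absent) to 1.0 (mature)
    "vision": 0.7, "governance": 0.5, "ethics": 0.4, "innovation": 0.6,
    "infrastructure": 0.5, "data availability": 0.3, "human capital": 0.4,
    "inclusivity": 0.6, "transparency": 0.5, "accountability": 0.5,
}
weights = {name: 1.0 for name in indicator_scores}  # equal weights, purely an assumption

weighted_total = sum(indicator_scores[k] * weights[k] for k in indicator_scores)
readiness = weighted_total / sum(weights.values())

print(f"Overall readiness score: {readiness:.2f} out of 1.00")
weakest = min(indicator_scores, key=indicator_scores.get)
print(f"Weakest dimension to prioritize: {weakest}")
```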

The UN Refugee Agency (UNHCR) launched an internal initiative called the “Data Innovation Impact Fund”, open only to staff teams with innovative ideas for solutions that can benefit from AI and data analytics (teams can partner with external parties, but the staff team remains fully responsible for the project).

UNICEF, on the other hand, pursued a different approach and published its Policy guidance on AI for children in 2020 (updated in 2021) as part of its AI for children project, which offers recommendations and principles for policymakers and enterprises to create child-centered AI policies and systems. UNICEF also launched the “Generation AI” initiative to set and lead the global agenda on AI and children, in partnership with multiple stakeholders including Microsoft and UC Berkeley; the initiative also promotes building AI-powered solutions that help realize and uphold child rights. Through its Innovation Venture Fund, UNICEF has made 24 investments in 15 countries in startups using data science and AI to solve global challenges that affect children.

UNICEF artificial intelligence for children (full version)

NGOs and AI, a history of reluctance and a future of opportunities

Though AI offers a huge opportunity for NGOs of all sizes in terms of cost and time savings, expanded knowledge, and impact, very few NGOs have experimented with AI tools and used them to solve real-world problems, and even then the number, scale, and extent of those experiments, including their partnerships with UN agencies, remain very limited. Limited AI awareness and constrained resources are among the main hurdles NGOs face when considering AI solutions. Janti Soeripto, President & CEO of Save the Children US, in a recent interview with impACT leadership by Omdena, said the humanitarian sector “has for very long time underinvested in data and technology for all kinds of understandable reasons but essentially we are really coming from behind”. She stressed the importance of weighing up the risks and opportunities when choosing and deploying AI and machine learning tools, and the need to understand how to “blend technology with some of the analog activities we already have been doing”.

However, the potential of AI at large, and of generative AI in particular, in the humanitarian and international development sectors remains largely untapped. This is what makes initiatives by big tech companies such as Google, Microsoft, Amazon AWS, IBM, and others so critical in raising awareness of the importance of AI solutions and in removing some of the roadblocks that keep humanitarian organizations from adopting these tools. There is a growing number of collaboration opportunities and incentives for NGOs and social entrepreneurs to create innovative AI-based solutions for humanitarian assistance.

“While AI is not a silver bullet or cure-all, the technology’s powerful capabilities could be harnessed and added to the mix of approaches to address some of the biggest challenges of our age, from hunger and disease to climate change and disaster relief.”

From “Applying artificial intelligence for social good” research by McKinsey

One of the key lessons from the long list of failed digital transformation initiatives is that unless AI is hardwired into an organization’s systems, it won’t be possible to fully utilize it or attain its wide range of benefits. That’s why AI needs to be embedded in the organization’s systems, staff must be trained to use these tools in their everyday work, and donors and partners should be made aware of the benefits of generative AI tools. At the same time, there should be a thorough analysis of the risks, limitations, and nuances of using generative AI models across the different domains of the humanitarian sector, as there is a huge difference between the risks of deploying an AI tool for water systems or economic evaluation and the type and level of risk involved in deploying a generative AI tool for human rights-related issues.

As the technology matures beyond its current early stage, our understanding of its pros and cons will evolve, and the world will converge on the guardrails needed to address the ethical and privacy concerns associated with generative AI models, while making sure such rules do not restrict innovation or thwart creativity.
