
In humanitarian crises, AI poses risks such as harm from poor-quality training data, algorithmic bias, lack of transparency in decision-making, and concerns over data privacy4. Left unaddressed, these risks can inadvertently exclude populations in need of assistance and exacerbate existing vulnerabilities. Safeguards and adherence to the "do no harm" principle are therefore crucial when deploying AI in crisis situations.
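
To make the exclusion risk concrete, here is a minimal sketch of the kind of pre-deployment fairness audit a humanitarian team might run on an AI triage model. Everything in it is illustrative: the group labels, the model outputs, and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions for the example, not details from the source.

```python
# Sketch of a group-level exclusion check on hypothetical model outputs.
from collections import defaultdict

def selection_rates(predictions):
    """Per-group rate of being flagged for assistance.

    predictions: list of (group_label, was_selected) pairs.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, chosen in predictions:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outputs: (group, flagged for assistance by the model).
preds = ([("urban", True)] * 80 + [("urban", False)] * 20
         + [("rural", True)] * 45 + [("rural", False)] * 55)

rates = selection_rates(preds)          # {'urban': 0.8, 'rural': 0.45}
ratio = disparate_impact_ratio(rates)   # 0.45 / 0.8 ~= 0.56

print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb; an assumed threshold here
    print("Warning: model may systematically exclude a population group.")
```

In this invented example the rural group is flagged for assistance at barely half the urban rate, exactly the kind of silent exclusion the audit is meant to surface before a system goes live.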

AI has significantly aided Mozambique's emergency response, where AI-supported analysis of drone imagery has enhanced disaster mapping and damage assessment. The technology has enabled faster and more accurate assessment of structural damage, identification of affected areas, and prediction of potential risks, thereby supporting better decision-making and resource allocation during emergencies.
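
As an illustration of how such mapping output can feed resource allocation, the sketch below aggregates per-tile damage scores (the kind of output an imagery classifier might produce) into an area-level priority ranking. The area names, scores, and population figures are invented for the example; this does not describe the actual system used in Mozambique.

```python
# Sketch: turn hypothetical per-tile damage scores into response priorities.
from dataclasses import dataclass

@dataclass
class Tile:
    area: str            # administrative area the image tile falls in
    damage_score: float  # model-estimated probability of structural damage
    population: int      # estimated people living within the tile

def priority_ranking(tiles):
    """Rank areas by expected number of people in damaged structures."""
    exposure = {}
    for t in tiles:
        exposure[t.area] = exposure.get(t.area, 0.0) + t.damage_score * t.population
    return sorted(exposure.items(), key=lambda kv: kv[1], reverse=True)

# Invented inputs standing in for classifier output over drone imagery.
tiles = [
    Tile("Area-North", 0.92, 1200),
    Tile("Area-North", 0.75, 800),
    Tile("Area-South", 0.30, 2000),
    Tile("Area-East", 0.55, 1500),
]

for area, score in priority_ranking(tiles):
    print(f"{area}: estimated affected exposure = {score:.0f}")
# Area-North: 1704, Area-East: 825, Area-South: 600
```

Weighting damage scores by population is one simple design choice among many; a real pipeline would also need ground-truth validation, uncertainty estimates, and human review before directing resources.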

The "do no harm" imperative for AI entails that humanitarian actors should consider the potential ways in which their actions or omissions may inadvertently cause harm or create new risks for the populations they intend to serve. This principle of non-maleficence should be paramount to all deployment of AI systems in humanitarian action, ensuring that AI technologies do not introduce unnecessary risks to already vulnerable populations.