
Artificial intelligence is often framed in terms of speed, efficiency, and new consumer experiences. But its deeper value may lie elsewhere: in whether it can support the people and organisations working under the greatest pressure, with the smallest margin for error.
In this talk, Carlo Rapisarda of The Good AI Lab and Sofia Giannotti of Médecins Sans Frontières (MSF) Italy use their collaboration as an entry point into a broader conversation about responsible AI in humanitarian and public-interest contexts. Grounded in MSF’s core principles of humanity, impartiality, neutrality, and independence, the conversation explores what it means to design and adopt technology in environments where transparency, accountability, and respect for human dignity must remain central.
Drawing on practical experience in staff enablement, strategic roadmap design, and the evaluation of concrete use cases, the speakers reflect on what adoption looks like when the goal is not novelty but usefulness, and not automation for its own sake but meaningful support for human judgment. The talk surfaces the ethical challenges surrounding AI and offers a critical perspective on how it can serve humanitarian action when governance, operational realities, and ethical responsibility guide innovation. In a rapidly changing world, how can we ensure AI serves the right ends?
Pier Luigi Dovesi is Co-Founder and Chair of The Good AI Lab, an independent AI lab working on foundational AI research and innovation projects for social good, in partnership with Doctors Without Borders (MSF), FMSI, and several universities across Europe.
Pier leads the Robotics and Autonomous Systems team at AMD Silo AI, working on world action models, multimodal AI, and embodied intelligence. His research spans domain adaptation, self-supervised learning, and real-time perception, with recent work extending toward vision-language models. He has published at CVPR, ECCV, ICCV, 3DV, BMVC, and ICRA.