Episodes

  • Matthew Brown from Profound on Agentic Search Engine Optimization
    2025/07/30
    In a world where AI-powered question-answering interfaces and AI agents are becoming the new informational gatekeepers, how can humanitarian organizations adapt their communications strategies to stay visible, credible, and prominent? In this episode of Humanitarian AI Today, guest host Roderick Besseling, Head of the Data and Analytics Unit at the Norwegian Refugee Council, speaks with Matthew Brown from Profound, a startup that helps companies track, control, and optimize their marketing and communications content for the agentic internet. Joined by Lucy Hall, a Data and Evidence Specialist from Save the Children's Humanitarian Leadership Academy, and Brent Phillips, Roderick and Matt discuss a critical challenge facing the humanitarian sector: artificial intelligence is upending the world of search, simultaneously disrupting the industry and transforming the very nature of how we access information. This disruption forces a pivotal choice: organizations must adapt their communication strategies or risk becoming invisible to the donors and communities they serve. Matt explains how Profound helps companies and organizations analyze their "AI visibility" by tracking how, when, and in what context their brand is mentioned by question-answering interfaces like ChatGPT, Gemini, and Perplexity. He explains how Answer Engine Optimization, or Agentic Engine Optimization (AEO), works and how Profound can help organizations learn to generate the high-quality, semantically rich, and well-structured content that AI agents favor, ensuring that their communications are not just seen but recognized as trustworthy and reliable. The conversation also explores how AEO can support the localization agenda within the humanitarian sector. Matt argues that this technological shift can "level-up the playing field," giving local and grassroots organizations a better chance at visibility. Because AEO prioritizes well-structured, helpful content over large budgets and traditional SEO tactics, smaller organizations with fewer resources have a new opportunity to be discovered, ensuring their vital work is visible to donors and partners from the community level all the way up to large UN agencies. Episode notes: https://medium.com/humanitarian-ai-today/matthew-brown-from-profound-on-agentic-search-engine-optimization-aeo-for-humanitarian-75ba0e6560c6
    42 min
  • Pradyumna Chari Introduces Project NANDA to Humanitarian Organizations
    2025/07/24
    In this episode of Humanitarian AI Today, guest host Doug Smith, Acting CEO of Data Friendly Space, speaks with Pradyumna Chari, a postdoctoral associate at the MIT Media Lab, about Project NANDA. This initiative is building the foundational layer for an internet of AI agents through a broad coalition of academic institutions, major technology corporations, specialized AI startups, and the open-source community. Pradyumna explains how components like the NANDA Index create a "handshake layer" for intelligent agents to discover, coordinate, and transact with each other. This system is designed to shape the future of knowledge sharing, enabling agents to transact in privacy-preserving "intelligence" and "insights" rather than raw data. Doug and Pradyumna explore how this unlocks the potential for a "mesh" of interconnected agents to revolutionize humanitarian response. With this technology in a formative stage—much like the early World Wide Web—the humanitarian community has a critical opportunity to help shape its infrastructure. Tune in to learn how your organization can get involved and ensure this powerful new ecosystem is built to meet humanitarian needs from the ground up. Episode notes: https://humanitarianaitoday.medium.com/pradyumna-chari-introduces-project-nanda-to-humanitarian-organizations-333478f5f049
    30 min
  • Andre Heller and Mala Kumar Discuss Signpost’s Pilot AI Assistant
    2025/07/22
    In this episode of Humanitarian AI Today, guest host Mala Kumar, Head of Impact at Humane Intelligence, sits down with Andre Heller, Director of Signpost at the International Rescue Committee (IRC). They discuss Signpost's recent research paper on piloting an "information assistant," detailing the technical architecture, evaluation methods, and lessons learned from the project. The conversation also addresses the significant challenges facing the sector, including a funding crisis that has impacted the pace, scale, and scope of critical research on humanitarian applications of artificial intelligence being carried out across the humanitarian community. In the midst of this crisis and the explosive growth of AI, Andre and Mala emphasize the need for more rigorous, scientific, and collaborative approaches to AI development and evaluation. They speak in detail about open-source AI and what it means to the humanitarian community. And Mala explains how her organization, Humane Intelligence, is working to professionalize AI evaluation by creating a community of practice around fair and representative algorithmic auditing. She describes their work in red teaming, conducting "bias bounties," and the future development of open-source "evaluation cards" to make evaluation methodologies more transparent and reusable. Episode notes: https://medium.com/humanitarian-ai-today/andre-heller-and-mala-kumar-discuss-signposts-pilot-ai-assistant-903b5b8ab787
    44 min
  • Beyond the Summit: A Push for Real AI Collaboration from Blaise Robert
    2025/07/21
    On this episode of the Humanitarian AI Today podcast, Blaise Robert, Global AI Advisor for the International Committee of the Red Cross (ICRC), joins producer Brent Phillips to discuss his takeaways from the AI for Good Summit, specifically the need for more meaningful collaboration around artificial intelligence. Blaise observes that organizations are still duplicating their efforts to a large degree and could move faster by better sharing their lessons learned. He explores what it would take to elevate collaboration to the next level and truly integrate it into daily work. Tune in to hear his call to action for the humanitarian community: to be open about what works, what doesn't, and the hurdles along the way, so that successes can be shared by all. The conversation also touches on several other critical areas. Blaise details the ICRC's practical AI projects and how the ICRC is acting on its "responsibility to be more collaborative" by publicly publishing its AI policy and technology strategy as a step toward greater transparency. This approach is vital for turning the vast knowledge accumulated across the sector into actionable intelligence, ensuring that lessons learned from one project can inform the design of the next. He addresses the serious concerns around "digital harm," the ethics of data used to train AI models, and the use of AI in warfare, including autonomous weapons and military decision support. Finally, he discusses the careful balance the ICRC must strike in its relationships with major tech companies to maintain its core principles of independence and neutrality. Blaise and Brent also discuss emerging AI-powered search tools like Perplexity's new browser, Comet, and the use of large language models to make internal knowledge more accessible. While Blaise acknowledges the "large potential" for such tools in transforming tasks like project evaluation, he also stresses that they must be framed within strong policy and governance frameworks to ensure proper human oversight and responsible use. Episode notes and transcript: https://medium.com/humanitarian-ai-today/beyond-the-summit-a-push-for-real-ai-collaboration-from-blaise-robert-362ff41bb9d3
    35 min
  • Giulio Coppi on Defending Digital and Human Rights in the AI Age
    2025/07/13
    Giulio Coppi, Senior Humanitarian Officer at Access Now, an organization that fights for human rights in the digital age, speaks with Humanitarian AI Today guest host Brent Phillips. They discuss protecting digital rights in situations of conflict and violence and critically examine ethical dilemmas facing the humanitarian community surrounding the collection and use of data and applications of AI. Giulio and Brent discuss data sharing, information integrity, the role of private tech in humanitarian response, public-private sector engagement, corporate commitment to human rights and the involvement of AI companies in building surveillance and defense verticals during a time when human and digital rights are being dangerously undermined and attacked by state and non-state actors. Interview notes: https://medium.com/humanitarian-ai-today/giulio-coppi-on-defending-digital-and-human-rights-in-the-ai-age-325a44b818b3
    34 min
  • Aral Surmeli on HERA’s Frontline Experimentation with AI
    2025/07/06
    How can humanitarian organizations harness the power of AI to deliver critical healthcare when the technology is advancing faster than ever, but essential aid funding is being cut? Aral Sürmeli, Executive Director at HERA Digital Health, discusses this question with Humanitarian AI Today guest host and podcast producer Brent Phillips, sharing insights and lessons from his team's frontline experiments with AI. Aral and Brent also discuss the AI for Good Summit, explainable AI and AI research.
    31 min
  • A New Ethical AI Adoption Toolkit for Humanitarian Actors
    2025/06/29
    On this special short episode of Humanitarian AI Today, guest host Brent Phillips sits down with Tigmanshu Bhatnagar, a lecturer at University College London (UCL), and Hamdan Albishi, a UCL MSc student in AI for Sustainable Development. Tigmanshu and Hamdan discuss a toolkit they are developing, designed to empower non-technical humanitarian actors to build their own ethical AI projects. They walk through the toolkit's four-phase process—Reflection, Scoping, Feasibility Assessment, and Development—which guides users from an initial idea to a simulated, ethically sound AI project without needing deep technical expertise. Toolkit users define a problem, identify beneficiaries, and consider potential unintended harm. The tool presents existing use cases and projects in the same problem area to educate the user. The toolkit helps users assess project feasibility based on resources and regulations. It can also suggest publicly available humanitarian datasets and helps check them for completeness and bias to avoid unintended harm. The tool suggests appropriate technical solutions, generates a project with embedded ethical guardrails, and runs it in a simulated environment to validate its accuracy and impact before real-world deployment. This initiative emerged from a UK Humanitarian Innovation Hub (UKHIH) and Elrha-funded project, which found that humanitarian organizations, despite their commitment, faced a steep learning curve in creating tangible AI solutions. The toolkit addresses AI adoption challenges and aims to help humanitarian actors, regardless of their technical background, develop responsible AI projects.
    29 min
  • André Heller on Piloting Agentic AI and Client-facing Applications at Signpost
    2025/06/23
    André Heller, Director of Signpost, an initiative led by the International Rescue Committee, updates Humanitarian AI Today podcast producer Brent Phillips on Signpost’s latest AI publications and research findings. André and Brent discuss a new Signpost paper covering their work piloting agentic AI and client-facing applications and touch on complications caused by the aid funding crisis. This short episode was recorded to lay groundwork prior to recording a more formal interview featuring André and Mala Kumar, Head of Impact with Humane Intelligence, covering Signpost’s latest work in greater detail.
    18 min