Episodes

  • Dialogue with Market Actors: Cooperation for Better AI Oversight
    2026/03/10

    Supervision cannot succeed without structured engagement with developers, deployers, and industry partners. In this episode, Huub Janssen discusses best practices for market dialogue, transparency expectations, and collaborative mechanisms that support compliance while fostering innovation.

    The episode highlights how oversight bodies can build trust and shared responsibility across the AI ecosystem.

    Speaker: Huub Janssen (RDI)
    Interviewer: Dafna Feinholz, Director a.i. of the Division & Chief of Section for Bioethics and Ethics of Science and Technology, UNESCO



    27 min
  • Cybersecurity for AI Supervision: Protecting Systems, Data, and Institutions
    2026/03/03

    As AI systems introduce new attack surfaces, cybersecurity becomes a foundational element of oversight. Carlos Antunes outlines key threat vectors, resilience strategies, and practical measures supervisory authorities can implement.

    This episode gives listeners a clear roadmap for integrating cybersecurity considerations into AI supervision workflows.

    Speaker: Carlos Antunes (Portugal's National Cybersecurity Agency)
    Interviewer: Yannic Duller, Project Consultant, Ethics of AI Unit, UNESCO


    34 min
  • Red Teaming as a Supervisory Tool: Stress-Testing AI Systems
    2026/02/24

    Red teaming is rapidly becoming a critical component of AI oversight. In this episode, Rumman Chowdhury explains how structured adversarial testing can uncover system vulnerabilities, model failures, and misuse pathways.

    The discussion focuses on practical red-teaming approaches that supervisory authorities can adopt, even with limited resources.

    Speaker: Rumman Chowdhury (Humane Intelligence)
    Interviewer: Mirela Kmetic-Marceau, Project Consultant, Ethics of AI Unit, UNESCO


    27 min
  • Evaluating AI Systems: Metrics, Methods, and Measurement Gaps
    2026/02/17

    A deep dive into the metrics and methodologies essential for robust AI evaluations. Agnès Delaborde examines measurement challenges, standards alignment, and the tools supervisory authorities need to assess AI system performance.

    The conversation highlights gaps between emerging benchmarks and real-world regulatory needs.

    Speaker: Agnès Delaborde (Laboratoire national de métrologie et d'essais – LNE)
    Interviewer: Lihui Xu, Programme Specialist, Ethics of AI Unit, UNESCO


    33 min
  • Mapping AI Risks: From Principles to Practice
    2026/02/10

    This episode explores how supervisory authorities can translate high-level AI risk principles into practical, operational risk-mapping processes. Nathalie Cohen discusses evaluation frameworks, data considerations, and real-world challenges identified during the roundtable exercise.

    The conversation offers regulators concrete steps for structuring risk identification and prioritisation.

    Speaker: Nathalie Cohen (OECD)
    Interviewer: Max Kendrick, AI Strategy Coordinator & Senior Advisor, Office of the Director-General, UNESCO



    36 min