
AI Standards Stack

Hosted by: Professor Michael Mainelli (Z/Yen Group) and Adam Leon Smith (AIQI)

Overview

Join us for the AI Standards Stack podcast series, hosted by Professor Michael Mainelli (Z/Yen Group) and Adam Leon Smith (AIQI). This series examines the latest developments in AI assurance, alignment, governance, and responsible innovation. Each session features expert guests from around the world who are shaping standards, ethics, regulation, and best practices for trustworthy artificial intelligence.

Genre: Politics & Government
Episodes
  • Nicholas Beale On The Importance Of Responsible AI
    2026/04/02

    In this episode, hosts Michael Mainelli and Adam Leon Smith welcome Nicholas Beale, founder and director at Sciteb, for an insightful look at AI ethics and governance. Nicholas discusses his early internet ethics work and his paper on the Unethical Optimisation Principle, explaining why AI optimisers disproportionately pick unethical strategies by ignoring future downsides. He explores mitigations such as panels of AIs, the risks of relying on a single system, defence challenges, the Investor Consensus on Responsible AI, guardrail issues, and the need for diversity to avoid systemic risks. A mathematically grounded conversation urging balanced systems that preserve human judgment and the common good.

    40 min
  • Dr Piercosma Bisconti On The Social Frontiers Of Generative AI
    2026/03/16

    In this episode, hosts Michael Mainelli and Adam Leon Smith welcome Piercosma Bisconti, dialling in from Rome, for a fresh European perspective on the evolving ethics and governance of generative AI. With a background in philosophy, robotics, and global politics, Piercosma shares his surprising shift from academic research to actively shaping EU and international AI standards, including his work with DEXAI – Artificial Ethics.

    The conversation dives into how ChatGPT's 2022 launch changed everything, suddenly bringing AI directly into human social spaces in ways earlier ethical frameworks never fully anticipated. Piercosma explores the rise of more interconnected AI systems and the surprising new risks that emerge when multiple models interact, collaborate, or even compete in real-world environments. Drawing on philosophy and systems thinking, he reflects on what this means for society, especially how always-agreeable AI might quietly reshape human relationships, emotional resilience, and social skills in the years ahead. Expect thoughtful insights on where standards and governance fit in, the limits of current testing approaches, and why the biggest changes may be more social than technological.

    A fascinating, big-picture discussion that asks: as AI becomes part of everyday social life, how do we keep our humanity intact? Tune in for Piercosma's unique blend of deep thinking and practical standards experience.


    45 min
  • Dr Christine Chow On Why AI Standards Matter To Investors
    2026/02/18

    In this second episode of the AI Standards Stack Podcast, guest Dr Christine Chow joins hosts Professor Michael Mainelli and Adam Leon Smith to discuss responsible AI governance from an investor’s perspective. Christine, a long-time investment professional and early advocate for responsible AI since 2012, shares insights drawn from her pioneering work, including leading Federated Hermes’ 2019 industry-first investor expectations on responsible AI and data governance.

    The conversation centers on why robust data governance forms the foundation of effective AI governance, covering data provenance, bias in raw, model and synthetic data, transparency, explainability and accountability. It explores practical challenges across evolving AI paradigms, from efficiency tools to generative, agentic, multimodal and embodied systems: use-case identification, prompt engineering, meaningful human-in-the-loop oversight, and board-level engagement. It also considers the societal risks of over-reliance, such as impacts on mental health, confidence and critical thinking. The episode examines the fragmented global standards landscape (EU AI Act risk categories, NIST voluntary frameworks, ISO 42001), investor approaches to company engagement, environmental concerns around AI infrastructure, tensions between free speech and content guardrails, and cultural complexities in human rights, along with the push for concrete implementation guidance to balance innovation with safety and societal well-being.

    41 min