Episodes

  • Perspectives and Predictions: Looking Back at 2025 and Forward to 2026
    2025/12/24

    A retrospective sampling of ideas and questions our illustrious guests gifted us in 2025, alongside some glad and not-so-glad tidings (ok, predictions) for AI in 2026.

    In this episode we revisit insights from our guests and, perhaps, introduce those you may have missed along the way. Select guests provide sparky takes on what may happen in 2026.

    Host Note: I desperately wanted to use the word prognostication in reference to the latter segment. But although the word sounds cool it implies a level of mysticism entirely out of keeping with the informed opinions these guests have proffered. So, predictions it is.

    A transcript of this episode is here.

    24 min
  • An Environmental Grounding with Masheika Allgood
    2025/12/10

    Masheika Allgood delineates good AI from GenAI, outlines the environmental imprint of hyperscale data centers, and emphasizes that AI success depends on the why and the data.

    Masheika and Kimberly discuss her path from law to AI; AI as an embodied infrastructure; forms of beneficial AI; if the GenAI math maths; narratives underpinning AI; the physical imprint of hyperscale data centers; the fallacy of closed loop cooling; who pays for electrical capacity; enabling community dialogue; starting with why in AI product design; AI as a data infrastructure play; staying positive and finding the thing you can do.

    Masheika Allgood is an AI Ethicist and Founder of AllAI Consulting. She is a well-known advocate for sustainable AI development and a contributor to the IEEE P7100 Standard for Measurement of Environmental Impacts of Artificial Intelligence Systems.

    Related Resources

    • Taps Run Dry Initiative (Website)
    • Data Center Advocacy Toolkit (Website)
    • Eat Your Frog (Substack)
    • AI Data Governance, Compliance, and Auditing for Developers (LinkedIn Learning)
    • A Mind at Play: How Claude Shannon Invented the Information Age (Referenced Book)

    A transcript of this episode is here.

    57 min
  • Your Digital Twin Is Not You with Kati Walcott
    2025/11/26

    Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

    Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.

    Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

    Related Resources

    • The False Intention Economy: How AI Systems are Replacing Human Will with Modeled Behavior (LinkedIn Article)

    A transcript of this episode is here.

    53 min
  • No Community Left Behind with Paula Helm
    2025/11/12

    Paula Helm articulates an AI vision that goes beyond base performance to include epistemic justice and cultural diversity by focusing on speakers and not language alone.

    Paula and Kimberly discuss ethics as a science; language as a core element of culture; going beyond superficial diversity; epistemic justice and valuing others’ knowledge; the translation fallacy; indigenous languages as oral goods; centering speakers and communities; linguistic autonomy and economic participation; the Māori view on data ownership; the role of data subjects; enabling cultural understanding, self-determination and expression; the limits of synthetic data; ethical issues as power asymmetries; and reflecting on what AI mirrors back to us.

    Paula Helm is an Assistant Professor of Empirical Ethics and Data Science at the University of Amsterdam. Her work sits at the intersection of STS, Media Studies and Ethics. In 2022 Paula was recognized as one of the 100 Most Brilliant Women in AI-Ethics.

    Related Resources

    • Generating Reality and Silencing Debate: Synthetic Data as Discursive Device (paper): https://journals.sagepub.com/doi/full/10.1177/20539517241249447
    • Diversity and Language Technology (paper): https://link.springer.com/article/10.1007/s10676-023-09742-6

    A transcript of this episode is here.

    52 min
  • What AI Values with Jordan Loewen-Colón
    2025/10/29

    Jordan Loewen-Colón values clarity regarding the practical impacts, philosophical implications and work required for AI to serve the public good, not just private gain.

    Jordan and Kimberly discuss value alignment as an engineering or social problem; understanding ourselves as data personas; the limits of personalization; the perception of agency; how AI shapes our language and desires; flattening of culture and personality; localized models and vernacularization; what LLMs value (so to speak); how tools from calculators to LLMs embody values; whether AI accountability is on anyone’s radar; failures of policy and regulation; positive signals; getting educated and fostering the best AI has to offer.

    Jordan Loewen-Colón is an Adjunct Associate Professor of AI Ethics and Policy at Smith School of Business | Queen's University. He is also the Co-Founder of the AI Alt Lab which is dedicated to ensuring AI serves the public good and not just private gain.

    Related Resources

    • HBR Research: Do LLMs Have Values? (paper): https://hbr.org/2025/05/research-do-llms-have-values
    • AI4HF Beyond Surface Collaboration: How AI Enables High-Performing Teams (paper): https://www.aiforhumanflourishing.com/the-framework-papers/relationshipsandcommunication

    A transcript of this episode is here.

    52 min
  • Agentic Insecurities with Keren Katz
    2025/10/15

    Keren Katz exposes novel risks posed by GenAI and agentic AI while reflecting on unintended malfeasance, surprisingly common insider threats and weak security postures.


    Keren and Kimberly discuss threats amplified by agentic AI; self-inflicted exposures observed in Fortune 500 companies; normalizing risky behavior; unintentional threats; non-determinism as a risk; users as an attack vector; the OWASP State of Agentic AI Security and Governance report; ransomware 2025; mapping use cases and user intent; preemptive security postures; agentic behavior analysis; and proactive AI/agentic security policies and incident response plans.

    Keren Katz is Senior Group Manager of Threat Research, Product Management and AI at Tenable, and a contributor to both the Open Worldwide Application Security Project (OWASP) and Forbes. Keren is a global leader in AI and cybersecurity, specializing in Generative AI threat detection.

    Related Resources

    • The Silent Breach: Why Agentic AI Demands New Oversight (Article)
    • State of Agentic AI Security and Governance (whitepaper): https://genai.owasp.org/resource/state-of-agentic-ai-security-and-governance-1-0/
    • The LLM Top 10: https://genai.owasp.org/llm-top-10/

    A transcript of this episode is here.

    49 min
  • To Be or Not to Be Agentic with Maximilian Vogel
    2025/10/01

    Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.


    Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; co-designing with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; and AI agents as a support team and implications for human work.

    Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.


    Related Resources

    • Medium: https://medium.com/@maximilian.vogel

    A transcript of this episode is here.

    51 min
  • The Problem of Democracy with Henrik Skaug Sætra
    2025/09/17

    Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society.


    Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

    Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at Oslo University. He is also the CEO of Pathwais.eu, connecting strategy, uncertainty, and action through scenario-based risk management.


    Related Resources

    • Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
    • How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
    • AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
    • Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL

    A transcript of this episode is here.

    54 min