Episodes

  • To Be or Not to Be Agentic with Maximilian Vogel
    2025/10/01

    Maximilian Vogel dismisses tales of agentic unicorns, relying instead on human expertise, rational objectives, and rigorous design to deploy enterprise agentic systems.


    Maximilian and Kimberly discuss what an agentic system is (emphasis on system); why agency in agentic AI resides with humans; engineering agentic workflows; agentic AI as a mule not a unicorn; establishing confidence and accuracy; co-designing with business/domain experts; why 100% of anything is not the goal; focusing on KPIs not features; tricks to keep models from getting tricked; modeling agentic workflows on human work; live data and human-in-the-loop validation; AI agents as a support team and implications for human work.

    Maximilian Vogel is the Co-Founder of BIG PICTURE, a digital transformation boutique specializing in the use of AI for business innovation. Maximilian enables the strategic deployment of safe, secure, and reliable agentic AI systems.


    Related Resources

    • Medium: https://medium.com/@maximilian.vogel

    A transcript of this episode is here.

    51 min
  • The Problem of Democracy with Henrik Skaug Sætra
    2025/09/17

    Henrik Skaug Sætra considers the basis of democracy, the nature of politics, the tilt toward digital sovereignty and what role AI plays in our collective human society.


    Henrik and Kimberly discuss AI’s impact on human comprehension and communication; core democratic competencies at risk; politics as a joint human endeavor; conflating citizens with customers; productively messy processes; the problem of democracy; how AI could change what democracy means; whether democracy is computable; Google’s experiments in democratic AI; AI and digital sovereignty; and a multidisciplinary path forward.

    Henrik Skaug Sætra is an Associate Professor of Sustainable Digitalisation and Head of the Technology and Sustainable Futures research group at the University of Oslo. He is also the CEO of Pathwais.eu, connecting strategy, uncertainty, and action through scenario-based risk management.


    Related Resources

    • Google Scholar Profile: https://scholar.google.com/citations?user=pvgdIpUAAAAJ&hl=en
    • How to Save Democracy from AI (Book – Norwegian): https://www.norli.no/9788202853686
    • AI for the Sustainable Development Goals (Book): https://www.amazon.com/AI-Sustainable-Development-Goals-Everything/dp/1032044063
    • Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism (Book): https://www.amazon.com/Technology-Sustainable-Development-Pitfalls-Techno-Solutionism-ebook/dp/B0C17RBTVL

    A transcript of this episode is here.

    54 min
  • Generating Safety Not Abuse with Dr. Rebecca Portnoff
    2025/08/20

    Dr. Rebecca Portnoff generates awareness of the threat landscape, enablers, challenges and solutions to the complex but addressable issue of online child sexual abuse.

    Rebecca and Kimberly discuss trends in online child sexual abuse; pillars of impact and harm; how GenAI expands the threat landscape; personalized targeting and bespoke abuse; Thorn’s Safety by Design Initiative; scalable prevention strategies; technical and legal barriers; standards, consensus and commitment; building better from the beginning; accountability as an innovative goal; and not confusing complex with unsolvable.

    Dr. Rebecca Portnoff is the Vice President of Data Science at Thorn, a non-profit dedicated to protecting children from sexual abuse. Read Thorn’s seminal Safety by Design paper, bookmark the Research Center to stay updated, and support Thorn’s critical work by donating here.

    Related Resources

    • Thorn’s Safety by Design Initiative (News): https://www.thorn.org/blog/generative-ai-principles/
    • Safety by Design Progress Reports: https://www.thorn.org/blog/thorns-safety-by-design-for-generative-ai-progress-reports/
    • Thorn + SIO AIG-CSAM Research (Report): https://cyber.fsi.stanford.edu/io/news/ml-csam-report

    A transcript of this episode is here.

    47 min
  • Inclusive Innovation with Hiwot Tesfaye
    2025/08/06

    Hiwot Tesfaye disputes the notion of AI givers and takers, challenges innovation as an import, highlights untapped global potential, and charts a more inclusive course.


    Hiwot and Kimberly discuss the two camps myth of inclusivity; finding innovation everywhere; meaningful AI adoption and diffusion; limitations of imported AI; digital colonialism; low-resource languages and illiterate LLMs; an Icelandic success story; situating AI in time and place; employment over automation; capacity and skill building; skeptical delight and making the case for multi-lingual, multi-cultural AI.

    Hiwot Tesfaye is a Technical Advisor in Microsoft’s Office of Responsible AI and a Loomis Council Member at the Stimson Center, where she helped launch the Global Perspectives: Responsible AI Fellowship.

    Related Resources

    • #35 Navigating AI: Ethical Challenges and Opportunities, a conversation with Hiwot Tesfaye

    A transcript of this episode is here.

    51 min
  • The Shape of Synthetic Data with Dietmar Offenhuber
    2025/07/23

    Dietmar Offenhuber reflects on synthetic data’s break from reality, relates meaning to material use, and embraces data as a speculative and often non-digital artifact.

    Dietmar and Kimberly discuss data as a representation of reality; divorcing content from meaning; data settings vs. data sets; synthetic data quality and ground truth; data as a speculative artifact; the value in noise; data materiality and accountability; rethinking data literacy; Instagram data realities; non-digital computing and going beyond statistical analysis.

    Dietmar Offenhuber is a Professor and Department Chair of Art+Design at Northeastern University. Dietmar researches the material, sensory and social implications of environmental information and evidence construction.

    Related Resources

    • Shapes and Frictions of Synthetic Data (paper): https://journals.sagepub.com/doi/10.1177/20539517241249390
    • Autographic Design: The Matter of Data in a Self-Inscribing World (book): https://autographic.design/
    • Reservoirs of Venice (project): https://res-venice.github.io/
    • Website: https://offenhuber.net/

    A transcript of this episode is here.

    52 min
  • A Question of Humanity with Pia Lauritzen, PhD
    2025/07/09

    Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized and why thinking is required.


    Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger’s eerily precise predictions, the skill of critical thinking, and why it’s not really about the questions at all.


    Pia Lauritzen, PhD is a philosopher, author, and tech inventor asking big questions about tech and transformation. As the CEO and Founder of Qvest and a Thinkers50 Radar Member, Pia is on a mission to democratize the power of questions.


    Related Resources

    • Questions (Book): https://www.press.jhu.edu/books/title/23069/questions
    • TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
    • Question Jam: www.questionjam.com
    • Forbes Column: forbes.com/sites/pialauritzen
    • LinkedIn Learning: www.Linkedin.com/learning/pialauritzen
    • Personal Website: pialauritzen.dk

    A transcript of this episode is here.

    56 min
  • A Healthier AI Narrative with Michael Strange
    2025/06/25

    Michael Strange has a healthy appreciation for complexity, diagnoses hype as antithetical to innovation and prescribes an interdisciplinary approach to making AI well.

    Michael and Kimberly discuss whether AI is good for healthcare; healthcare as a global system; radical shifts precipitated by the pandemic; why hype stifles nuance and innovation; how science works; the complexity of the human condition; human well-being vs. health; the limits of quantification; who is missing in healthcare and health data; the political-economy and material impacts of AI as infrastructure; the doctor in the loophole; the humility required to design healthy AI tools and create a resilient, holistic healthcare system.

    Michael Strange is an Associate Professor in the Dept of Global Political Affairs at Malmö University focusing on core questions of political agency and democratic engagement. In this context he works on Artificial Intelligence, health, trade, and migration. Michael directed the Precision Health & Everyday Democracy (PHED) Commission and serves on the board of two research centres: Citizen Health and the ICF (Imagining and Co-creating Futures).

    Related Resources

    • If AI is to Heal Our Healthcare Systems, We Need to Redesign How AI Is Developed (article): https://www.techpolicy.press/if-ai-is-to-heal-our-healthcare-systems-we-need-to-redesign-how-ai-itself-is-developed/
    • Beyond ‘Our product is trusted!’ – A processual approach to trust in AI healthcare (paper): https://mau.diva-portal.org/smash/record.jsf?pid=diva2%3A1914539
    • Michael Strange (website): https://mau.se/en/persons/michael.strange/

    A transcript of this episode is here.

    1 hr
  • LLMs Are Useful Liars with Andriy Burkov
    2025/06/11

    Andriy Burkov talks down dishonest hype and sets realistic expectations for when LLMs, if properly and critically applied, are useful. Although maybe not as AI agents.

    Andriy and Kimberly discuss how he uses LLMs as an author; LLMs as unapologetic liars; how opaque training data impacts usability; not knowing if LLMs will save time or waste it; error-prone domains; when language fluency is useless; how expertise maximizes benefit; when some idea is better than no idea; limits of RAG; how LLMs go off the rails; why prompt engineering is not enough; using LLMs for rapid prototyping; and whether language models make good AI agents (in the strictest sense of the word).

    Andriy Burkov holds a PhD in Artificial Intelligence and is the author of The Hundred-Page Machine Learning Book and The Hundred-Page Language Models Book. His Artificial Intelligence Newsletter reaches 870,000+ subscribers. Andriy was previously the Machine Learning Lead at Talent Neuron and the Director of Data Science (ML) at Gartner. He has never been a Ukrainian footballer.

    Related Resources

    • The Hundred Page Language Models Book: https://thelmbook.com/
    • The Hundred Page Machine Learning Book: https://themlbook.com/
    • True Positive Weekly (newsletter): https://aiweekly.substack.com/

    A transcript of this episode is here.

    47 min