Episodes

  • Stories we Tech with Dr. Ash Watson
    2025/05/14

    Dr. Ash Watson studies how stories, ranging from classic Sci-Fi to modern tales invoking moral imperatives, dystopian futures and economic logic, shape our views of AI.


    Ash and Kimberly discuss the influence of old Sci-Fi on modern tech; why we can’t escape the stories we’re told; how technology shapes society; acting in ways a machine will understand; why the language we use matters; value transference from humans to AI systems; the promise of AI’s promise; grounding AI discourse in material realities; moral imperatives and capitalizing on crises; economic investment as social logic; AI’s claims to innovation; who innovation is really for; and positive developments in co-design and participatory research.


    Dr. Ash Watson is a Scientia Fellow and Senior Lecturer at the Centre for Social Research in Health at UNSW Sydney. She is also an Affiliate of the Australian Research Council (ARC) Centre of Excellence for Automated Decision-Making and Society (ADM+S).


    Additional Resources:

    • Ash Watson (Website): https://awtsn.com/
    • The promise of artificial intelligence in health: Portrayals of emerging healthcare technologies (Article): https://doi.org/10.1111/1467-9566.13840
    • An imperative to innovate? Crisis in the sociotechnical imaginary (Article): https://doi.org/10.1016/j.tele.2024.102229

    A transcript of this episode is here.

    48 min
  • Regulating Addictive AI with Robert Mahari
    2025/04/16

    Robert Mahari examines the consequences of addictive intelligence, adaptive responses to regulating AI companions, and the benefits of interdisciplinary collaboration.

    Robert and Kimberly discuss the attributes of addictive products; the allure of AI companions; AI as a prescription for loneliness; not assuming only the lonely are susceptible; regulatory constraints and gaps; individual rights and societal harms; adaptive guardrails and regulation by design; agentic self-awareness; why uncertainty doesn’t negate accountability; AI’s negative impact on the data commons; economic disincentives; interdisciplinary collaboration and future research.

    Robert Mahari is a JD-PhD researcher at the MIT Media Lab and Harvard Law School, where he studies the intersection of technology, law and business. In addition to computational law, Robert has a keen interest in AI regulation and embedding regulatory objectives and guardrails into AI designs.

    A transcript of this episode is here.

    Additional Resources:

    • The Allure of Addictive Intelligence (Article): https://www.technologyreview.com/2024/08/05/1095600/we-need-to-prepare-for-addictive-intelligence/
    • Robert Mahari (Website): https://robertmahari.com/
    54 min
  • AI Literacy for All with Phaedra Boinodiris
    2025/04/02

    Phaedra Boinodiris minds the gap between AI access and literacy by integrating educational silos, practicing human-centric design, and cultivating critical consumers.

    Phaedra and Kimberly discuss the dangerous confluence of broad AI accessibility with lagging AI literacy and accountability; coding as a bit player in AI design; data as an artifact of human experience; the need for holistic literacy; creating critical consumers; bringing everyone to the AI table; unlearning our siloed approach to education; multidisciplinary training; human-centricity in practice; why good intent isn’t enough; and the hard work required to develop good AI.

    Phaedra Boinodiris is IBM’s Global Consulting Leader for Trustworthy AI and co-author of the book AI for the Rest of Us. As an RSA Fellow, co-founder of the Future World Alliance, and academic advisor, Phaedra is shaping a future in which AI is accessible and good for all.

    A transcript of this episode is here.

    Additional Resources:

    • Phaedra’s Website: https://phaedra.ai/
    • The Future World Alliance: https://futureworldalliance.org/

    43 min
  • Auditing AI with Ryan Carrier
    2025/03/19

    Ryan Carrier trues up the benefits and costs of responsible AI while debunking misleading narratives and underscoring the positive power of the consumer collective.

    Ryan and Kimberly discuss the growth of AI governance; predictable resistance; the (mis)belief that safety impedes innovation; the “cost of doing business”; downside and residual risk; unacceptable business practices; regulatory trends and the law; effective disclosures and deceptive design; the value of independence; auditing as a business asset; the AI lifecycle; ethical expertise and choice; ethics boards as advisors not activists; and voting for beneficial AI with our wallets.

    Ryan Carrier is the Executive Director of ForHumanity, a non-profit organization improving AI outcomes through increased accountability and oversight.

    A transcript of this episode is here.

    53 min
  • Ethical by Design with Olivia Gambelin
    2025/03/05

    Olivia Gambelin values ethical innovation, revels in human creativity and curiosity, and advocates for AI systems that reflect and enable human values and objectives.

    Olivia and Kimberly discuss philogagging; us vs. “them” (i.e. AI systems) comparisons; enabling curiosity and human values; being accountable for the bombs we build - figuratively speaking; AI models as the tip of the iceberg; literacy, values-based judgement and trust; replacing proclamations with strong living values; The Values Canvas; inspired innovations; falling back in love with technology; foundational risk practices; optimism and valuing what matters.

    A transcript of this episode is here.

    Olivia Gambelin is a renowned AI Ethicist and the Founder of Ethical Intelligence, the world’s largest network of Responsible AI practitioners. An active researcher, policy advisor and entrepreneur, Olivia helps executives and product teams innovate confidently with AI.

    Additional Resources:

    • Responsible AI: Implement an Ethical Approach in Your Organization (Book)
    • Plato & a Platypus Walk Into a Bar: Understanding Philosophy Through Jokes (Book)
    • The Values Canvas (RAI Design Tool)
    • Women Shaping the Future of Responsible AI (Organization)
    • In Pursuit of Good Tech (Newsletter)

    51 min
  • The Nature of Learning with Helen Beetham
    2025/02/19

    Helen Beetham isn’t waiting for an AI upgrade as she considers what higher education is for, why learning is ostensibly ripe for AI, and how to diversify our course.

    Helen and Kimberly discuss the purpose of higher education; the current two tribe moment; systemic effects of AI; rethinking learning; GenAI affordances; the expertise paradox; productive developmental challenges; converging on an educational norm; teachers as data laborers; the data-driven personalization myth; US edtech and instrumental pedagogy; the fantasy of AI’s teacherly behavior; students as actors in their learning; critical digital literacy; a story of future education; AI ready graduates; pre-automation and AI adoption; diversity of expression and knowledge; two-tiered educational systems; and the rich heritage of universities.

    Helen Beetham is an educator, researcher and consultant who advises universities and international bodies worldwide on their digital education strategies. Helen is also a prolific author whose publications include “Rethinking Pedagogy for a Digital Age”. Her Substack, Imperfect Offerings, is recommended by the Guardian/Observer for its wise and thoughtful critique of generative AI.

    Additional Resources:

    • Imperfect Offerings: https://helenbeetham.substack.com/
    • Audrey Watters: https://audreywatters.com/
    • Kathryn (Katie) Conrad: https://www.linkedin.com/in/kathryn-katie-conrad-1b0749b/
    • Anna Mills: https://www.linkedin.com/in/anna-mills-oer/
    • Dr. Maya Indira Ganesh: https://www.linkedin.com/in/dr-des-maya-indira-ganesh/
    • Tech(nically) Politics: https://www.technicallypolitics.org/
    • LOG OFF: logoffmovement.org/
    • Rest of World: www.restofworld.org/
    • Derechos Digitales: www.derechosdigitales.org

    A transcript of this episode is here.

    46 min
  • Ethics for Engineers with Steven Kelts
    2025/02/05

    Steven Kelts engages engineers in ethical choice, enlivens training with role-playing, exposes organizational hazards and separates moral qualms from a duty to care.

    Steven and Kimberly discuss Ashley Casovan’s inspiring query; the affirmation allusion; students as stochastic parrots; when ethical sophistication backfires; limits of ethics review boards; engineers and developers as core to ethical design; assuming people are good; 4 steps of ethical decision making; inadvertent hotdog theft; organizational disincentives; simulation and role-playing in ethical training; avoiding cognitive overload; reorienting ethical responsibility; guns, ethical qualms and care; and empowering engineers to make ethical choices.

    Steven Kelts is a lecturer in Princeton’s University Center for Human Values (UCHV) and affiliated faculty in the Center for Information Technology Policy (CITP). Steve is also an ethics advisor to the Responsible AI Institute and Director of All Tech is Human’s Responsible University Network.

    Additional Resources:

    • Princeton Agile Ethics Program: https://agile-ethics.princeton.edu
    • CITP Talk 11/19/24: Agile Ethics Theory and Evidence
    • Oktar, Lombrozo et al.: Changing Moral Judgements
    • 4-Stage Theory of Ethical Decision Making: An Introduction
    • Enabling Engineers through “Moral Imagination” (Google)

    A transcript of this episode is here.

    47 min
  • Righting AI with Susie Alegre
    2025/01/22

    Susie Alegre makes the case for prioritizing human rights and connection, taking AI systems to account, minding the right gaps, and resisting unwitting AI dependency.

    Susie and Kimberly discuss the Universal Declaration of Human Rights (UDHR); legal protections and access to justice; human rights laws; how court cases impact legislative will; the wicked problem of companion AI; abdicating accountability for AI systems; Stepford Wives and gynoid robots; human connection and agency; minding the wrong gaps with AI systems; AI dogs vs. AI pooper scoopers; the reality of care and legal work; writing to think; cultural heritage and creativity; pausing for thought; unwittingly becoming dependent on AI; and prioritizing people over technology.

    Susie Alegre is an acclaimed international human rights lawyer and the author of Freedom to Think: The Long Struggle to Liberate Our Minds and Human Rights, Robot Wrongs: Being Human in the Age of AI. She is also a Senior Fellow at the Centre for International Governance and Innovation (CIGI) and Founder of the Island Rights Initiative. Learn more at her website.

    A transcript of this episode is here.

    46 min