The Behavioral Design Podcast

Author: Samuel Salzer and Aline Holzwarth
  • Summary

  • How can we change behavior in practice? What role does AI have to play in behavioral design? Listen in as hosts Samuel Salzer and Aline Holzwarth speak with leading experts on all things behavioral science, AI, design, and beyond. The Behavioral Design Podcast from Habit Weekly and Nuance Behavior provides a fun and engaging way to learn about applied behavioral science and how to design for behavior change in practice. The latest season explores the fascinating intersection of Behavioral Design and AI. Subscribe and follow! For questions or to get in touch, email podcast@habitweekly.com.
    Samuel Salzer and Aline Holzwarth
Episodes
  • Building Moral AI with Jana Schaich Borg
    2025/05/01

    How Do You Build a Moral AI? with Jana Schaich Borg

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Jana Schaich Borg, Associate Research Professor at Duke University and co-author of the book “Moral AI and How We Get There”. Together they explore one of the thorniest and most important questions in the AI age: How do you encode human morality into machines—and should you even try?

    Drawing from neuroscience, philosophy, and machine learning, Jana walks us through bottom-up and top-down approaches to moral alignment, why current models fall short, and how her team’s hybrid framework may offer a better path. Along the way, they dive into the messy nature of human values, the challenges of AI ethics in organizations, and how AI could help us become more moral—not just more efficient.

    This conversation blends practical tools with philosophical inquiry and leaves us with a cautiously hopeful perspective: that we can, and should, teach machines to care.

    Topics Covered:

    • What AI alignment really means (and why it’s so hard)

    • Bottom-up vs. top-down moral AI systems

    • How organizations get ethical AI wrong—and what to do instead

    • The messy reality of human values and decision making

    • Translational ethics and the need for AI KPIs

    • Personalizing AI to match your values

    • When moral self-reflection becomes a design feature

    Timestamps:

    00:00  Intro: AI Alignment — Mission Impossible?
    04:00  Why Moral AI Is So Hard (and Necessary)
    07:00  The “Spec” Story & Reinforcement Gone Wrong
    10:00  Anthropomorphizing AI — Helpful or Misleading?
    12:00  Introducing Jana & the Moral AI Project
    15:00  What “Moral AI” Really Means
    18:00  Interdisciplinary Collaboration (and Friction)
    21:00  Bottom-Up vs. Top-Down Approaches
    27:00  Why Human Morality Is Messy
    31:00  Building a Hybrid Moral AI System
    41:00  Case Study: Kidney Donation Decisions
    47:00  From Models to Moral Reflection
    52:00  Embedding Ethics Inside Organizations
    56:00  Moral Growth Mindset & Training the Workforce
    01:03:00  Why Trust & Culture Matter Most
    01:06:00  Comparing AI Labs: OpenAI vs. Anthropic vs. Meta
    01:10:00  What We Still Don’t Know
    01:11:00  Quickfire: To AI or Not To AI
    01:16:00  Jana’s Most Controversial Take
    01:19:00  Can AI Make Us Better Humans?

    🎧 Like this episode? Share it with a friend or leave us a review to help others discover the show.


    1 hr 22 min
  • State of AI Risk with Peter Slattery
    2025/04/16

    Understanding AI Risks with Peter Slattery

    In this episode of the Behavioral Design Podcast, hosts Aline and Samuel are joined by Peter Slattery, behavioral scientist and lead researcher at MIT’s FutureTech lab, where he spearheads the groundbreaking AI Risk Repository project. Together, they dive into the complex and often overlooked risks of artificial intelligence—ranging from misinformation and malicious use to systemic failures and existential threats.

    Peter shares the intellectual and emotional journey behind categorizing over 1,000 documented AI risks, how his team built a risk taxonomy from 17,000+ sources, and why shared understanding and behavioral science are critical for navigating the future of AI.

    This one is a must-listen for anyone curious about AI safety, behavioral science, and the future of technology that’s moving faster than most of us can track.

    --

    LINKS:

    • Peter's LinkedIn Profile
    • MIT FutureTech Lab: futuretech.mit.edu
    • AI Risk Repository


    --

    Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

    Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

    Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

    Get in touch via podcast@habitweekly.com

    The song used is Murgatroyd by David Pizarro

    1 hr 10 min
  • Enter the AI Lab
    2025/03/20

    Enter the AI Lab: Insights from LinkedIn Polls and AI Literature Reviews

    In this episode of the Behavioral Design Podcast, hosts Samuel Salzer and Aline Holzwarth explore how AI is shaping behavioral design processes—from discovery to testing. They revisit insights from past LinkedIn polls, analyzing audience perspectives on which phases of behavioral design are best suited for AI augmentation and where human expertise remains crucial.

    The discussion then shifts to AI-driven literature reviews, comparing the effectiveness of various AI tools for synthesizing research. Samuel and Aline assess the strengths and weaknesses of different platforms, diving into key performance metrics like quality, speed, and cost, and debating the risks of over-reliance on AI-generated research without human oversight.

    The episode also introduces Nuance’s AI Lab, highlighting upcoming projects focused on AI-driven behavioral science innovations. The conversation concludes with a Behavioral Redesign series case study on Peloton, offering a fresh take on how AI and behavioral insights can reshape product experiences.

    If you're interested in the intersection of AI, behavioral science, and research methodologies, this episode is packed with insights on where AI is excelling—and where caution is needed.


    LINKS:

    • Nuance AI Lab: Website


    TIMESTAMPS:
    00:00 Introduction and Recap of Last Year's AI Polls
    06:27 AI's Strengths in Literature Review
    15:12 Emerging AI Tools for Research
    19:31 Evaluating AI Tools for Literature Reviews
    23:57 Comparing Chinese and American AI Tools
    26:01 Evaluating Literature Review Outputs
    28:12 Critical Analysis and Human Oversight
    35:19 The Worst Performing Model
    37:21 Introducing Nuance's AI Lab
    38:51 Behavioral Redesign Series: Peloton Example
    45:21 Podcast Highlights and Future Guests


    48 min

Listener Reviews for The Behavioral Design Podcast
