Episodes

  • AI Therapy with Slingshot's Derrick Hull
    2025/03/17

“Everyone should go to therapy.” It’s a common refrain, but far harder to achieve than it sounds. There’s only one therapist for every 10K would-be clients, and the gap widens every year.

That gap is what my latest guest, Dr. Derrick Hull, has spent his career trying to close. Now the founding Clinical Lead at Slingshot AI, Derrick previously led Clinical R&D at Talkspace, where he was instrumental in popularizing text-message therapy, a once-controversial modality now recognized as critical to expanding access to mental health care. Derrick has had a fascinating career as a practicing psychologist, a clinical advisor and researcher for top startups, and a highly accomplished academic, with publications in Nature, the Proceedings of the National Academy of Sciences, and Oxford University Press, and research supported by NIH R01 grants.

It’s been amazing getting to work with Derrick at Slingshot, and I’m so excited to share just a snapshot of the wisdom and insight I collect from him every day.


    In our latest conversation, we touch on...

    • The ethical problems (and opportunities) of AI therapy
    • The role of empathy in practicing psychology
    • Why being a therapist can be an “impossible profession”
    • The surprising literature on therapeutic effectiveness
    • … and many, many things in between

For more conversations like this, be sure to subscribe to our YouTube channel (@ThinkingMachinesPodcast) and to our show in your favorite podcast player.

36 min
  • What if we could cure loneliness? Philosophy, dopamine, and more with Mark Ungless
    2025/02/26

Artificial neural networks were designed to emulate the human brain, and their insane performance on a wide range of tasks is pretty good evidence to support the comparison.


Well, it's a bit more complicated than that, at least according to my guest Mark Ungless, a neuroscientist and current Director of AI at the UK's Mental Health Innovations. Mark and I have collaborated on research for 5+ years, and I've long enjoyed his thoughts on the biology of the brain, the philosophy of the mind, and how AI is changing (and being changed by) our understanding of both. In our latest conversation, we touch on...

    • His groundbreaking research on the biological root of loneliness
    • How understanding neuroscience helps you understand humans
    • How well neural networks represent our own minds (and whether it matters at all!)
    • AI’s ability to do therapy
    • What it means to have free will in a world where we understand more about our brains every day

For more conversations like this, be sure to follow our YouTube channel (@ThinkingMachinesPodcast) and subscribe to our show in your favorite podcast player.

1 hr 12 min
  • Does Philosophy Make Progress? Chatting with Every's Dan Shipper
    2025/01/23

    Is the AI revolution we're experiencing going to push us into a future we can't imagine? Or will the pace of progress enable us to adjust along the way?

    Dan Shipper spends his time thinking and writing on these topics (and many others) as the founder and CEO of Every Media, a technology-focused publication trying to understand the future. Dan is also a lifelong coder and entrepreneur with a background in philosophy and its intersection with tech. Dan and I share a ton in common (beyond just our first names!), so I think you all will enjoy our thoughts on...

    • The meaning of the "good life"
    • Whether the study of philosophy evolves over time
    • How cryptocurrency, AI, and the nature of knowledge will accelerate each other
    • The value of philosophy and thinking for thinking's sake
    • What the future holds for engineering, philosophy, and humanity at large

If you like this conversation, check out more of Dan's thoughts at Every.to and on X at @danshipper.

51 min
  • OpenAI o1: Another GPT-3 moment?
    2024/10/18

    GPT-3 didn't have much of a splash outside of the AI community, but it foreshadowed the AI explosion to come. Is o1 OpenAI's second GPT-3 moment?

Machine learning researchers Guilherme Freire and Luka Smyth discuss OpenAI o1, its impact, and its potential. We cover early impressions of o1, why inference-time compute and reinforcement learning matter in the LLM story, and the path from o1 toward AI fulfilling its potential.

    00:00 Introduction and Welcome
00:22 Exploring o1: Initial Impressions
03:44 o1's Reception
    06:42 Reasoning and Model Scaling
    18:36 The Role of Agents
    27:28 Impact on Prompting
    28:43 Copilot or Autopilot?
    32:17 Reinforcement Learning and Interaction
    37:36 Can AI do your taxes yet?
    43:37 Investment in AI vs. Crypto
    46:56 Future Applications and Proactive AI

52 min
  • The Future is Fine Tuned (with Dev Rishi, Predibase)
    2024/05/24

    Dev Rishi is the founder and CEO of Predibase, the company behind Ludwig and LoRAX. Predibase just released LoRA Land, a technical report showing 310 models that can outcompete GPT-4 on specific tasks through fine-tuning. In this episode, Dev tries (pretty successfully) to convince me that fine-tuning is the future, while answering a bunch of interesting questions, like:

    • Is fine-tuning hard?
    • If LoRAX is a competitive advantage for you, why open-source it?
    • Is model hosting becoming commoditized? If so, how can anyone compete?
    • What are people actually fine-tuning language models for?
    • How worried are you about OpenAI eating your lunch?

    I had a ton of fun with Dev on this one. Also, check out Predibase’s newsletter called fine-tuned (great name!) and LoRA Land.

52 min
  • Pre-training LLMs: One Model To Rule Them All? with Talfan Evans, DeepMind
    2024/05/18

    Talfan Evans is a research engineer at DeepMind, where he focuses on data curation and foundational research for pre-training LLMs and multimodal models like Gemini. I ask Talfan:

    • Will one model rule them all?
    • What does "high quality data" actually mean in the context of LLM training?
    • Is language model pre-training becoming commoditized?
    • Are companies like Google and OpenAI keeping their AI secrets to themselves?
    • Does the startup or open source community stand a chance next to the giants?

    Also check out Talfan's latest paper at DeepMind, Bad Students Make Good Teachers.

38 min
  • On Adversarial Training & Robustness with Bhavna Gopal
    2024/05/08

    "Understanding what's going on in a model is important to fine-tune it for specific tasks and to build trust."

Bhavna Gopal is a PhD candidate at Duke and a research intern at Slingshot, with experience at Apple, Amazon, and Vellum.

We discuss:

• How adversarial robustness research impacts the field of AI explainability
• How to evaluate a model's ability to generalize
• Which adversarial attacks we should be concerned about with LLMs
44 min
  • On Emotionally Intelligent AI (with Chris Gagne, Hume AI)
    2024/04/19

    Chris Gagne manages AI research at Hume, which just released an expressive text-to-speech model in a super impressive demo. Chris and Daniel discuss AI and emotional understanding:

    • How does “prosody” add a dimension to human communication? What is Hume hoping to gain by adding it to Human-AI communication?
    • Do we want to interact with AI like we interact with humans? Or should the interaction models be different?
    • Are we entering the Uncanny Valley phase of emotionally intelligent AI?
    • Do LLMs actually have the ability to reason about emotions? Does it matter?
    • What do we risk, by empowering AI with emotional understanding? Are there risks from deception and manipulation? Or even a loss of human agency?
40 min