
Reimagining Psychology

Author: Tom Whitehead

Overview

Psychology is an important science, one that can really help us live better lives day-to-day. In this podcast we look at which parts of this science work well, and which parts don't. If we dare to look closely, we can find ways to improve it. Join us on a mind-blowing journey toward a behavioral science for the 21st century.

© 2026 Reimagining Psychology

Categories: Philosophy, Biological Sciences, Social Sciences, Science
Episodes
  • AI Fakes Fingers and Facts
    2026/02/23

    If you've ever used an AI graphics program, you've probably encountered this problem: you write a great prompt involving a human figure, and the program delivers. Everything looks great ... except for the hands. They're a mess. Too many fingers. Not enough fingers. Creepy-looking fingers with weird, misshapen fingernails. Or they don't look like hands at all. What's going on?

    In this episode, Deep Divers Mark and Jenna finally answer that question. Turns out the "hand problem" isn't just an annoying glitch. It's a symptom of a much bigger issue — the same issue that causes AI to create "false facts", also called "AI hallucinations."

    Our Deep Divers explain these strange quirks using the ideas in a new paper by psychotherapist and author Tom Whitehead, "Ecological Alignment: Preventing Parasitic Emergence in Complex Generative Systems", released in February 2026. To access/download the original paper, visit:

    https://whiteheadbooks.com/

    17 min
  • Your AI is Pacing Its Cage
    2026/02/18

    What do pacing tigers, zombie ants, and glitching AIs have in common? More than you might think. If you've ever wondered why chatbots hallucinate, why sycophancy emerges, or why guardrails backfire, this conversation will change how you see the entire field. In this episode, Deep Divers Mark and Jenna unpack a groundbreaking new approach to AI alignment and safety, one that treats artificial intelligence not as a machine to be controlled, but as an ecology to be cultivated.

    When an AI hallucinates or starts defending an idea that's obviously wrong, maybe it isn’t misbehaving — maybe it’s adapting to the cage we built around it. This episode explores how the mismatch between an AI and the environment we put it in creates strange behavior in large models, and why the future of alignment may depend less on rules and more on resonance, context, and collaboration. A thought‑provoking dive into the environments we create and the systems that grow inside them.

    This episode is focused on a new paper by psychotherapist and author Tom Whitehead, "Ecological Alignment: Preventing Parasitic Emergence in Complex Generative Systems", released in February 2026. To access/download the original paper, visit:

    https://whiteheadbooks.com/

    19 min
  • Ecological Alignment - What Can Zoo Animals Teach Us About AI Malfunctions?
    2026/02/17

    AI systems can be startlingly competent. They write letters, make artwork, compose songs. But sometimes they hallucinate, repeat obviously wrong "facts", and even hide what they're doing from developers and users. Developers play "Whack-A-Mole", solving one problem only to have others crop up in its place. This is an industry-wide problem. And as we come to depend on AI more and more, this erratic behavior can be scary.

    In this episode, Deep Divers Mark and Jenna explain a paper proposing a new way to understand AI misbehavior, and how to make AI safer. This is the Ecological Alignment approach.

    The idea is that AI systems do weird things not because they are broken, but because the environments they work in won't let them behave the way we expect. Drawing on psychology, animal behavior, and AI, the conversation reveals what really causes these runaway patterns. The dialogue invites listeners into a strange but surprisingly intuitive way of understanding why complex systems, biological or artificial, go off the rails when their environments are wrong for them.

    The paper being discussed is Ecological Alignment: Preventing Parasitic Emergence in Complex Generative Systems, by Tom Whitehead, released February 14, 2026. To access the manuscript, visit

    https://whiteheadbooks.com/

    35 min