Episodes

  • Do facts have an expiration date? (with Samuel Arbesman)
    2025/10/08

    Read the full transcript here.

    What does it mean to treat facts as drafts rather than monuments? If truth is something we approach, how do we act while it’s still provisional? When definitions shift, what really changes? How do better instruments quietly rewrite the world we think we know? Are we mostly refining truths or replacing them? When do scientific metaphors clarify and when do they mislead? What public stories make self-correction legible and trusted? What features make science self-correct rather than self-congratulatory? How should we reward replication, repair, and tool-building? Do we need more generalists - or better bridges between tribes? How does measurement expand the very questions we can ask? Is progress a goal-seeking march or a search for interesting stepping stones? Should we teach computing as a liberal art to widen its aims? Will AI turn software into a home-cooked meal for everyone? How do we design tools that increase wonder, not just efficiency?

    Samuel Arbesman is Scientist in Residence at Lux Capital. He is also an xLab senior fellow at Case Western Reserve University’s Weatherhead School of Management and a research fellow at the Long Now Foundation. His writing has appeared in the New York Times, the Wall Street Journal, and The Atlantic, and he was previously a contributing writer for Wired. He is the author of the new book The Magic of Code, and his previous books are Overcomplicated: Technology at the Limits of Comprehension and The Half-Life of Facts: Why Everything We Know Has an Expiration Date. He holds a PhD in computational biology from Cornell University and lives in Cleveland with his family.

    Links:

    • Sam's Recent Titles: The Half-Life of Facts and The Magic of Code

    Staff

    • Spencer Greenberg — Host + Director
    • Ryan Kessler — Producer + Technical Lead
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 15 min
  • From prisoner to escaping inner prisons (with Shaka Senghor)
    2025/10/01

    Read the full transcript here.

    What changes when we treat violence as a human problem rather than a demographic story? Are fear, anger, and shame the real levers behind sudden harm? How much agency can we ask of people shaped by chaos without ignoring that chaos? Where is the line between explanation and excuse? What would an honest narrative about community safety sound like? Do neighborhoods want fewer police, or different policing grounded in respect? How do we build cultures where accountability and care reinforce each other? If separation is required for rehabilitation, how do we keep it from becoming psychological punishment? How do we welcome people back into society without chaining them to their worst moment?

    Shaka Senghor is a resilience expert and author whose journey from incarceration to inspiration has empowered executives, entrepreneurs, and audiences around the world. Born in Detroit amid economic hardship, Shaka overcame immense adversity - including 19 years in prison - to become a leading authority on resilience, grit, and personal transformation. Since his release in 2010, Shaka has guided individuals and organizations to break free from their hidden emotional and psychological prisons, turning resilience from theory into actionable practice.

    Links:

    • Shaka's Book: How to Be Free: A Proven Guide to Escaping Life's Hidden Prisons
    • Shaka's TED Talk

    Staff

    • Spencer Greenberg — Host + Director
    • Ryan Kessler — Producer + Technical Lead
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 17 min
  • A new paradigm for psychology research (with Slime Mold Time Mold)
    2025/09/24

    Read the full transcript here.

    What changes when psychology stops naming traits and starts naming parts - can “entities and rules” turn fuzzy labels into testable mechanisms? If the mind is a web of governors with set points, what exactly is being controlled - and how do error signals become feelings? Are hunger, fear, and status-seeking all negative-feedback problems, and where do outliers like anger or awe fit? What would count as disconfirming evidence for a cybernetic view - useful constraint or unfalsifiable epicycle? Could a “parliament of drives” explain why identical situations yield different choices? And how would we measure the votes? Do abstractions like the Big Five help, or do they hide the machine under the hood? How many rules do we need before prediction beats metaphor? And could a new paradigm help make psychology a more mature and cumulative science?

    SLIME MOLD TIME MOLD is a mad science hive mind with a blog. If you believe the rumors, it’s run by 20 rats in a trenchcoat. You can reach them at slimemoldtimemold@gmail.com, follow them on Twitter at @mold_time, and read their blog at slimemoldtimemold.com.

    Links:

    • The Mind in the Wheel
    • Obesity and Lithium

    Staff

    • Spencer Greenberg — Host + Director
    • Ryan Kessler — Producer + Technical Lead
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 27 min
  • Beyond saving lives: happiness and doing good (with Michael Plant)
    2025/09/17

    Read the full transcript here.

    Are we trying to maximize moment-to-moment happiness or life satisfaction? Can self-reports really guide policy and giving? What happens to quality of life metrics when we judge impact by wellbeing instead of health or income? How should we compare treating depression to providing clean water when their benefits feel incomparable? Do cultural norms and scale-use quirks impact the accuracy of global happiness scores? How much do biases warp both our forecasts and our data? Is it ethical to chase the biggest happiness returns at the expense of other meaningful interventions? Where do autonomy, agency, and justice fit if philanthropy aims to reduce suffering or maximize aggregate happiness? Can we balance scientific rigor with the irreducibly subjective nature of joy, misery, and meaning? What should donors actually do with wellbeing-based cost-effectiveness numbers in the face of uncertainty and long-run effects? And could a wellbeing lens realistically reshape which charities, and which policies, the world funds next?

    Dr. Michael Plant is the Founder and Director of the Happier Lives Institute, a non-profit that researches the most cost-effective ways to increase global well-being and provides charity recommendations. Michael is a Post-Doctoral Research Fellow at the Wellbeing Research Centre, Oxford, and his PhD in Philosophy from Oxford was supervised by Peter Singer. He is a co-author of the 2025 World Happiness Report. He lives in Bristol, England, with his wife.

    Links:

    • The Happier Lives Institute
    • Wellbeing Research Centre at Oxford
    • PersonalityMap (correlation between life satisfaction and moment-to-moment happiness)
    • The Elephant in the Bed Net
    • World Happiness Report 2025

    Staff

    • Spencer Greenberg — Host + Director
    • Ryan Kessler — Producer + Technical Lead
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 12 min
  • Darwinian Demons: Climate Change and the AI Arms Race (with Kristian Rönn)
    2025/09/10

    Read the full transcript here.

    Are existential risks from AI fundamentally different from those posed by previous technologies such as nuclear weapons? How can global cooperation overcome the challenges posed by national interests? What mechanisms might enable effective governance of technologies that transcend borders? How do competitive pressures drive harmful behaviors even when they threaten long-term stability? How might we balance innovation with precaution in rapidly advancing fields? Is slow progress the key to dodging hidden catastrophes in technological advancement? Is it possible to design systems that reward cooperation over defection on a global scale? How do we ensure emerging technologies uplift humanity rather than undermine it? What are the ethics of delegating decision-making to non-human intelligences? Can future generations be safeguarded by the choices we make today?

    Kristian is an entrepreneur and the author of The Darwinian Trap, and he has contributed to policy and standards work on AI and climate change. In the climate sector, he contributed to global carbon accounting standards, represented Sweden at the UN Climate Conference, and founded the carbon accounting software Normative.io. His work in AI governance includes contributions to policies in the EU and UN and authoring an influential report on AI Assurance Tech. Currently, as the co-founder and CEO of Lucid Computing, he develops technology to monitor the location of export-controlled AI chips. He can be reached via email at kristian@lucidcomputing.ai.

    Thanks to a listener who pointed us to this 2017 report, which may be responsible for some of the confounding bias around the idea that only 100 companies are responsible for the majority of emissions.

    Links:

    • Kristian's book: The Darwinian Trap
    • Kristian's company: Lucid Computing

    Staff

    • Spencer Greenberg — Host + Director
    • Ryan Kessler — Producer + Technical Lead
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 18 min
  • Seeing through cognitive traps (with Alex Edmans)
    2025/09/03

    Read the full transcript here.

    How do we distinguish correlation from causation in organizational success? How common is it to mistake luck or data mining for genuine effects in research findings? What are the challenges in interpreting ESG (Environmental, Social, Governance) criteria? Why is governance considered distinct from environmental and social impact? How should uncertainty in climate science affect our policy choices? Are regulation and free markets really at odds, or can they be mutually reinforcing? How does economic growth generated by markets fund social programs and environmental protection? How does “publish or perish” culture shape scientific research and incentives? What psychological and neuroscientific evidence explains our tendency toward confirmation bias? Will LLMs exacerbate or mitigate cognitive traps? How do biases shape popular narratives about diversity and corporate purpose? How can we balance vivid stories with rigorous data to better understand the world?

    Alex Edmans FBA FAcSS is Professor of Finance at London Business School. Alex has a PhD from MIT as a Fulbright Scholar and was previously a tenured professor at Wharton and an investment banker at Morgan Stanley. He serves as a non-executive director of the Investor Forum and on Morgan Stanley’s Institute for Sustainable Investing Advisory Board, Novo Nordisk’s Sustainability Advisory Council, and Royal London Asset Management’s Responsible Investment Advisory Committee. He is a Fellow of the British Academy and a Fellow of the Academy of Social Sciences.

    Links:

    • Alex’s TEDx Talk
    • Alex’s books: May Contain Lies and Grow The Pie
    • Alex’s Blog
    • A double bind in collective learning (article)

    Staff

    • Spencer Greenberg — Host / Director
    • Josh Castle — Producer
    • Ryan Kessler — Audio Engineer
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 32 min
  • The most important century (with Holden Karnofsky)
    2025/08/27

    Read the full transcript here.

    Has society reached ‘peak progress’? Can we sustain the level of economic growth that technology has enabled over the last century? Have researchers plucked the last of science's "low-hanging fruit"? Why did early science innovators have outsized impact per capita? As fields mature, why does per-researcher output fall? Can a swarm of AI systems materially accelerate research? What does exponential growth hide about the risk of collapse? Will specialized AI outcompete human polymaths? Is quality of life still improving - and how confident are we in those measures? Is it too late to steer away from the attention economy? Can our control over intelligent systems scale as we develop their power? Will AI ever be capable of truly understanding human values? And if we reach that point, will it choose to align itself?

    Holden Karnofsky is a Member of Technical Staff at Anthropic, where he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future. Prior to his work with Anthropic, Holden led several high-impact organizations as the co-founder and co-executive director of charity evaluator GiveWell and as one of three Managing Directors of grantmaking organization Open Philanthropy. You can read more about ideas that matter to Holden at his blog Cold Takes.

    Further reading:

    • Holden's "most important century" series
    • Responsible scaling policies
    • Holden's thoughts on sustained growth

    Staff

    • Spencer Greenberg — Host / Director
    • Josh Castle — Producer
    • Ryan Kessler — Audio Engineer
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 48 min
  • Should we try to live forever? (with Ariel Zeleznikow-Johnston)
    2025/08/22

    Read the full transcript here.

    Why do humans live as long as they do? Since whales have literally tons more cells than humans, why don't they develop cancers at much higher rates than humans? What can the genetic trade-offs we observe in other organisms teach us about increasing human longevity? Will we eventually be able to put people into some kind of stasis? What is the state of such technology? What counts as being dead? How much brain damage can a person sustain before they're no longer the same person? Is lowering temperature the same thing as slowing time? What does it mean to turn organic tissue into "glass"? Would clones of me be the same person as me? How should we feel about death? What is "palliative" philosophy? Why are people generally supportive of curing diseases but less supportive of increasing human lifespan? Will humans as a species reach 2100 A.D.?

    Dr. Ariel Zeleznikow-Johnston is a neuroscientist at Monash University, Australia, where he investigates methods for characterising the nature of conscious experiences. In 2019, he obtained his PhD from The University of Melbourne, where he researched how genetic and environmental factors affect cognition. His research interests range from the decline, preservation, and rescue of cognitive function at different stages of the lifespan, through to comparing different people's conscious experience of colour. By contributing to research that clarifies the neurobiological, cognitive, and philosophical basis of what it is to be a person, he hopes to accelerate the development of medical infrastructure that will help prevent him and everyone else from dying. Read his writings on Substack, follow him on Bluesky or X / Twitter, email him at arielzj.phd@gmail.com, or learn more about him on his website.

    Further reading:

    • The Future Loves You: How and Why We Should Abolish Death, by Ariel Zeleznikow-Johnston

    Staff

    • Spencer Greenberg — Host / Director
    • Josh Castle — Producer
    • Ryan Kessler — Audio Engineer
    • Uri Bram — Factotum
    • WeAmplify — Transcriptionists
    • Igor Scaldini — Marketing Consultant

    Music

    • Broke for Free
    • Josh Woodward
    • Lee Rosevere
    • Quiet Music for Tiny Robots
    • wowamusic
    • zapsplat.com

    Affiliates

    • Clearer Thinking
    • GuidedTrack
    • Mind Ease
    • Positly
    • UpLift
    1 hr 14 min