The People's AI: The Decentralized AI Podcast

Author: Jeff Wilser

Overview

Who will own the future of AI? The giants of Big Tech? Maybe. But what if the people could own AI, not the Big Tech oligarchs? That is the promise of decentralized AI, and this is the podcast for in-depth conversations on topics like decentralized data markets, on-chain AI agents, decentralized AI compute (DePIN), AI DAOs, and crypto + AI. Hosted by Jeff Wilser, a veteran tech journalist (WIRED, TIME, CoinDesk), host of the "AI-Curious" podcast, and lead producer of Consensus' "AI Summit." Season 3, presented by Vana.

© 2026 The People's AI: The Decentralized AI Podcast
Episodes
  • The Upside of AI and Data: How We Save More Lives, Build a Better World
    2026/02/25

    What if the next life-saving medical breakthrough isn’t a brand-new drug, but an old generic hiding in plain sight, waiting to be matched to the right disease?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore the upside of AI and data when used to solve consequential problems: from AI drug discovery and drug repurposing, to ambient AI in clinical workflows, to climate science and wildfire prevention, to the often-overlooked importance of data portability and health data interoperability.

    Key moments

    • [00:00:00] A rare-disease crisis becomes a roadmap for a new model of discovery with Dr. David Fajgenbaum
    • [00:02:00] Why this episode focuses on the promise of AI and richer, more granular data
    • [00:06:00] The incentives problem: why there’s little profit in finding new uses for generic drugs
    • [00:10:00] Every Cure’s approach: scanning the world’s knowledge to score drug–disease matches at scale
    • [00:11:00] Dr. İlkay Altıntaş on turning data at scale into scientific insights, faster
    • [00:13:00] Wearables and digital biomarkers: what Oura-style data revealed during COVID-era research
    • [00:17:00] Personalized medicine, dosage, and the return of tailored treatment through AI assistance
    • [00:18:00] Wildfire AI and disaster resilience: integrating fragmented data to predict risk and act earlier
    • [00:26:00] Dr. Marschall Runge on the healthcare talent crunch and what AI changes in practice
    • [00:27:00] Ambient AI / AI medical scribe: why clinicians embrace it and what it frees up
    • [00:30:00] Interoperability: why health records still don’t talk, and what AI can and can’t fix
    • [00:33:00] Data portability, explained with Art Abal: why “your data should follow you” is still rare
    • [00:35:00] The most “locked” data today: health trackers and social platforms, and why it matters
    • [00:38:00] Competition, innovation, and antitrust: how data silos shape who gets to build
    • [00:42:00] Surprising matches: examples like Botox for depression and lidocaine around tumors
    • [00:45:00] A provocative future: early diagnosis at home, continuous signals, and faster intervention

    Guests

    • Dr. David Fajgenbaum — Co-founder and President, Every Cure
    • Dr. İlkay Altıntaş — Chief Data Science Officer, San Diego Supercomputer Center (SDSC)
    • Dr. Marschall Runge — Author, The Great Healthcare Disruption
    • Art Abal — Co-founder, Vana

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership, where individuals, not corporations, govern their own data and share the value it creates. Learn more at Vana.org.

    43 min
  • The Robots Are Already Here—The Data Gap Is What’s Holding Them Back
    2026/02/04

    What happens when robots stop looking like industrial machines—and start looking (and even feeling) human? And if “replicants” become plausible within our lifetimes, what would it take to get there… and what might it break along the way?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore the robot revolution from three angles: what robots can actually do today (quietly, at scale), what’s likely in the near-term (especially in warehouses, logistics, healthcare, and elder care), and what the more radical futures imply—humanoids, “fleshbots,” and the thorny question of rights and personhood.

    A through-line across every conversation: the hidden constraint isn’t just hardware or dexterity—it’s data. Robotics doesn’t have an LLM-sized training corpus, and that gap shapes everything from progress timelines to privacy concerns and labor dynamics. We also dig into an under-discussed limiter: power consumption, and why energy efficiency may quietly govern how ubiquitous robots can become.

    Guests

    • Thomas Frey — Futurist (former IBM engineer)
    • Dr. Aniket Bera — Director of the IDEAS Lab at Purdue University
    • Jeff Mahler — Co-founder & CTO, Ambi Robotics

    What we cover

    • Why most impactful robots won’t look humanoid (at least at first)
      Specialized machines—crane-like systems, warehouse sorters, mobile carts—are already delivering value because they can be engineered for reliability in constrained environments.
    • The robots already among us (even if we don’t notice them)
      Warehousing and supply chain, recycling and waste sorting, mobile delivery systems, and surgical robotics are all expanding—often out of public view.
    • Humanoid robots: where they might actually make sense
      Homes, hospitals, assisted living, and caregiving settings—places where human spaces and human expectations matter—may be the earliest “real” markets.
    • Robots in science and medicine: the bullish case
      Lab automation, drug discovery loops, high-throughput testing, and more precise (and potentially remote) surgical procedures could be some of the most meaningful gains.
    • The true bottleneck: the robot data gap
      LLMs feast on web-scale text. Robots need massive volumes of real-world interaction data—vision, touch, force, motion, and the consequences of actions.
    • How robot companies may collect data (and what that implies)
      Motion-capture / imitation learning (wearables that mirror human movement), teleoperation (“humans in the loop” controlling robots remotely), simulation, and deployment flywheels that generate production data.
    • Privacy + labor: the coming debate
      If robots learn from human environments and human demonstrations, who owns that data—and who gets paid for producing it?
    • A final irony: why humanoids might win more share than we expect
      We have endless data of humans doing tasks—videos, demonstrations, routines—so humanoid form factors may benefit from transfer learning advantages, even if they’re not mechanically optimal.

    About Vana

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership—where individuals, not corporations, govern their own data and share the value it creates.

    Learn more at Vana.org.

    43 min
  • AI’s Original Sin: Training on Stolen Work
    2026/01/21

    What happens when AI gets smarter by quietly consuming the work of writers, artists, and publishers—without asking, crediting, or paying? And if the “original sin” is already baked into today’s models, what does a fair future look like for human creativity?

    In this episode, we examine the fast-moving collision between generative AI and copyright: the lived experience of authors who feel violated, the legal logic behind “fair use,” and the emerging battle over whether the real infringement is training—or the outputs that can mimic (or reproduce) protected work.

    What we cover

    • A writer’s gut-level reaction to AI training on her books—and why it feels personal, not merely financial. (00:00:00–00:02:00)
    • Pirate sites as the prequel to the AI era: how “free library” scams evolved into training data pipelines. (00:04:00–00:08:00)
    • The market-destruction fear: if models can spin up endless “sequels,” what happens to the livelihood—and identity—of authors? (00:10:00–00:12:30)
    • The legal landscape: why some courts are treating training as fair use, and how that compares to the Google Books precedent. (00:13:00–00:16:30)
    • Two buckets of lawsuits: (1) training as infringement vs. fair use, and (2) outputs that may be too close to copyrighted works (lyrics, Darth Vader-style images, etc.). (00:17:00–00:20:30)
    • Consent vs. compensation: why permission-based regimes might make AI worse (and messy to administer), and why “everyone gets paid” may be mathematically underwhelming for individual creators. (00:21:00–00:25:00)
    • The “archery” thought experiment: should machines be allowed to “learn from books” the way humans do—and where the analogy breaks. (00:26:00–00:29:30)
    • The licensing paradox: if training is fair use, why are AI companies signing licensing deals—and could this be a strategy to “pull up the ladder” against future competitors? (00:30:00–00:33:30)
    • Medium’s blunt framework: the 3 C’s—consent, credit, compensation—and why the fight may be about leverage and power as much as law. (00:34:00–00:43:00)
    • A bigger, scarier question: if AI becomes genuinely great at novels and storytelling, how do we preserve the human spark—and do we risk normalizing a “kleptocracy” of culture? (00:49:00–00:53:00)

    Guests

    • Rachel Vail — Book author (children’s + YA)
    • Mark Lemley — Director, Stanford Program in Law, Science and Technology
    • Tony Stubblebine — CEO, Medium

    Presented by Vana Foundation.

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    If this one sparked a reaction—share it with a writer friend, a founder building in AI, or anyone who thinks “fair use” is a settled question.

    50 min
No reviews yet.