Episodes

  • The Hidden, Life and Death Stakes of Data Portability in Health Care
    2026/04/16

    What if the future of AI in healthcare depends less on better models and more on whether patients can actually access their own data?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore why health data portability is not just a bureaucratic headache, but a foundational issue for better care, better research, and better AI. We begin with the story of Liz Salmi, who discovered just how difficult it was to access and move her own medical records after years of treatment for brain cancer. That experience became the starting point for a bigger conversation about patient rights, siloed health systems, and the real-world consequences of inaccessible data.

    From there, we examine how better access to health records can help patients catch errors, ask better questions, and become more active participants in their own care. We also look at the larger implications for medicine itself: how fragmented data limits research, weakens AI models, and slows the development of more personalized treatments.

    We then dig into the idea of digital twins in healthcare, with insights from Jim St. Clair, Reinhard C. Laubenbacher, Ph.D., and Dr. Matthew DeCamp. Together, they help explain how digital models of the body could eventually support more precise diagnostics, treatment planning, and preventive care, but only if the underlying data is portable, usable, and governed in ways that respect privacy and patient ownership.

    It is a conversation about medical records, interoperability, digital twins, precision medicine, and the broader question of who controls health data in an AI-driven future.

    Topics covered:

    • Liz Salmi’s story of navigating brain cancer and inaccessible medical records
    • Why patient access to records can improve care and reduce errors
    • The role of data portability in healthcare innovation
    • How siloed data weakens AI models and medical research
    • What digital twins in medicine actually are, and how they could work
    • Why personalized medicine depends on better, more connected data systems
    • The tension between privacy, access, and patient ownership of data

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership, where individuals, not corporations, govern their own data and share the value it creates. Learn more at Vana.org.

    44 min
  • The 10 Biggest Questions on the Future of AI | Jobs, AGI, Deepfakes and More
    2026/03/16

    What happens when the biggest questions about AI stop being theoretical and start shaping jobs, education, truth, power, and even what it means to be human?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore ten of the biggest questions on the future of AI. We examine whether AI will create abundance or accelerate job displacement, whether it will improve education or weaken critical thinking, and how societies should think about AI safety, misinformation, deepfakes, human relationships, power dynamics, AGI, and creativity. Rather than offering one simple answer, this conversation maps the major tensions that will define the next phase of AI.

    Key moments:

    [00:00:00] Steve Brown frames AI as a transition into a possible post-work era of service and exploration
    [00:02:17] Question 1: what AI could mean for jobs, labor, and the economy
    [00:05:25] Kevin Surace argues AI is driving the cost of content creation and knowledge work toward zero
    [00:10:24] Derek Rydall on why both optimism and disruption may be true, depending on timing
    [00:12:15] Question 2: is AI on an exponential path or approaching a limit
    [00:14:09] Question 3: how AI could reshape education, homework, testing, and personalized learning
    [00:17:18] Why higher education may need to rethink curriculum, pedagogy, and AI use in the classroom
    [00:20:25] Derek Rydall’s warning about cognitive atrophy and using AI as a crutch
    [00:22:58] Question 4: how to think about AI safety, guardrails, and real-world risks
    [00:25:30] James Bellingham on AI, cybersecurity, economic threats, and why misuse matters more than sci-fi scenarios
    [00:30:11] Question 5: how AI companions, assistants, and home robots may affect human relationships
    [00:32:01] Question 6: AI power dynamics, inequality, sovereignty, and who benefits most
    [00:34:11] The geopolitical race for AI power and why AI capability may concentrate in a few countries and companies
    [00:37:29] Derek Rydall on AI as both a force for concentration and a tool for individual leverage
    [00:40:00] Question 7: what happens if AI reaches AGI or superintelligence
    [00:43:19] Question 8: misinformation, deepfakes, and navigating a world where synthetic media gets harder to detect
    [00:45:42] Question 9: how AI may change human creativity, cognition, and identity
    [00:51:17] Question 10: the unknown unknowns, and why everyone needs to help shape the future we want

    Guests:

    Steve Brown — AI Futurist
    Kevin Surace — AI Futurist
    Derek Rydall — Author, A Whole New Human
    James Bellingham — Executive Director, Institute for Assured Autonomy (IAA) at Johns Hopkins

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership, where individuals, not corporations, govern their own data and share the value it creates. Learn more at Vana.org.

    55 min
  • The Upside of AI and Data: How We Save More Lives, Build a Better World
    2026/02/25

    What if the next life-saving medical breakthrough isn’t a brand-new drug, but an old generic hiding in plain sight, waiting to be matched to the right disease?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore the upside of AI and data applied to consequential problems: AI drug discovery and drug repurposing, ambient AI in clinical workflows, climate science and wildfire prevention, and the often-overlooked importance of data portability and health data interoperability.

    Key moments

    • [00:00:00] A rare-disease crisis becomes a roadmap for a new model of discovery with Dr. David Fajgenbaum
    • [00:02:00] Why this episode focuses on the promise of AI and richer, more granular data
    • [00:06:00] The incentives problem: why there’s little profit in finding new uses for generic drugs
    • [00:10:00] Every Cure’s approach: scanning the world’s knowledge to score drug–disease matches at scale
    • [00:11:00] Dr. İlkay Altıntaş on turning data at scale into scientific insights, faster
    • [00:13:00] Wearables and digital biomarkers: what Oura-style data revealed during COVID-era research
    • [00:17:00] Personalized medicine, dosage, and the return of tailored treatment through AI assistance
    • [00:18:00] Wildfire AI and disaster resilience: integrating fragmented data to predict risk and act earlier
    • [00:26:00] Dr. Marschall Runge on the healthcare talent crunch and what AI changes in practice
    • [00:27:00] Ambient AI / AI medical scribe: why clinicians embrace it and what it frees up
    • [00:30:00] Interoperability: why health records still don’t talk, and what AI can and can’t fix
    • [00:33:00] Data portability, explained with Art Abal: why “your data should follow you” is still rare
    • [00:35:00] The most “locked” data today: health trackers and social platforms, and why it matters
    • [00:38:00] Competition, innovation, and antitrust: how data silos shape who gets to build
    • [00:42:00] Surprising matches: examples like Botox for depression and lidocaine around tumors
    • [00:45:00] A provocative future: early diagnosis at home, continuous signals, and faster intervention

    Guests

    • Dr. David Fajgenbaum — Co-founder and President, Every Cure
    • Dr. İlkay Altıntaş — Chief Data Science Officer, San Diego Supercomputer Center (SDSC)
    • Dr. Marschall Runge — Author, The Great Healthcare Disruption
    • Art Abal — Co-founder, Vana

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership, where individuals, not corporations, govern their own data and share the value it creates. Learn more at Vana.org.

    43 min
  • The Robots Are Already Here—The Data Gap Is What’s Holding Them Back
    2026/02/04

    What happens when robots stop looking like industrial machines—and start looking (and even feeling) human? And if “replicants” become plausible within our lifetimes, what would it take to get there… and what might it break along the way?

    In this episode of The People’s AI, presented by the Vana Foundation, we explore the robot revolution from three angles: what robots can actually do today (quietly, at scale), what’s likely in the near-term (especially in warehouses, logistics, healthcare, and elder care), and what the more radical futures imply—humanoids, “fleshbots,” and the thorny question of rights and personhood.

    A through-line across every conversation: the hidden constraint isn’t just hardware or dexterity—it’s data. Robotics doesn’t have an LLM-sized training corpus, and that gap shapes everything from progress timelines to privacy concerns and labor dynamics. We also dig into an under-discussed limiter: power consumption, and why energy efficiency may quietly govern how ubiquitous robots can become.

    Guests

    • Thomas Frey — Futurist (former IBM engineer)
    • Dr. Aniket Bera — Director of the IDEAS Lab at Purdue University
    • Jeff Mahler — Co-founder & CTO, Ambi Robotics

    What we cover

    • Why most impactful robots won’t look humanoid (at least at first)
      Specialized machines—crane-like systems, warehouse sorters, mobile carts—are already delivering value because they can be engineered for reliability in constrained environments.
    • The robots already among us (even if we don’t notice them)
      Warehousing and supply chain, recycling and waste sorting, mobile delivery systems, and surgical robotics are all expanding—often out of public view.
    • Humanoid robots: where they might actually make sense
      Homes, hospitals, assisted living, and caregiving settings—places where human spaces and human expectations matter—may be the earliest “real” markets.
    • Robots in science and medicine: the bullish case
      Lab automation, drug discovery loops, high-throughput testing, and more precise (and potentially remote) surgical procedures could be some of the most meaningful gains.
    • The true bottleneck: the robot data gap
      LLMs feast on web-scale text. Robots need massive volumes of real-world interaction data—vision, touch, force, motion, and the consequences of actions.
    • How robot companies may collect data (and what that implies)
      Motion-capture / imitation learning (wearables that mirror human movement), teleoperation (“humans in the loop” controlling robots remotely), simulation, and deployment flywheels that generate production data.
    • Privacy + labor: the coming debate
      If robots learn from human environments and human demonstrations, who owns that data—and who gets paid for producing it?
    • A final irony: why humanoids might win more share than we expect
      We have endless data of humans doing tasks—videos, demonstrations, routines—so humanoid form factors may benefit from transfer learning advantages, even if they’re not mechanically optimal.

    About Vana

    The People’s AI is presented by the Vana Foundation, supporting a new internet rooted in data sovereignty and user ownership—where individuals, not corporations, govern their own data and share the value it creates.

    Learn more at Vana.org.

    43 min
  • AI’s Original Sin: Training on Stolen Work
    2026/01/21

    What happens when AI gets smarter by quietly consuming the work of writers, artists, and publishers—without asking, crediting, or paying? And if the “original sin” is already baked into today’s models, what does a fair future look like for human creativity?

    In this episode, we examine the fast-moving collision between generative AI and copyright: the lived experience of authors who feel violated, the legal logic behind “fair use,” and the emerging battle over whether the real infringement is training—or the outputs that can mimic (or reproduce) protected work.

    What we cover

    • A writer’s gut-level reaction to AI training on her books—and why it feels personal, not merely financial. (00:00:00–00:02:00)
    • Pirate sites as the prequel to the AI era: how “free library” scams evolved into training data pipelines. (00:04:00–00:08:00)
    • The market-destruction fear: if models can spin up endless “sequels,” what happens to the livelihood—and identity—of authors? (00:10:00–00:12:30)
    • The legal landscape: why some courts are treating training as fair use, and how that compares to the Google Books precedent. (00:13:00–00:16:30)
    • Two buckets of lawsuits: (1) training as infringement vs. fair use, and (2) outputs that may be too close to copyrighted works (lyrics, Darth Vader-style images, etc.). (00:17:00–00:20:30)
    • Consent vs. compensation: why permission-based regimes might make AI worse (and messy to administer), and why “everyone gets paid” may be mathematically underwhelming for individual creators. (00:21:00–00:25:00)
    • The “archery” thought experiment: should machines be allowed to “learn from books” the way humans do—and where the analogy breaks. (00:26:00–00:29:30)
    • The licensing paradox: if training is fair use, why are AI companies signing licensing deals—and could this be a strategy to “pull up the ladder” against future competitors? (00:30:00–00:33:30)
    • Medium’s blunt framework: the 3 C’s—consent, credit, compensation—and why the fight may be about leverage and power as much as law. (00:34:00–00:43:00)
    • A bigger, scarier question: if AI becomes genuinely great at novels and storytelling, how do we preserve the human spark—and do we risk normalizing a “kleptocracy” of culture? (00:49:00–00:53:00)

    Guests

    • Rachel Vail — Book author (children’s + YA)
    • Mark Lemley — Director, Stanford Program in Law, Science and Technology
    • Tony Stubblebine — CEO, Medium

    Presented by Vana Foundation.

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    If this one sparked a reaction—share it with a writer friend, a founder building in AI, or anyone who thinks “fair use” is a settled question.

    50 min
  • Generation Generative: Raising Kids with AI “Friends” in a World of Data Extraction and Bias
    2026/01/07

    What happens when a “kid-friendly” AI bedtime story turns racy—inside your own car?

    In this episode of The People’s AI (presented by the Vana Foundation), we explore “Generation Generative”: how kids are already using AI, what the biggest risks really are (from inappropriate content to emotional manipulation), and what practical parenting looks like when the tech is everywhere—from smart speakers to AI companions.

    We hear from Dr. Mhairi Aitken (The Alan Turing Institute) on why children’s voices are largely missing from AI governance, Dr. Sonia Tiwari on smart toys and early-childhood AI characters, and Dr. Michael Robb (Common Sense Media) on what his research is finding about teens and AI companions—plus a grounded, parent-focused conversation with journalist (and parent) Kate Morgan.

    Takeaways

    • Kids often understand AI faster—and more ethically—than adults assume (especially around fairness and bias).
    • The “AI companion” category is different from general chatbots: it’s designed to feel personal, and that can be emotionally sticky (and potentially manipulative).
    • Guardrails are inconsistent, age assurance is weak, and “safe by default” still isn’t a safe assumption.
    • The long game isn’t just content risk—it’s intimacy + data: systems that learn a child’s inner life over years may shape identity, relationships, and worldview.
    • Parents don’t need perfection—but they do need ongoing, low-drama conversations and some shared rules.

    Guests

    • Dr. Michael Robb — Head of Research, Common Sense
      https://www.commonsensemedia.org/bio/michael-robb
    • Dr. Sonia Tiwari — Children’s Media Researcher
      https://www.linkedin.com/in/soniastic/
    • Dr. Mhairi Aitken — Senior Ethics Fellow, The Alan Turing Institute
      https://www.turing.ac.uk/people/research-fellows/mhairi-aitken
    • Kate Morgan — Journalist

    Presented by the Vana Foundation

    Vana supports a new internet rooted in data sovereignty and user ownership—so individuals (not corporations) can govern their data and share in the value it creates. Learn more at vana.org.

    51 min
  • AI and Life After Death: Griefbots, Digital Ghosts, and the New Afterlife Economy
    2025/12/17

    Can AI help us grieve, or does it blur the line between comfort and delusion in ways we’re not ready for?

    In this episode of The People’s AI, we explore the rise of grief tech: “griefbots,” AI avatars, and “digital ghosts” designed to simulate conversations with deceased loved ones. We start with Justin Harrison, founder of You, Only Virtual, whose near-fatal motorcycle accident and his mother’s terminal cancer diagnosis led him to build a “Versona,” a virtual version of a person’s persona. We dig into how these systems are trained from real-world data, why “goosebump moments” matter more than perfect realism, and what it means when AI inevitably glitches or hallucinates.

    Then we zoom out with Jed Brubaker, director of The Identity Lab at CU Boulder, to look at digital legacy and the design principles that should govern grief tech, including avoiding push notifications, building “sunsets,” and confronting the risk of a “second loss” if a platform fails.

    Finally, we speak with Dr. Elaine Kasket, cyberpsychologist and counselling psychologist, about the psychological reality that grief is idiosyncratic and not scalable, the dangers of grief policing, and the deeper question beneath it all: who controls our data, identity, and access to memories after death.

    In this episode

    • Justin Harrison’s origin story and the creation of a “Versona”
    • What griefbots are, how they’re trained, and why fidelity is hard
    • The ethics: dependence, delusion risk, and “second loss”
    • Consent, rights, and the economics of data after death
    • Cultural attitudes toward death and why Western discomfort shapes the debate
    • A provocative question: if relationships persist digitally, what does “dead” even mean?

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    53 min
  • The Invisible (and Underpaid) Data Workers Behind the "Magic" of AI
    2025/12/03

    Who are the invisible human data-workers behind the “magic” of AI, and what does their work really look like?

    In this episode of The People’s AI, presented by Vana, we pull back the curtain on AI data labeling, ghost work, and content moderation with former data worker and organizer Krystal Kauffman and AI researcher Graham Morehead. We hear how low-paid workers around the world train large language models, power RLHF safety systems, and scrub the worst content off the internet so the rest of us never see it.

    We trace the journey from early data labeling projects and Amazon Mechanical Turk to today’s global workforce of AI data workers in the US, Latin America, Kenya, India, and beyond. We talk about trauma, below-minimum-wage pay, and the ethical gray zones of labeling surveillance imagery and moderating violence. We also explore how workers are organizing through projects like the Data Workers Inquiry at the Distributed AI Research Institute (DAIR), and why data sovereignty and user-owned data are part of the long-term solution.

    Along the way, we ask a simple question with complicated answers: if AI depends on human labor, what do those humans deserve?

    Timestamps:

    • 0:02 – Krystal’s life as an AI data worker and the “10 cents a minute” rule
    • 2:40 – What is data labeling, and why AI can’t exist without it
    • 6:20 – RLHF, safety, and the hidden workforce grading AI outputs
    • 9:53 – Amazon Mechanical Turk and building Alexa, image datasets, and more
    • 14:42 – Labeling border crossings and the ethics of unknowable end uses
    • 25:00 – Kenyan content moderators, trauma, and extreme exploitation
    • 32:09 – Turker organizing, Turker-run ratings, and early resistance
    • 33:12 – DAIR, the Data Workers Inquiry, and workers investigating their own workplaces
    • 36:43 – Unionization, political pressure, and reasons for hope
    • 41:05 – Why humans will keep “labeling” AI in everyday life for years to come

    The People’s AI is presented by Vana, which is supporting the creation of a new internet rooted in data sovereignty and user ownership. Vana’s mission is to build a decentralized data ecosystem where individuals—not corporations—govern their own data and share in the value it creates.

    Learn more at vana.org.

    45 min