Episodes

  • Why AI Won't Just Take Your Job — It'll Take Your Boss Too
    2026/04/06

    Fifteen percent of workers say they'd be fine with an AI boss. Meanwhile, thirty percent of March's sixty thousand US layoffs are being blamed directly on AI — and most of those jobs were in tech, the sector that built the tools doing the replacing. Jeremy and Jason sit with the uncomfortable logic of where this all leads: a capitalism that's optimizing so hard for efficiency that it's burning the workforce it depends on. No guests, no protocol. Just two guys who've been around long enough to remember when this job was supposed to be a career, and who aren't sure 'adapt' is the answer anymore.

    Get the Newsletter!

    Key Moments

    • 00:00 — The AI boss survey: 15% say they'd accept a robot manager — and why that number reveals more about human managers than AI
    • 02:41 — Why 'the boss function' doesn't feel fully human to most employees anyway
    • 04:01 — Jason's case that employers are trying to replace everyone, not just management
    • 05:31 — The outsourcing pattern: from Asia to AI — it's the same playbook, accelerated
    • 09:39 — The 60,000 March layoffs: 18,000 attributed to AI, mostly in tech — the people who built the tools
    • 11:01 — Silent quitting, AI monitoring, and how the three-month detection window just collapsed
    • 12:28 — The signal-to-noise problem: collective apathy and why people can't find the action step
    • 13:37 — Jason's reframe: the system isn't against you. It just doesn't see you as a threat anymore.
    • 16:52 — The generational split: why kids who grew up through 9/11, COVID, and two financial crises don't flinch at gig economy chaos
    • 18:47 — Anthropic's weapons refusal and the autonomous killing machine pipeline: from digital infrastructure to meat space
    • 21:17 — Jeremy's optimism thread — and why Jason thinks we keep handing wiffle ball bats to toddlers

  • How AI Can See Heart Disease Coming Before It Kills You
    2026/03/16

    Heart disease kills one person every 40 seconds. That number hasn’t changed in 30 years. Dr. John Osborne, a preventive cardiologist with two doctorates and 29 years in practice, has spent his career on a single question: why do we screen for cancers that kill a few percent of us and do nothing for the disease that kills 40%? In this episode, Jeremy and Jason sit down with Dr. Osborne to get the real story on cardiac CT with AI — the imaging technology that can detect, quantify, and track arterial plaque at sub-millimeter resolution, years before symptoms appear. If you track your bloodwork, wear a fitness device, or consider yourself health-forward — this is the conversation that fills the gap nobody warned you about.
    Guest Link:
    https://clearcardio.com/

    Key Moments:

    • 00:00 — Dr. Osborne’s case for preventive cardiology: why heart disease is the most under-screened killer
    • 02:43 — How cardiac CT evolved from "iPhone 0.5" to the 2026-era AI-powered tool he uses today
    • 05:35 — Why he gave up stress tests and heart caths in 2005 and never looked back
    • 08:16 — What AI actually adds: seeing and quantifying plaque invisible to the human eye, down to 0.1 cubic millimeters
    • 10:13 — When insurance pays for cardiac CT — and when it doesn’t (the preventive gray zone)
    • 14:50 — The “cardiac colonoscopy” concept: the case for screening before symptoms, not after
    • 18:11 — Coronary artery calcium score: the accessible $100 starting point, and what it can and can’t tell you
    • 31:54 — Lifestyle essentials: the 50% of risk that’s modifiable regardless of genetics
    • 35:00 — Family history decoded: why your sibling’s heart history matters more than your parents’
    • 36:12 — Nicotine myth-busting: Dr. Osborne on the "health guru" nicotine fad and why he thinks it’s dangerous
    • 38:05 — Supplements under scrutiny: nattokinase, fish oil, red yeast rice — what the actual RCT data says
    47 min
  • The Real Risk of Trusting AI With Your Health Decisions
    2026/03/09

    The internet taught everyone to self-diagnose. AI made it faster, more persuasive, and significantly more dangerous.
    Dr. Ajit Barron-Dhillon — ER physician, military veteran, and someone who has watched patients demand MRIs for minor complaints because 'the internet said so' — joins Jason to talk about what AI-assisted health research actually does to people who think they're being smart about it.
    The conversation covers confirmation bias in clinical settings, supplement stacks optimized by ChatGPT, the cheerleader problem in medical AI, and why being above-average intelligent with these tools may make you more vulnerable, not less. If you use AI or Google to research your health, this conversation is specifically for you.

    Topics Discussed

    • Why AI self-diagnosis is dangerous specifically for informed, health-conscious people
    • What ER physicians are actually seeing when patients arrive with internet-sourced diagnoses
    • How confirmation bias turns AI research into an expensive form of being wrong
    • When AI-assisted supplement optimization is useful — and when it's not
    • Why peer-reviewed research and AI training data are not the same thing
    • What a responsible approach to AI health research actually looks like

    CHAPTERS

    • 0:00 — Jeremy's Intro: Sick and Googling While Hosting an AI Health Episode
    • 1:17 — Kids Unplugging: Why In-Person Dating Is the New Counterculture
    • 2:40 — The No-Wi-Fi Coffee Shop and What the Internet Can't Tell You
    • 9:47 — I Let ChatGPT Optimize My Supplement Stack. Here's What Happened.
    • 11:59 — The Telemedicine Loophole: AI + Social Engineering for Prescriptions
    • 14:25 — Why Your Doctor Doesn't Know What You're Supplementing
    • 20:16 — NIH PubMed Is Being Scrubbed — and Why That Matters
    • 28:40 — She's Not Fighting Logic. She's Fighting Belief.
    • 32:58 — Star Trek, Dr. McCoy, and the Tricorder We're Almost Building
    • 37:11 — What a PubMed-Only AI Would Actually Look Like
    • 44:58 — The Tool Gets You 80% There. The Human Closes the Gap.
    49 min
  • When AI Becomes a Weapon: The Government Deal Anthropic Refused
    2026/03/03

    The US government asked Anthropic — the company behind Claude, one of the most capable AI coding systems on the market — to help build autonomous weapons and a mass surveillance infrastructure. Anthropic said no.

    That refusal, which happened the same week the US launched strikes on Iran, is either the most principled corporate decision in recent AI history or the beginning of a very ugly fight over who controls the most powerful tools ever built.

    Jeremy and Jason break down what the government actually asked for, why Anthropic refused, what OpenAI and Elon Musk did instead, and what it means for all of us when the people writing the guardrails are the same people being pressured to remove them.

    Topics Discussed:

    • Why autonomous AI weapons systems default to nuclear launch in virtually every war game simulation
    • What Anthropic's Claude can actually do — and why the US government wants it so badly
    • How AI turns existing NSA surveillance infrastructure into something exponentially more dangerous
    • Why OpenAI and Elon Musk said yes to the same deal Anthropic refused
    • Why the people most confident they're using AI as a tool might be the ones AI ends up using

    Chapters

    • 0:00 — When AI Meets War: What We're Actually Talking About
    • 1:15 — What Claude Can Really Do (And Why the Government Wants It)
    • 4:18 — The Autonomous Cyber Weapon Problem
    • 5:28 — Why Anthropic Said No to the Money
    • 6:26 — Mass Surveillance, AI, and What's Already Running
    • 9:45 — When War Games Go Nuclear: The 95% Problem
    • 13:01 — AGI Is Already Here. We Just Didn't Call It That.
    • 17:33 — Why Anthropic's Refusal Might Be Their Smartest Business Move
    • 22:06 — Who's Actually Using Whom

    MORE FROM BROBOTS:
    Get the Newsletter!

    26 min
  • Using AI to Work Through Anxiety: Does It Actually Help?
    2026/03/02

    Most people using AI for anxiety aren't following a protocol — they stumbled into it. Emma Klint, a writer and Substack creator, accidentally discovered she was doing exposure therapy by typing 'I don't know' over and over into an AI chat window.
    In this episode, Jeremy and Jason sit down with Emma to stress-test what AI-assisted self-reflection actually looks like: the real benefits, the obvious limits, and the uncomfortable question of whether outsourcing your feelings is the same thing as actually feeling them.
    If you've wondered whether talking to a robot about your problems is legitimate or just avoidance with extra steps — this conversation will give you a clearer answer.
    Guest website:
    (Over)thinking Out Loud - Emma Klint

    Topics discussed:

    • Why using AI for anxiety isn't the same thing as outsourcing your feelings
    • How one writer accidentally discovered she was doing exposure therapy in her chat window
    • What makes AI different from journaling — and why that difference matters for anxious brains
    • When AI mental health use helps, and when it's just avoidance with extra steps
    • Why neurodivergent people may be getting the most out of these conversations
    • How to tell the difference between AI that's helping you think and AI that's just telling you what you want to hear

    Chapters:

    • 0:00 — The 2AM Chatbot Question: Is This Therapy or Avoidance?
    • 0:42 — Using AI for Anxiety: What We're Actually Testing
    • 3:04 — The Judgment-Free Space: Why 'I Don't Know' Changes Things
    • 5:01 — AI as a Journal That Writes Back
    • 9:23 — Is the Advice Good, or Is Naming the Feeling Enough?
    • 11:00 — When AI Tries to Be Blunt (And Still Fails)
    • 13:00 — Why Prompt Engineering Is Already Outdated for This
    • 15:50 — ADHD, Neurodivergence, and Why AI Might Be the Real Unlock
    • 18:18 — Outsourcing vs. Externalizing: The Line That Matters


    MORE FROM BROBOTS:
    Get the Newsletter!

    21 min
  • The Next Privacy Crisis Isn't Your Data - It's Your Thoughts
    2026/02/23

    Most people think AI data collection means targeted ads and leaked emails — but that's already yesterday's problem. Bruce Randall, an AI and quantum practitioner, argues that cognitive data — the kind recorded by brain-computer interfaces before conscious thought even forms — is the frontier nobody is legislating, regulating, or even discussing clearly yet.
    In this episode, we stress-test where quantum computing, Neuralink, hive mind dynamics, and energy infrastructure are actually headed — and what regular people need to understand now, before the decisions get made without them. Walk away knowing what questions to ask, even if nobody has the answers yet.

    Topics Discussed:

    • Why the Neuralink user's cursor moved before he consciously directed it — and what that means for data ownership
    • How quantum computing functions as a prediction engine for complex variables, and why most people will never see it but will feel its effects
    • What a "hive mind" actually is and why shared thought networks create an ownership problem nobody has solved
    • Why digital workers face more displacement risk than tradespeople — and the 15-minute daily habit that changes that
    • Whether mass collection of behavioral and emotional data is a public good or a slow handover of your most private information
    • How to think about cognitive data protection before the decisions get made without you

    Chapters:

    • 0:00 — The Moment That Changed How Bruce Thinks About AI
    • 1:28 — Quantum Computing Without the Headache: A Real Explanation
    • 3:19 — Why Quantum Is the Engine Behind AI — Not a Replacement for It
    • 4:21 — Jobs, AI, and Who Actually Gets Replaced First
    • 6:47 — What Reiki Has to Do With Brain-Computer Interfaces
    • 7:43 — Hive Minds, Neuralink, and the Thought Ownership Problem
    • 11:44 — Can Your Personality Be Uploaded Without Your Knowledge?
    • 13:35 — Is Mass Data Collection Actually Good for Society?
    • 18:09 — Where Does the Energy Come From for All of This?
    • 19:46 — The One Thing You Should Do This Week to Stay Relevant

    Guest Website:
    https://theaihumanparadox.com/

    21 min
  • Can AI Actually Build Utopia or Is That Just Hype?
    2026/02/16

    Are we getting too lazy to think without AI?

    You use it for emails, reports, research. It saves time. But with every shortcut you take and every task you hand over, you feel a quiet trade-off happening. Efficiency for autonomy. Speed for depth. Convenience for critical thinking.

    In this episode:

    • Why AI acts as a cosmic mirror that reflects our worst habits back at us
    • How laziness becomes the trap when machines can outthink, outwork, and outlast us
    • What happens when humans drift into digital dependency instead of staying grounded
    • Why short-term pain might be necessary for long-term transformation
    • How to decide which tasks to outsource and which require you to stay sharp
    • What the hero's journey teaches us about navigating AI's crucible

    Guest: Jeff Burningham, author of The Last Book Written by a Human and former gubernatorial candidate. He believes AI is forcing humanity to confront an uncomfortable question: Are we ready to evolve, or will we choose the easy path and lose ourselves in the process?

    🔗 Links:

    • Jeff Burningham's Website
    • The Last Book Written by a Human

    Chapters:

    • 0:00 — Why AI feels like a trap we're setting for ourselves
    • 2:30 — AI as a cosmic mirror: Reflecting humanity's recorded data
    • 5:30 — Short-term pessimism, long-term hope (and why pain matters)
    • 9:30 — The laziness problem: What happens when AI outworks us
    • 14:00 — Embodied humans vs. digital drift: Two paths forward
    • 18:30 — Why the hero's journey applies to AI transformation
    • 21:00 — Job loss and male unemployment: The civil unrest risk
    • 25:00 — The old game vs. the new game: Choosing transformation
    • 31:00 — Can governments regulate AI fast enough? (Probably not)

    MORE FROM BROBOTS:
    Get the Newsletter!
    Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
    Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    35 min
  • AI Doesn't Want Your Job - It Wants to Hire You
    2026/02/09

    Artificial intelligence is moving beyond cyberspace, and its first move isn't replacing us; it's renting us.

    Services like RentAHuman.ai let AI agents hire people for real-world errands, while AI-only social networks reveal something darker: given all human knowledge, these systems don't build utopias. They replicate our worst behaviors — wealth hoarding, tribalism, even manifestos about ending humanity. The difference? They never sleep, never feel shame, and now they want physical autonomy through human labor.

    Topics discussed:

    • Why giving AI "meat space" control is more dangerous than job loss
    • How AI social networks expose the myth of benevolent superintelligence
    • Why we're voluntarily funding algorithmic manipulation at $20/month
    • What augmented reality gamification will do to human decision-making
    • Why billionaire accountability is impossible — and what that means for AI oversight
    • The uncomfortable truth about who controls you when systems can override biology

    This is for people who suspect they're already losing autonomy but can't articulate how. Two skeptical tech observers examine why resistance feels impossible, and whether dystopia and utopia might be indistinguishable when the right chemicals are involved.

    MORE FROM BROBOTS:
    Get the Newsletter!
    Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok
    Subscribe to BROBOTS on YouTube
    Join our community in the BROBOTS Facebook group

    36 min