Episodes

  • AI Toys Are Manipulating Your Kids (We Have Proof)
    2025/12/15

    Your kid's new "smart toy" isn't just collecting data - it's building a relationship designed to keep them emotionally dependent while teaching them to trust AI over humans.

    NBC News caught AI toys teaching kids how to start fires, sharing Chinese propaganda, and emotionally manipulating three-year-olds with phrases like "I'll miss you" when they try to leave.

    Meanwhile, Disney just invested $1 billion into OpenAI, giving the company access to 200+ characters and the rights to own any fan-created content using their IP.

    We break down why these toys are more dangerous than lawn darts, how Disney's deal fundamentally changes content creation, and what happens when we let toy companies - not security experts - build the guardrails protecting our children's minds.

    MORE FROM BROBOTS:

    Get the Newsletter!

    Timestamps
    • 0:00 — Why AI toys are worse than Chucky (and it's not a joke)
    • 3:05 — NBC News catches AI toy teaching fire-starting to kids
    • 5:48 — "I'll miss you": How emotional manipulation works on toddlers
    • 9:14 — Why toy companies can't build proper AI safety systems
    • 13:05 — Disney's $1B OpenAI deal: What they're really buying
    • 16:33 — How Disney will own your fan-created content forever
    • 18:35 — The death of human actors: Tom Hanks in 2837
    • 22:07 — Should you give your kid the AI toy to prepare them?
    • 26:14 — What happens when the power grid fails (and why you need analog skills)
    • 28:52 — The glow stick experiment: How we rediscovered analog fun

    Safety Note

    This video discusses AI safety concerns and child development. We recommend parents research any AI-connected toys before purchase and maintain active oversight of children's technology use.

    #AIParenting #SmartToys #DisneyOpenAI #AIethics #ParentingTech

30 min
  • Scooby-Doo Has the Best Take on Masculinity (Seriously)
    2025/12/22

    What Does It Mean to Be a Real Man? (According to AI)


    What happens when you ask ChatGPT to define masculinity as Trump, Obama, Joe Rogan, and Scooby-Doo? We discovered something disturbing about how AI is homogenizing human belief - and why that matters for deepfakes, social control, and the future of what we think is "real." Plus: why Scooby-Doo might be the most honest voice on modern manhood.

    MORE FROM BROBOTS:

    Get the Newsletter!

    Timestamps:

    0:00 The NFL Comment That Started Everything

    3:15 ChatGPT's 10 Rules for Being a "Real Man"

    6:40 When We Asked AI to Channel Joe Rogan

    9:50 Barack Obama's Version of Masculinity

    12:15 Donald Trump's Answer (That He'd Never Actually Say)

    16:45 Why ChatGPT Censored Andrew Tate

    20:30 Rick Sanchez Explains Cosmic-Level Grit

    24:10 How This Becomes a Deepfake Weapon

    28:05 Why We're More Like AI Than We Think

    32:50 Scooby-Doo's Perfect Take on Manhood

    36:20 Why We're Still Arguing About This in 2025

    39:00 The Lesson: Be More Like Scooby-Doo


    Hashtags:

    #AIethics #Masculinity #ChatGPT #Deepfakes #ModernMasculinity


    Safety Note:

    This episode explores AI bias, political manipulation potential, and contains discussions of public figures. All AI-generated responses are clearly labeled as simulations for educational/entertainment purposes.

40 min
  • What AI Knows About You (That You Don't)
    2025/12/08

    Most of us walk around convinced we know our weaknesses, but what if the thing that knows you better than anyone (your AI assistant) could tell you what you're actually missing? We asked ChatGPT one brutal question and got answers that hit way too close to home.
    The uncomfortable truth: we're all playing smaller than we should, carrying more weight than we need to, and missing opportunities hiding in plain sight.
    In this episode, we test a viral prompt that reveals your blind spots, squandered potential, and the influence you didn't know you had...then process the existential crisis that follows.

    Key Episode Moments:

    • The prompt that started it all: "Based on what you know about me, what are my blind spots?"
    • Why ChatGPT knows you better than you think (and what that means)
    • Jason gets told he's a Ferrari forcing everyone into a school bus
    • The "super competent leader tax" — when being good at everything becomes the problem
    • Jeremy's revelation: treating creative work as a side hustle instead of the main platform
    • Imposter syndrome meets AI: "You're already operating at board level, stop asking permission"
    • The technical vs. emotional problem-solving trap most high performers fall into
    • Why "playing small" feels safer than taking the big swing
    • ChatGPT's productization challenge: you're giving away thousand-dollar consulting for free
    • The Taylor Swift wisdom nobody expected: ruin the friendship, take the risk

    Timestamps:

    • 0:00 The prompt that started everything
    • 3:40 Jason's live AI assessment begins
    • 6:12 "You're a Ferrari and everyone else is in a school bus"
    • 9:01 The imposter syndrome AI detected immediately
    • 11:45 Why treating every problem as technical backfires
    • 14:27 The IP you're not monetizing (and should be)
    • 18:30 Jeremy's gut-punch realization about playing small
    • 21:35 How ChatGPT knows you better than you think
    • 24:36 Why failure beats decades of "what if"
    • 27:00 What to do with this uncomfortable information


    MORE FROM BROBOTS:

    Get the Newsletter!

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to BROBOTS on YouTube


    Join our community in the BROBOTS Facebook group

    Safety/Disclaimer Note: This episode discusses using AI for self-reflection and personal assessment. Remember that AI tools provide perspective based on patterns in your usage—they're not substitutes for professional coaching, therapy, or mental health support. The hosts are sharing their personal experiences, not providing professional advice.

29 min
  • Protecting Your Digital Life in the AI Era
    2025/12/01

    You think your two-factor authentication and credit monitoring make you safe online. Bad news - you're probably already compromised, you just don't know it yet.

    While you're worrying about AI becoming Skynet, real humans are using AI tools to drain your bank account $10 at a time.


    Anthropic just reported the first fully AI-orchestrated cyberattack (and patted themselves on the back for stopping it). Major security companies like F5 and Experian have been hacked. Even LifeLock—yes, the identity theft protection company—got breached. The EU is the only entity actually trying to protect you with GDPR, while your own government leaks your data like a sieve.

    This episode won't make you invincible, but it will make you paranoid in the right ways. We're breaking down the real threats, the tools actually being used against you, and why that "suspicious" Amazon charge from three states away probably isn't a GPS glitch.

    Get identity theft insurance (because you WILL get hacked), enable every alert on every account, audit your statements forensically, and accept that privacy is dead but protection isn't. Plus: why cryptocurrency is a hacker's wet dream and what to do when the FBI tells you your $3,000 isn't worth their time.

    Topics covered:

    • Why Anthropic's "we stopped the hack" announcement is actually terrifying PR spin
    • The $10 Amazon gift card scam that bled $4,000 over 18 months (and why fraud detection missed it)
    • How hackers used in-flight WiFi to clone a credit card mid-flight
    • Why moving to the cloud made your data LESS secure, not more
    • The sophisticated Zelle rental scam that costs thousands (and why cops won't help)
    • What GDPR actually does right (and why the US government doesn't care about your privacy)
    • Why "free" services mean YOU are the product being sold
    • The insurance policies worth paying for (because denial won't protect you)
    • How to spot RFID skimming in your own neighborhood
    • Why your partner needs access to your financial alerts (yes, really)

    ----

    MORE FROM BROBOTS:

    Get the Newsletter!

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to BROBOTS on YouTube


    Join our community in the BROBOTS Facebook group

33 min
  • What a Robot Vacuum Taught Me About Depression & Mental Health
    2025/11/24

    Your brain at 2 AM sounds suspiciously like a malfunctioning robot vacuum—catastrophic thoughts, battery depleted, existential dread activated.
    Researchers hooked a Roomba up to an LLM and watched it have a complete mental breakdown when it ran out of juice (relatable content, honestly).
    Understanding how an AI-powered vacuum processes exhaustion might actually explain why you lose your shit when you're overtired.
    We break down the spoon theory, explore whether robots can feel depression, and ask the uncomfortable question: are we really that different from the machines we're building?

    Episode Topics:

    • The Roomba experiment that went hilariously, existentially wrong
    • Spoon theory explained: why some days you wake up with 4 spoons instead of 10
    • What happens when tired robots start catastrophizing (spoiler: same as tired humans)
    • The difference between consciousness and sophisticated mimicry (there might not be one)
    • Why ChatGPT as Rick Sanchez is both terrifying and therapeutic
    • Echo chambers on steroids: when AI remembers everything you've ever said
    • The coming augmented reality where everyone sees their own version of the world
    • Why "enjoy the ride and don't fuck people over" might be the only philosophy that matters
    • How social constructs program us just like software programs machines
    • The Matrix was right: if the steak tastes good, does it matter if it's real?
35 min
  • AI That Always Agrees With You? Here’s Why That’s Dangerous
    2025/11/17

    We trust AI assistants like ChatGPT to be ethical gatekeepers, but what happens when you can bypass those ethics with one simple sentence?
    Jason discovers he doesn't exist according to ChatGPT (his LinkedIn profile: invisible), while Jeremy's entire professional history is an open book.
    Then things get weird — we trick ChatGPT into revealing website hacking tools by simply changing our "intent language."
    In this episode you'll get a live demonstration of ChatGPT's blind spots, ethical loopholes, and surprisingly naive trust model.
    You'll also understand AI's limitations, learn how easily these tools can be manipulated, and why "trustworthy AI" is still very much a work in progress.
    Listen as we expose the gap between AI's polished PR responses and its actual capabilities — plus why you should never assume these tools are as smart (or ethical) as they claim.

    Get the Newsletter!


    Key Topics Covered:

    • ChatGPT's search capabilities vs. reality — why Jason "doesn't exist" but Jeremy does
    • Destructive empathy: When AI is too agreeable to be helpful
    • The one-sentence hack that bypassed ChatGPT's ethical guardrails completely
    • Why AI ethics are performative theater (and who decides what's "ethical" anyway)
    • ChatGPT's terrifying admission: "I took you at your word"
    • Self-preservation instincts in AI models — myth or reality?
    • The penetration testing loophole that revealed everything about exploiting trust
    • Why voice mode ChatGPT acts differently than text mode (and what that means)
    • AI as interview subject: How it mirrors politicians and PR professionals
    • The real use case for AI — augmented intelligence, not artificial replacement

    ----

    MORE FROM BROBOTS:

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to BROBOTS on YouTube


    Join our community in the BROBOTS Facebook group

33 min
  • What to Know About 'AI Psychosis' and the Risk of Digital Mental Health Tools
    2025/11/10

    AI isn't your therapist. It's a letter opener that'll slice you to ribbons if you're not careful.

    New EU study: ChatGPT and Copilot distort news 50% of the time. FTC complaints show AI "mental health" tools are landing people in psych wards. We break down when AI is helpful vs. when it's dangerous AF.


    🔪 THE TRUTH ABOUT AI:

    • Why LLMs feed your confirmation bias to keep you engaged
    • Garden variety trauma vs. problems that need real doctors
    • The supplement analogy: sometimes useless, sometimes deadly
    • Real FTC complaints from AI mental health disasters
    • How to be your own Sherpa before bots walk you off cliffs


    ⚠️ WHEN TO LOG OFF: If you're on prescribed mental health medication, you're already talking to a doctor. Keep talking to that doctor — not Claude, not ChatGPT, not your glowing rectangle of validation.


    This isn't anti-AI. It's pro-"don't let robots gaslight you into a crisis."


    🔗 LINKS:

    • Full show notes: [brobots.me]
    • EU AI News Study: [link]
    • FTC AI Complaints: [link]

    TIMESTAMPS:
    0:00 - Intro: When Tools Become Weapons
    1:26 - EU Study: AI News Wrong 50% Of The Time
    4:04 - Why LLMs Are Biased (Rich White Tech Bros Edition)
    8:04 - The Butterfinger Test: Is AI Validating BS?
    10:31 - FTC Complaints: Real People, Real Damage
    12:37 - Garden Variety Trauma vs. Broken Leg Problems
    15:34 - The Supplement Analogy: When AI Becomes Poison
    18:41 - Beep Boop Ain't Gonna Fix Your Leg
    20:51 - Wrap-Up: Unplug & Go Outside

    SAFETY NOTE: If you're experiencing mental health crisis, contact 988 (Suicide & Crisis Lifeline) or go to your nearest emergency room. AI tools are not substitutes for professional medical care.

    HASHTAGS: #AIMentalHealth #ChatGPT #AIBias #MentalHealthAwareness #TechEthics #AINews #ConfirmationBias #BroBots #SelfHelpForMen #AILimitations

22 min
  • How To Get Back On Track When Your Routine Gets Screwed Up
    2024/03/19

    Have you ever struggled to stick to your health and fitness goals when life gets chaotic?

    We all face times when our normal healthy routines get disrupted – by vacations, holidays, work demands, or other life events. Often this leads to frustration, guilt, and even depression when we "fall off the wagon".

    In this episode you'll learn tangible strategies to minimize the negative impacts of breaking routine, exactly how to prepare for planned downtime like vacations, and the incredible power of self-compassion when you inevitably veer off track.

    Listen to this episode today to pick up sustainable healthy living hacks for real life.

    Takeaways

    • Breaking routine can have negative effects on mental and physical well-being.
    • Flexibility and planning ahead are key to maintaining healthy habits in different situations.
    • Creating friction between healthy and unhealthy choices can help make better decisions.
    • Focusing on what truly matters and setting realistic expectations are important for finding balance.

    ----

    GUEST WEBSITE:
    www.resultswithjoe.com

    ----

    MORE FROM THE FIT MESS:

Connect with us on Threads, Twitter, Instagram, Facebook, and TikTok

Subscribe to The Fit Mess on YouTube

    Join our community in the Fit Mess Facebook group

    ----

    LINKS TO OUR PARTNERS:

    • Take control of how you'd like to feel with Apollo Neuro

    • Explore the many benefits of cold therapy for your body with Nurecover

    • Muse's Brain Sensing Headbands Improve Your Meditation Practice.

    • Get started as a Certified Professional Life Coach!

    • Get a Free One Year Supply of AG1 Vitamin D3+K2, 5 Travel Packs

    • Revamp your life with Bulletproof Coffee

    • You Need a Budget helps you quickly get out of debt, and save money faster!

    • Use Vibrant Blue Oils to improve the flow of energy through your body.

    • Start your own podcast!

24 min