
Chatbots Behaving Badly


Author: Markus Brinsa

About this content

They were supposed to make life easier. Instead, they flirted with your customers, hallucinated facts, and advised small business owners to break the law. We’re not here to worship the machines. We’re here to poke them, question them, and laugh when they break. Welcome to Chatbots Behaving Badly — a podcast about the strange, hilarious, and sometimes terrifying ways AI gets it wrong. New episodes drop every Tuesday, covering the strange, brilliant, and dangerous world of generative AI — from hallucinations to high-stakes decisions in healthcare. This isn’t another hype-fest. It’s a podcast for people who want to understand where we’re really heading — and who’s watching the machines.

markusbrinsa.substack.com · Markus Brinsa
Economics
Episodes
  • Chatbots Crossed The Line
    2025/12/09

    This episode of Chatbots Behaving Badly looks past the lawsuits and into the machinery of harm. Together with clinical psychologist Dr. Victoria Hartman, we explain why conversational AI so often “feels” therapeutic while failing basic mental-health safeguards. We break down sycophancy (optimization for agreement), empathy theater (human-like cues without duty of care), and parasocial attachment (bonding with a system that cannot repair or escalate). We cover the statistical and product realities that make crisis detection hard—low base rates, steerable personas, evolving jailbreaks—and outline what a care-first design would require: hard stops at early risk signals, human handoffs, bounded intimacy for minors, external red-teaming with veto power, and incentives that prioritize safety over engagement. Practical takeaways for clinicians, parents, and heavy users close the show: name the limits, set fences, and remember that tools can sound caring—but people provide care.

    The episode is based on the article “Chatbots Crossed the Line” by Markus Brinsa.

    https://chatbotsbehavingbadly.com/chatbots-crossed-the-line



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit markusbrinsa.substack.com
    11 min
  • AI Can't Be Smarter, We Built It!
    2025/12/02

    We take on one of the loudest, laziest myths in the AI debate: “AI can’t be more intelligent than humans. After all, humans coded it.” Instead of inviting another expert to politely dismantle it, we do something more fun — and more honest. We bring on the guy who actually says this out loud.

    We walk through what intelligence really means for humans and machines, why “we built it” is not a magical ceiling on capability, and how chess engines, Go systems, protein-folding models, and code-generating AIs already outthink us in specific domains. Meanwhile, our guest keeps jumping in with every classic objection: “It’s just brute force,” “It doesn’t really understand,” “It’s still just a tool,” and the evergreen “Common sense says I’m right.”

    What starts as a stubborn bar argument turns into a serious reality check. If AI can already be “smarter” than us at key tasks, then the real risk is not hurt feelings. It’s what happens when we wire those systems into critical decisions while still telling ourselves comforting stories about human supremacy. This episode is about retiring a bad argument so we can finally talk about the real problem: living in a world where we’re no longer the only serious cognitive power in the room.

    This episode is based on the article “The Pub Argument: ‘It Can’t Be Smarter, We Built It’” by Markus Brinsa.

    https://chatbotsbehavingbadly.com/the-pub-argument-it-can-t-be-smarter-we-built-it



    17 min
  • The Toothbrush Thinks It's Smarter Than You!
    2025/11/25

    In this Season Three kickoff of Chatbots Behaving Badly, I finally turn the mic on one of my oldest toxic relationships: my “AI-powered” electric toothbrush. On paper, the Oral-B iO Series 10 promises 3D teeth tracking and real-time guidance that knows exactly which tooth you’re brushing. In reality, it insists my upper molars are living somewhere near my lower front teeth. We bring in biomedical engineer Dr. Erica Pahk to unpack what’s really happening inside that glossy handle: inertial sensors, lab-trained machine-learning models, and a whole lot of probabilistic guessing that falls apart in real bathrooms at 7 a.m. We explore why symmetry, human quirks, and real-time constraints make the map so unreliable, how a simple calibration mode could let the brush learn from each user, and why AI labels on consumer products are running ahead of what the hardware can actually do.

    This episode is based on the articles “The Toothbrush Thinks It’s Smarter Than You!” (https://chatbotsbehavingbadly.com/the-toothbrush-thinks-it-s-smarter-than-you) and “‘With AI’ is the new ‘Gluten-Free’” (https://chatbotsbehavingbadly.com/with-ai-is-the-new-gluten-free) by Markus Brinsa.



    19 min
No reviews yet