
The Psychology of Health


Author: Milan Toma


Overview

Each episode is a clear, accessible synthesis of research studies on timely and controversial health topics; no hot takes, no hype, just what actual science says. Hosted by Milan Toma, Ph.D., this podcast cuts through the noise. Instead of speculation and hearsay, you’ll get evidence-based insights on everything from sleep and weight gain to the anatomy of misinformation and the psychology behind public health debates. If you’re frustrated by the flood of opinions online and want to know what the research really shows, this is the show for you.

Categories: Hygiene & Healthy Living; Physical Illness & Disease
Episodes
  • Medical Education Must Teach AI Differently
    2026/04/14

    Artificial intelligence is rapidly moving into classrooms, clinics, and daily healthcare decision making, but much of the public conversation is built on a dangerous misunderstanding. Too often, people now treat artificial intelligence as if it simply means chatbots. In this episode, Dr. Milan Toma explains why that confusion matters and why healthcare professionals must learn to distinguish between conversational tools and task specific medical systems.

    This episode explores the long history of artificial intelligence in medicine, why chatbots are optimized for fluent language rather than true clinical understanding, and why strong performance on text based clinical vignettes should not be mistaken for real world diagnostic ability. Dr. Toma also examines the risks of artificial intelligence sycophancy, the danger of overfitting, the limits of accuracy as a metric, and how data leakage or hidden shortcuts can make weak systems look impressive during development.

    Most importantly, this is a conversation about education and patient safety. Healthcare professionals need more than basic exposure to artificial intelligence tools. They need to understand how different systems work, how they fail, how to evaluate claims critically, and why clinicians must work closely with developers before these tools are trusted in practice.

    The goal is not simply to teach people how to use artificial intelligence. It is to teach them how to question it, evaluate it, and apply it responsibly. The future of healthcare will include artificial intelligence, but safe healthcare depends on how well we teach people to understand it.

    37 min
  • The Overfitting Trap
    2026/04/02

    Introduction: A Tale of Two Rounds

    Every attending physician has seen the "Star Student" who can quote the New England Journal of Medicine verbatim but freezes when a patient doesn't follow the script. In this episode, we introduce Student A and Student B.

    • Student A (The Memorizer): They have a mental database of every practice vignette. They are fast, confident, and statistically "perfect" on paper.

    • Student B (The Thinker): They are slower. They visualize the blood flow, the cellular response, and the "why" behind the symptoms.

    We discuss why the current "Gold Rush" of Medical AI is accidentally scaling Student A to an industrial level, creating systems that look like geniuses in a lab but perform like novices in a clinic.

    In machine learning, overfitting is the statistical equivalent of "rote memorization." We break down the mechanics of how a model loses the forest for the trees.

    How do you "interview" an AI to see if it actually knows its stuff? You look at its Learning Curves. We explain how to read these graphs like a clinical EKG.

    • The Divergence Warning: When training accuracy rockets to 100% while validation accuracy (the "real world" test) plateaus or drops, you aren't looking at a breakthrough; you’re looking at a memory bank.

    • The Convergence Goal: A healthy model shows two lines that "hug" each other as they rise. This signifies that what the model learns in the "textbook" is actually applying to the "patient."
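    The divergence warning and convergence goal described above can be sketched in a few lines of code. This is a hypothetical illustration, not the episode's own tooling: the curve values and the 0.15 gap threshold are invented for the example.

```python
# Hypothetical sketch: reading a learning curve like an EKG.
# Given per-epoch training and validation accuracy, flag the
# "divergence warning": training accuracy rockets upward while
# validation accuracy plateaus or drops.

def diverges(train_acc, val_acc, gap_threshold=0.15):
    """Return True if the final train/validation gap exceeds the
    threshold, i.e. the model is likely memorizing, not learning."""
    final_gap = train_acc[-1] - val_acc[-1]
    return final_gap > gap_threshold

# A "memory bank": training hits 100%, validation stalls.
memorizer_train = [0.70, 0.85, 0.95, 0.99, 1.00]
memorizer_val   = [0.68, 0.72, 0.74, 0.73, 0.72]

# A healthy model: the two curves "hug" each other as they rise.
healthy_train = [0.70, 0.78, 0.84, 0.88, 0.90]
healthy_val   = [0.68, 0.76, 0.82, 0.86, 0.88]

print(diverges(memorizer_train, memorizer_val))  # True
print(diverges(healthy_train, healthy_val))      # False
```

    In practice the gap threshold is a judgment call that depends on the task and the noise in the validation set; the point is the shape of the two curves, not any single magic number.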

    Why do models overfit? Often, it’s because they found a shortcut. We explore the "Red Flags" that developers—and clinicians—need to watch for:

    1. Spurious Correlations: The model learns that "Patients with X-rays taken on a portable machine are sicker," rather than learning what is in the X-ray.

    2. Data Leakage: Including variables that already "hint" at the answer (e.g., predicting a condition using the medication used to treat it).

    3. Institutional Bias: Memorizing how one specific hospital operates rather than how a disease operates.
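    The data-leakage red flag above can be made concrete with a crude screening check. This is a hypothetical sketch (the feature names, the toy rows, and the 0.95 agreement threshold are all invented): a binary feature that agrees with the label almost perfectly on its own, such as a treatment medication when predicting the condition it treats, deserves a clinician's scrutiny before it goes into a model.

```python
# Hypothetical sketch of a leakage screen: flag binary features
# whose agreement with the label is suspiciously high, e.g. a
# medication column that already "hints" at the diagnosis.

def leaky_features(rows, label_key, threshold=0.95):
    """Return feature keys whose value matches the label in at
    least `threshold` of rows -- a crude proxy-data detector."""
    feature_keys = [k for k in rows[0] if k != label_key]
    flagged = []
    for k in feature_keys:
        agree = sum(1 for r in rows if r[k] == r[label_key])
        if agree / len(rows) >= threshold:
            flagged.append(k)
    return flagged

rows = [
    {"on_treatment_med": 1, "age_over_60": 1, "disease": 1},
    {"on_treatment_med": 1, "age_over_60": 0, "disease": 1},
    {"on_treatment_med": 0, "age_over_60": 1, "disease": 0},
    {"on_treatment_med": 0, "age_over_60": 0, "disease": 0},
]
print(leaky_features(rows, "disease"))  # ['on_treatment_med']
```

    A check like this catches only the most blatant proxies; subtler leaks, such as institutional habits baked into scheduling fields, still require a clinician who knows how the data was generated.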

    We tackle the most dangerous metric in healthcare: Raw Accuracy. "If 95% of your patients are healthy, a model can be 95% accurate by simply predicting 'Healthy' for every person it sees. It has a 0% success rate at finding disease, yet it gets a 95% grade. This isn't just bad math—it's dangerous medicine."

    We discuss why Sensitivity and Specificity are the only metrics that truly matter in a clinical setting.
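    The 95% example above is easy to verify by hand. The following is a minimal sketch, with an invented 100-patient toy population, showing how a "predict healthy for everyone" model scores high raw accuracy while its sensitivity for the disease is exactly zero.

```python
# Hypothetical sketch of the episode's 95% example: on an
# imbalanced population, a model that always predicts "healthy"
# looks accurate but finds zero actual disease cases.

def metrics(y_true, y_pred):
    """Compute accuracy, sensitivity, specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity

# 100 patients: 95 healthy (0), 5 diseased (1).
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # the lazy model: "healthy" for everyone

acc, sens, spec = metrics(y_true, y_pred)
print(acc)   # 0.95 -- looks great on paper
print(sens)  # 0.0  -- finds zero actual disease cases
```

    Sensitivity and specificity expose the failure that raw accuracy hides: the model never catches a single diseased patient.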

    How do we build "Student B" AI? It requires a fundamental shift in development:

    • External Validation: Testing the model on data from a completely different hospital or geographic region.

    • Patient-Level Splits: Ensuring the model never sees the same patient in training and testing.

    • Clinician-in-the-Loop: Why doctors must be involved in feature selection to spot "leaky" data that a data scientist might miss.
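    The patient-level split in the list above can be sketched concretely. This is a hypothetical illustration with invented record names, not the method from any specific study: the key property is that every record from a given patient lands entirely in training or entirely in testing, so the model can never be graded on an individual it has already memorized.

```python
# Hypothetical sketch of a patient-level split: partition by
# patient ID first, then assign records, so no patient appears
# on both sides of the train/test boundary.
import random

def patient_level_split(records, test_fraction=0.3, seed=0):
    """records: list of dicts, each with a 'patient_id' key."""
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Three visits for patient "A", two for "B", one for "C".
records = [{"patient_id": pid, "visit": v}
           for pid, n in [("A", 3), ("B", 2), ("C", 1)]
           for v in range(n)]
train, test = patient_level_split(records)
overlap = {r["patient_id"] for r in train} & {r["patient_id"] for r in test}
print(overlap)  # set() -- no patient appears on both sides
```

    A naive record-level shuffle would happily put visit 1 of patient "A" in training and visit 2 in testing, which is exactly the memorization loophole this split closes.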

    We wrap up the episode with a practical toolkit. Before you trust an AI system with your family, ask the developers these five questions:

    1. Was data split at the patient level? (Did you prevent the model from memorizing specific individuals?)

    2. Were leaky features identified and removed? (Is the model cheating using "proxy" data?)

    3. What do the training curves show? (Can I see the "EKG" of how this model learned?)

    4. How was class imbalance handled? (What is your Sensitivity for the actual disease cases?)

    5. Was there external validation? (Has this worked at a hospital that isn't yours?)

    Real medicine is messy. It’s atypical symptoms, patients with five comorbidities, and "unusual" presentations. If we want AI to be a partner in the clinic, we need it to be a "Student B." We need it to understand the pathophysiology of the data, not just the answers on the test.

    Join us as we move past the hype and toward a future of robust, reliable, and truly intelligent medical AI.

    Based on the work and research of Dr. Milan Toma and synthesized from over 40 peer-reviewed studies on clinical AI evaluation.

    23 min
  • Understanding the Trust Gap in Medical AI
    2026/03/18

    Have you ever wondered why skepticism about artificial intelligence persists in healthcare, even as new AI tools are rapidly introduced? In this episode, Dr. Milan Toma, Associate Professor of Clinical Sciences at NYIT College of Osteopathic Medicine, explains the roots of distrust in clinical AI systems and what it takes to regain confidence. Drawing on decades of machine learning evolution, real-world case studies, and his own research experience, Dr. Toma discusses the dangers of overfitting, the importance of healthy training dynamics, and the vital role of collaboration between clinicians and developers. Tune in to learn how the healthcare community can move from skepticism to trust and ensure that AI serves the needs of both patients and professionals.

    9 min
No reviews yet