Digital Pathology Podcast

Author: Aleksandra Zuraw, DVM, PhD

About this content

Aleksandra Zuraw from Digital Pathology Place discusses digital pathology from the basic concepts to the newest developments, including image analysis and artificial intelligence. She reviews the scientific literature and, together with her guests, discusses current industry and research trends in digital pathology.

© 2025 Digital Pathology Podcast

Categories: Natural History, Science, Nature & Ecology, Hygiene & Healthy Living, Physical Illness & Disease
Episodes
  • 153: Can GPT-4o Classify Tumors Better Than Us? AI-Powered Pathology Insights
    2025/08/18

    If we don’t learn to work with LLMs now, we might end up competing with them. 🧠
    In this week’s DigiPath Digest, I return to our Journal Club to unpack the latest research on AI in tumor classification, focusing on GPT-4o, LLaMA, and other LLMs. Can these models really outperform traditional tools when analyzing pathology reports?

    Surprisingly—yes. But don’t panic. This episode is about understanding what LLMs actually bring to the table, how they’re being evaluated, and what we need to consider as digital pathology continues to evolve.

    It’s also a special week for me personally—I recorded this episode the morning of my U.S. citizenship ceremony, and I used AI to help write my speech! I’ll share more about that next time.

    ⏱️ Episode Highlights

    [00:00] – Life update + AI-written speech for my citizenship
    [04:00] – Journal Club: Austrian study on LLMs in pathology report analysis
    [05:00] – Why cancer registries need better documentation tools
    [06:00] – LLMs tested on synthetic pathology reports—game-changing idea
    [07:00] – GPT-4 and LLaMA outperform score-based models in accuracy
    [08:00] – Use case: AI-enhanced text mining across whole archives
    [09:00] – How my PhD could’ve been easier with these tools
    [10:00] – Second paper: A public synthetic dataset for benchmarking LLMs
    [11:00] – Tools used: ChatGPT, Perplexity, Copilot to generate report variations
    [13:00] – Benefits of synthetic data for de-identification
    [14:00] – Thoughts on bias, annotation workflows, and future-proofing
    [16:00] – Polish research on hybrid annotation for follicular lymphoma
    [19:00] – Foundation models, bootstrapping, weak supervision in action
    [22:00] – Charles River: AI for thyroid hypertrophy scoring in tox path
    [23:00] – Subjectivity of scoring thresholds and reproducibility
    [24:00] – Morphology-driven scoring architecture improves accuracy

    📚 Resource from this Episode

    1. LLM Performance in Malignancy Detection from Pathology Reports
      🔗 Read Article
    2. Synthetic Dataset for Evaluating LLMs in Medical Text Classification
      🔗 Read Article

    🧰 Tools & Topics Mentioned

    • LLMs: GPT-4o, LLaMA, Copilot, Perplexity
    • Synthetic Data for AI model testing
    • Annotation strategies: weak supervision, bootstrapping
    • Pathology AI applications: tumor detection, thyroid activity, lymphoma
    • Research teams: Austria, Poland, Charles River Labs

    The big takeaway? AI tools are improving fast—and it’s up to us to decide how they’re used in our field. This episode breaks down the latest advancements and opens the door to practical, safe integration in pathology workflows.
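    To make that concrete, here is a minimal, illustrative sketch of how a synthetic pathology report could be sent to an LLM for malignancy classification, in the spirit of the Journal Club studies. The report text, the prompt, and the use of the OpenAI Python client are my own assumptions for illustration, not code from the papers discussed.

    ```python
    # Illustrative sketch only: ask an LLM to classify a synthetic pathology report.
    # Assumes the OpenAI Python client (pip install openai) and an API key in OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    synthetic_report = (
        "Specimen: colon, sigmoid, biopsy. "
        "Microscopic description: infiltrating glands with marked nuclear atypia, "
        "desmoplastic stroma, and focal necrosis. "
        "Diagnosis: invasive adenocarcinoma, moderately differentiated."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # model discussed in the episode; any capable LLM could be swapped in
        messages=[
            {"role": "system",
             "content": "You are a pathology report classifier. "
                        "Answer with exactly one word: MALIGNANT or BENIGN."},
            {"role": "user", "content": synthetic_report},
        ],
        temperature=0,  # deterministic output makes benchmarking runs reproducible
    )

    print(response.choices[0].message.content)  # expected: MALIGNANT
    ```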

    🎧 Let’s keep pushing the boundaries—together.

    Support the show

    Become a Digital Pathology Trailblazer, get the "Digital Pathology 101" FREE E-book, and join us!

    20 min
  • 152: AI in Pathology, ML-Ops, and the Future of Diagnostics – 7-Part Livestream 7/7
    2025/08/15

    AI in Pathology: ML-Ops and the Future of Diagnostics

    What if the most advanced AI models we’re building today are doomed to die in the machine learning graveyard? 🤯 That’s the haunting question I tackled in the final episode of our 7-part series exploring the Modern Pathology AI publications.

    In this session, I explored machine learning operations (ML-Ops)—what they mean for digital pathology—and why even the most brilliant algorithm can fail without proper deployment strategies, data infrastructure, and lifecycle management.

    But we don’t stop there. I take you on a future-forward tour through multi-agent frameworks, edge computing, AI deployment strategies, and even virtual/augmented reality for medical education. This isn’t sci-fi. This is happening now, and as pathology professionals, we need to be prepared.
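    To make the model-card idea covered in this episode more tangible, here is a minimal sketch of what structured metadata for a pathology ML model could look like in code. The field names and values are illustrative assumptions, not a published model-card standard or the format from the article.

    ```python
    # Minimal sketch of a "model card" record for a pathology ML model.
    # Field names and values are illustrative assumptions, not a formal standard.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ModelCard:
        name: str
        version: str
        intended_use: str                   # e.g. research use only vs. clinical decision support
        training_data: str                  # scanners, stains, and sites represented
        evaluation_metrics: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)

    card = ModelCard(
        name="tumor-region-segmenter",      # hypothetical model name
        version="1.2.0",
        intended_use="Research use only; flags candidate tumor regions for pathologist review.",
        training_data="H&E whole-slide images from two scanner vendors and three sites.",
        evaluation_metrics={"dice": 0.87, "sensitivity": 0.91},  # placeholder numbers
        known_limitations=[
            "Not validated on frozen sections",
            "Performance drops on faded stains",
        ],
    )

    # Store the card alongside the deployed model so the metadata travels with it,
    # which is one of the lifecycle-management habits ML-Ops is about.
    print(json.dumps(asdict(card), indent=2))
    ```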

    🔗 Full episode reference:
    Modern Pathology - Article 7: AI in Pathology ML-Ops and the Future of Diagnostics
    Read the paper

    🔍 Episode Highlights & Timestamps

    [00:00] – Tech check, community shout-outs, and livestream reflections
    [02:00] – Overview of ML-Ops: What it is and why pathologists should care
    [03:45] – What’s a Machine Learning Graveyard? Personal examples of models I’ve built that went nowhere
    [05:30] – Machine learning platforms: from QuPath to commercial image analysis tools
    [06:45] – The lifecycle of ML models: Development, deployment, and monitoring
    [09:00] – Mayo Clinic and Techcyte partnership: Real-world deployment integration
    [12:30] – Frameworks & DevOps tools: Docker, Git, version control, metadata mapping
    [14:30] – Model cards in pathology: Structuring ML model metadata
    [16:30] – Deployment strategies: On-premise, cloud, and edge computing
    [20:00] – PromanA and QA via edge computing: Doing quality assurance during scanning
    [23:00] – Measuring ROI: From patient outcomes to institutional investment
    [25:00] – Multi-agent frameworks: AI agents collaborating in real-time
    [28:00] – Narrow AI vs. General AI and orchestrating narrow tools
    [30:00] – Real-world applications: Diagnosis generation via AI collaboration
    [32:00] – Virtual & Augmented Reality in pathology training: From smearing to surgical simulation
    [35:00] – AI in drug discovery and virtual patient interviews
    [38:00] – Scholarly research with LLMs: Structuring research ideas from unstructured data
    [41:00] – Regulatory considerations: Recap of episode 5 for frameworks and guidelines
    [42:00] – Recap and future updates: Book announcements, giveaways, and next steps

    Resource from this episode

    • 🔗 Modern Pathology Article #7: AI in Pathology ML-Ops and the Future of Diagnostics

    • 🛠️ Tools/References mentioned:
      • QuPath (Free Image Analysis Tool)
      • Techcyte & Aiforia for model development and deployment
      • PromanA for edge computing and real-time QA
      • Model Cards (Pathology-specific metadata structure)
      • Apple Vision Pro, Meta Oculus, HoloLens for VR/AR learning
      • Dr. Hamid Ouiti Podcast on software failure in medicine
      • Dr. Candice C

    Support the show

    Become a Digital Pathology Trailblazer, get the "Digital Pathology 101" FREE E-book, and join us!

    43 min
  • 151: Ethics and Bias Considerations in AI – 7-Part Livestream 6/7
    2025/08/14

    Can We Ever Eliminate Bias in AI for Pathology?

    Every time we think we’ve trained a “neutral” algorithm, we discover our own fingerprints all over it. Our biases. Unconscious. Systemic. Data-driven. And if we ignore them, AI won’t just fail—it will fail patients.

    Welcome back, my digital pathology trailblazers! In this sixth episode of our 7-part AI in Pathology series, we tackle one of the most uncomfortable yet necessary conversations: Ethics and Bias in AI and Machine Learning. These are not abstract philosophical concerns—they are critical decisions that affect diagnostic accuracy, fairness, and patient safety.

    We lean heavily on the brilliant work co-authored by Matthew Hanna, Liam Pantanowitz, and Hooman Rashidi, published in Modern Pathology, which you can read here: Ethics and Bias in AI for Pathology.

    Let’s explore where bias creeps in, how we can mitigate it, and what it means to be a responsible data steward in digital pathology.
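    As one concrete example of what a "bias detector" can look like in practice, here is a minimal sketch that compares a classifier's sensitivity across patient subgroups and flags large gaps. The data, subgroup labels, and the 0.1 threshold are made-up assumptions for illustration, not the method from the featured publication.

    ```python
    # Illustrative bias check: compare sensitivity of a malignancy classifier across subgroups.
    # All data below is fabricated for this sketch; 1 = malignant, 0 = benign.
    from collections import defaultdict

    # (subgroup, true_label, predicted_label)
    predictions = [
        ("site_A", 1, 1), ("site_A", 1, 1), ("site_A", 1, 0), ("site_A", 0, 0),
        ("site_B", 1, 0), ("site_B", 1, 0), ("site_B", 1, 1), ("site_B", 0, 0),
    ]

    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for group, truth, pred in predictions:
        if truth == 1:
            if pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1

    sensitivity = {g: tp[g] / (tp[g] + fn[g]) for g in set(tp) | set(fn)}
    print(sensitivity)  # site_A is roughly 0.67, site_B roughly 0.33

    # A large gap between subgroups is a red flag worth investigating before deployment.
    gap = max(sensitivity.values()) - min(sensitivity.values())
    if gap > 0.1:  # illustrative threshold
        print(f"Warning: sensitivity gap of {gap:.2f} across subgroups")
    ```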

    ⏱️ Highlights & Timestamps

    [00:00:00] Welcome back! Kicking off from Pennsylvania at 6:00 AM and reflecting on USCAP highlights, upcoming podcasts, and a pivotal lawsuit on LDTs.
    [00:03:00] Defining today’s topic: Bias in AI—why it matters, and how pathologists are key players in shaping ethical, trustworthy algorithms.
    [00:05:00] Who are the “data stewards”? A new term you need to own. We explore the role of healthcare professionals in AI development and deployment.
    [00:07:00] Ethical principles decoded—autonomy, beneficence, non-maleficence, justice, and accountability—and how they translate to AI and ML.
    [00:11:00] From voting rights to data rights: A surprising analogy from my U.S. citizenship interview about the evolution of fairness.
    [00:12:00] 12 types of bias explained—from data bias to feedback loops, representation to confirmation bias—with real pathology examples.
    [00:22:00] Temporal bias and transfer bias: Why yesterday’s data may not apply to today’s patients.
    [00:26:00] Walkthrough of the AI lifecycle and how bias seeps in at every stage—from research to regulatory approval.
    [00:29:00] Clinical trials & guidelines: Learn the difference between STARD-AI, TRIPOD-AI, QUADAS-AI, and CONSORT-AI.
    [00:33:00] Visual case study: Gleason score distribution by region shows how biased training data leads to misdiagnosis.
    [00:37:00] Real-world mitigation: I spotlight Digital Diagnostics Foundation and Big Picture Consortium as proactive models for bias reduction.
    [00:41:00] Why explainability and introspection are more than buzzwords—they are our tools for ensuring accountability.
    [00:44:00] FAIR data principles—Findability, Accessibility, Interoperability, and Reusability—and why annotations often fall short.
    [00:48:00] Practical steps: How to build better algorithms with built-in fairness, bias detectors, and responsible data sharing.

    📚 Resource from this Episode:

    📄 Featured Publication:
    Ethics and Bias Considerations in Artificial Intelligence and Pathology
    ➡️ Access Full Article

    Let’s keep creating technology that doesn’t just do what we tell it to—but does what is right for everyone. See you in the next and final episode of this series!

    Support the show

    Become a Digital Pathology Trailblazer, get the "Digital Pathology 101" FREE E-book, and join us!

    42 min
No reviews yet.