
Health and Explainable AI Podcast

By: Pitt HexAI Lab and the Computational Pathology and AI Center of Excellence

Overview

The Health and Explainable AI podcast is a collaborative initiative between the Health and Explainable AI (HexAI) Research Lab in the Department of Health Information Management at the School of Health and Rehabilitation Sciences and the Computational Pathology and AI Center of Excellence (CPACE) at the University of Pittsburgh School of Medicine. Led by Ahmad P. Tafti, Hooman Rashidi, and Liron Pantanowitz, the podcast explores the transformative integration of responsible and explainable artificial intelligence into health informatics, clinical decision-making, and computational medicine.
Episodes
  • George Demiris on Proactive Healthcare and The Future of AI in Nursing and Aging
    2026/04/07

    George Demiris, Associate Dean for Research and Innovation at the University of Pennsylvania School of Nursing and a Penn Integrates Knowledge University Professor, discusses the transformative integration of responsible and explainable artificial intelligence into nursing, elder care, and hospice settings with Pitt HexAI host Jordan Gass-Pooré.

    The University of Pennsylvania School of Nursing is actively integrating emerging technologies into its curriculum, research, and clinical practice to enhance person-centered care, ensuring that technological advancements support rather than replace human connection. The Penn Artificial Intelligence and Technology (PennAITech) Collaboratory for Healthy Aging plays a central role, bringing together interdisciplinary experts to address the technical and ethical challenges of integrating AI into the aging process.

    Discussing his work on information technology's role in the healthcare of older adults, specifically through smart home solutions and passive sensing systems that support aging in place, George advocates for a shift from reactive to proactive care, using sensors, for example, to detect subtle behavioral changes before adverse events such as falls occur. However, he argues that technology must remain a "decision aid" rather than a final decision-maker, advocating for "self-reflective AI" that explains its reasoning to clinicians. This approach preserves the "moral agency" of nurses, who act as vital patient advocates, ensuring AI tools are introduced ethically and reflect the diverse preferences of those they serve.

    Looking ahead, the conversation stresses the need for fluid collaboration between academia and industry to keep pace with rapid innovation. George envisions a holistic future for AI that prioritizes human dignity and autonomy, utilizing generative tools to adapt complex medical information to the specific literacy and language needs of patients and their caregivers.

    33 min
  • Martin Raison, CTO of Nabla, on Architecting the Agentic AI Era in Healthcare
    2026/03/18

    Martin Raison, Co-founder and CTO of Nabla, speaks with Pitt HexAI host Jordan Gass-Pooré about Nabla's central role in architecting the agentic AI era in healthcare. Martin details Nabla's evolution from a specialized ambient scribing tool into a comprehensive "Adaptive Agentic Platform". They discuss the significant challenges involved in enabling AI agents to perform complex clinical tasks, and how Nabla has been thrust into tackling a labyrinth of structural and data hurdles. These range from the integration of fragmented, unstructured patient charts and hospital guidelines to the complex technicalities of agent discoverability, interoperability, and the establishment of standardized accountability frameworks.


    The interview highlights a significant shift in Nabla's technical strategy: moving from probabilistic Large Language Models (LLMs) toward world models. Raison explains that while LLMs are effective at generating text, they lack a fundamental understanding of cause-and-effect and the ability to simulate evolving environments. To address this, Nabla has entered an exclusive partnership with Advanced Machine Intelligence (AMI), a research lab co-founded by Yann LeCun. This collaboration provides Nabla with early access to world model technologies that can "imagine" different scenarios and simulate the consequences of actions, providing a more deterministic and auditable path for AI in high-stakes clinical settings.


    In discussing the technical foundations of computational health, Martin addresses the critical need for inference optimization to manage the millions of model executions required daily at scale. Furthermore, Martin envisions a fundamental shift in the paradigm of AI inference through the adoption of world models. He suggests that these architectures will blur the traditional boundary between training and inference by enabling continuous learning, where the model adjusts and evolves in real-time based on new data and clinician feedback, rather than being limited by the static context windows of current LLMs.


    Beyond the core technology, Martin and Jordan discuss the critical importance of explainability and interoperability in the "agentic web" of healthcare. They specifically highlight architectural initiatives like MIT’s Project NANDA, which focuses on the foundational layers of the agentic web, including critical elements like discoverability and authentication that go beyond the AI layer alone. Martin emphasizes that the sector must move toward standardized "Agent Fact Files" to ensure accountability and ease of governance as organizations begin to manage thousands of agents. He concludes by looking toward a future of "emergent intelligence," where the collaboration between multiple models creates sophisticated patterns that can eventually help clinicians improve their own professional practice over time.

    38 min
  • Ekaterina Kldiashvili from the Tbilisi Medical Academy on Responsible Uses of AI, Medical Education and Inter-University Collaboration
    2026/02/07

    Ekaterina Kldiashvili, Vice Rector for Research at Petre Shotadze Tbilisi Medical Academy, and Pitt’s HexAI podcast host, Jordan Gass-Pooré, discuss public health, the incorporation of AI into healthcare, responsible uses of AI, medical education and inter-university collaboration.

    Ekaterina and Jordan explore opportunities and concerns surrounding commercial AI applications, noting that while AI can improve healthcare efficiency, it must support clinical reasoning rather than replace it. They cover the Tbilisi Medical Academy's work on responsible AI usage, particularly in educating providers and patients, demonstrating how AI-enhanced text and visuals can significantly improve patient understanding and follow-up rates. They also touch on challenges associated with the use of AI in non-English languages like Georgian and delve into advances in computational genomics and rapid molecular diagnostics. Looking ahead, they discuss the strengthening ties between the University of Pittsburgh and the Tbilisi Medical Academy through knowledge sharing and faculty training, broader inter-university collaboration, and the idea of having students investigate how different cultures and communities trust and accept AI in healthcare settings.

    28 min