Episodes

  • Healthcare Leadership, Operational Reality, and System Signals | Markeisha Snaith | The Signal Room
    2026/03/25

    Send us Fan Mail

    Healthcare transformation often begins with strategy, but its success depends on how those ideas translate into operational reality.

    In this episode of The Signal Room, Chris Hutchins speaks with Markeisha Snaith about leadership inside complex healthcare systems and how strategic decisions shape the way organizations actually function.

    They explore:

    • Where healthcare transformation efforts most often stall
    • How leadership communication influences operational behavior
    • The signals that reveal whether organizational alignment is strong or weakening
    • What leaders should ask before introducing new technology into care environments
    • How healthcare systems can design structures that support both patients and clinical teams

    This conversation examines the relationship between leadership decisions, organizational behavior, and the operational realities that determine whether change efforts succeed.

    If you care about healthcare leadership, organizational alignment, and the signals that shape system performance, this episode offers an inside look at the challenges leaders face when strategy meets real-world complexity.



    Support the show

    53 min
  • Caregivers as the Connective Tissue of Healthcare Innovation | Amanda Roser
    2026/03/18

    Behind every treatment plan is a caregiver coordinating the work no one else sees.

    Parents navigating rare disease care often become the organizers, translators, and connectors holding the healthcare system together. They track symptoms, manage appointments, translate medical language, and bridge communication between specialists who may never speak directly to one another.

    In this episode of The Signal Room, Chris Hutchins sits down with Amanda Roser, Vice President of Marketing at Social Strategy1, Head of Marketing for Ketotic Hypoglycemia International, and parent advocate navigating rare disease care with her son.

    They discuss:

    • The hidden operational role caregivers play in healthcare
    • The language families must learn to advocate effectively
    • Why caregivers often have to retell the patient story at every appointment
    • The coordination work happening outside the medical record
    • How new tools are helping families prepare for clinical conversations
    • What healthcare systems could look like if caregivers were recognized as part of the care team

    This conversation explores the realities of caregiving inside complex healthcare systems and what leaders designing care models might learn from the families navigating them every day.

    If you care about patient advocacy, healthcare system design, and the lived realities behind rare disease care, this episode offers a perspective rarely heard in conversations about healthcare operations.


    50 min
  • The Scary Truth About Healthcare AI in the ER and Why Clinical Judgment Still Wins | Dr. Natasha Dole
    2026/03/13

    Artificial intelligence is entering emergency departments, acute care settings, and clinical workflows at speed. But when seconds matter, who do clinicians trust — the algorithm or their own judgment?

    In this episode of The Signal Room, Chris Hutchins sits down with Natasha Dole, Emergency Medicine Consultant and Digital Health & AI Lead, to explore how credibility is established in high-pressure clinical environments — and what that means for AI adoption.

    They discuss:

    • How trust is built in the resuscitation room before anyone speaks
    • What clinicians need to see before relying on AI recommendations
    • When AI supports credibility — and when it undermines it
    • The real drivers behind AI resistance in healthcare
    • What “earned trust” should look like for AI at the bedside
    • The responsibilities that remain uniquely human in clinical care

    This conversation moves beyond hype to examine authority, bias, professional responsibility, and the hidden assumptions embedded in healthcare technology.

    If you care about responsible AI, clinician trust, and the future of decision-making in acute care — this episode is for you.

    Subscribe for more conversations at the intersection of leadership, ethics, and healthcare innovation.

    44 min
  • The Enterprise AI Journey: From Data Foundations to Generative and Agentic AI | Gary Cao
    2026/03/04

    AI strategy and AI governance at the enterprise level are not tool decisions. They are operating model decisions that determine whether organizations scale responsibly or stall.

    In this episode of The Signal Room, Chris Hutchins sits down with Gary Cao, Chief Data & Analytics / AI Officer, to explore the enterprise AI journey from an executive perspective.

    This conversation moves beyond hype and definitions. Instead, it focuses on what actually changes inside an organization when AI becomes strategic:

    • Moving from AI experimentation to enterprise maturity
    • Integrating generative AI into structured data environments
    • Deterministic systems vs. probabilistic reasoning
    • The role of semantic layers and data management bottlenecks
    • Automation vs. agentic AI systems
    • Measuring enterprise ROI in an era of high abandonment rates

    Gary shares practical insight into AI maturity models, governance design, risk tolerance tiers, and the evolving role of the CDAO in coordinating strategy, technology, and accountability.

    If you are a board member, C-suite executive, data leader, or part of a healthcare leadership team navigating AI strategy at scale, this episode provides a grounded view of what it takes to move from ambition to responsible AI execution.

    Connect with Gary Cao on LinkedIn:
    https://www.linkedin.com/in/garycao/

    Subscribe to The Signal Room for conversations at the intersection of leadership, governance, and AI innovation.

    42 min
  • From AI Strategy to Execution: Trust, Leadership, and the Operational Reality of Healthcare AI | Brian Sutherland
    2026/02/25

    AI ambition isn’t the problem in healthcare. Execution is.

    In this episode of The Signal Room, Chris Hutchins sits down with Brian Sutherland, Lead AI Product Manager and advisor specializing in customer-facing AI for high-consequence healthcare environments.

    Brian built Humana’s first member-facing Intelligent Virtual Assistant — generating $7M+ in annual savings while improving patient experience and task completion. In this conversation, we move beyond AI hype and examine what actually breaks between executive strategy and operational reality.

    We explore:

    • Why AI pilots succeed but enterprise adoption stalls
    • Trust as infrastructure — not philosophy
    • The leadership shift required as AI embeds into clinical workflows
    • Where hype is outrunning evidence in healthcare AI
    • What responsible scale actually looks like

    If you are a healthcare executive, board member, digital health leader, or AI product owner, this episode is a grounded discussion on what it takes to move from ambition to accountable execution.

    Connect with Brian Sutherland on LinkedIn:
    https://www.linkedin.com/in/briandsutherland/

    Subscribe for practical conversations at the intersection of leadership, ethics, and healthcare innovation.

    41 min
  • Why AI Governance and Verification, Not Speed, Are the Real Bottlenecks in Pharmaceutical Innovation
    2026/02/18

    AI is transforming drug discovery—but faster models alone do not get drugs approved.

    In this episode of The Signal Room, host Chris Hutchins speaks with David Finkelshteyn, CEO of Pivotal AI, about why verification—not speed or model accuracy—is the real bottleneck in pharmaceutical AI.

    David explains why generating AI-designed molecules without rigorous validation creates more risk than value, especially in regulated environments like pharma and healthcare. The conversation breaks down where AI outputs most often fail between discovery and regulatory acceptance, why black-box models struggle under scrutiny, and what it actually means to verify an AI insight in drug development.

    They also explore practical challenges around data integrity, auditability, missing context, hallucinations, and the growing use of consumer AI tools in health decisions. Rather than chasing hype, this episode focuses on how AI can responsibly accelerate drug development by failing faster, tightening verification loops, and building systems that can be defended to regulators, auditors, and clinicians.

    This episode is essential listening for leaders working in pharmaceutical R&D, healthcare AI, data science, AI governance, and regulated technology environments.

    Guest: David Finkelshteyn, CEO, Pivotal AI
    LinkedIn: https://www.linkedin.com/in/david-finkelshteyn-03191a130/

    38 min
  • No Alerts, Still Breached: Understanding Cybersecurity Risks and Ethical Leadership in Healthcare AI
    2026/02/11

    This episode explores ethical leadership and AI governance challenges in healthcare cybersecurity, emphasizing the risks of undetected breaches.

    In this episode of The Signal Room, Chris Hutchins speaks with Guman Chauhan, a cybersecurity and risk leader, about one of the most dangerous conditions in modern organizations: being breached and not knowing it. While dashboards stay green and alerts stay quiet, attackers increasingly operate using valid credentials, normal behavior patterns, and long dwell times—remaining invisible for weeks or months.

    Guman explains why “no alerts” is often mistaken for “no breach,” and why silence is one of the most misleading signals in cybersecurity. The conversation unpacks how attackers deliberately avoid detection, why security tools alone do not equal security outcomes, and where organizations create blind spots through untested assumptions, alert fatigue, and fragmented processes.

    They explore why undetected breaches are more damaging than known ones, how time compounds risk once attackers are inside, and what separates organizations that mature after incidents from those that repeat the same failures. Guman emphasizes that proven security is not built on policies, certifications, or dashboards—but on continuous testing, validated detection, and teams that know how to act under pressure.

    This episode is a practical guide for executives, security leaders, healthcare organizations, and regulated enterprises that need to move from assumed security to proven breach readiness.

    Guest: Guman Chauhan
    LinkedIn: https://www.linkedin.com/in/guman-chauhan-m-s-cissp-cism-600824103/

    Topics Covered

    • Why undetected breaches are more dangerous than known breaches
    • How attackers use valid credentials to avoid detection
    • Why “no alerts” does not mean “no breach”
    • Alert fatigue and the signal-to-noise problem
    • Security tools vs security outcomes
    • Visibility gaps, unknown assets, and logging failures
    • External penetration testing and real-world validation
    • Cultural and leadership factors in breach response
    • Assumed security vs proven security

    Key Takeaways

    • Silence is not security; it often means you are not seeing the right signals.
    • Most breaches go undetected because attackers behave like legitimate users.
    • Security tools do not fail—untested assumptions do.
    • Alert fatigue hides real risk by normalizing noise.
    • Proven security requires testing detection and response end to end.
    • Mature organizations treat breaches as learning moments, not events to hide.
    • Confidence without validation creates the most dangerous blind spots.

    Chapters / Timestamps

    00:00 – Why undetected breaches are the real risk
    02:30 – Being breached vs being breached and not knowing
    06:00 – How attackers stay invisible using valid credentials
    08:30 – Why dashboards and alerts create false confidence
    10:00 – Common reasons breaches go undetected for months
    13:30 – Security tools vs security outcomes
    16:00 – Technology, process, and people failures
    19:30 – Alert fatigue and finding real signals
    22:30 – Why external penetration testing still matters
    26:30 – What mature organizations do after a breach
    31:00 – One action to improve breach readiness this year
    32:45 – The uncomfortable question every leader should ask
    34:30 – Assumed security vs proven security
    36:30 – How to connect with Guman & closing

    34 min
  • Scaling Care with Responsible AI: Healthcare Leadership, Human Judgment, and Clinical Trust
    2026/02/04

    What does it truly mean to scale care with AI inside a real hospital environment? In this episode of The Signal Room, host Chris Hutchins talks with Mark Gendreau, emergency physician and Chief Medical Officer, about the intersection of healthcare AI, ethical leadership, and AI strategy. Together, they discuss how AI is transforming clinical workflows by amplifying human judgment rather than replacing it.

    They explore real-world applications in healthcare AI such as radiology co-pilots, ambient clinical documentation, and workflow intelligence designed to relieve clinician burnout. Dr. Gendreau highlights the need for responsible AI and human oversight in high-reliability healthcare settings.

    The conversation also covers critical topics like AI governance, clinical trust, alert fatigue, and leadership accountability. Listeners will gain insights into why successful AI adoption in healthcare depends on culture and ethical leadership, not just technology.

    This episode is essential for healthcare leaders, clinicians, informaticists, and policymakers seeking practical guidance on AI readiness, ethical AI practices, and driving AI strategies that improve patient care while maintaining human judgment at the core.

    Key Takeaways

    • AI delivers the most value when it amplifies clinicians, not when it attempts to replace them
    • Human judgment is essential in high-risk clinical decisions, even with advanced AI support
    • Ambient documentation can dramatically reduce after-hours EHR work (“pajama time”)
    • Alert fatigue is a governance problem, not just a technical one
    • Trust in AI is built through reliability, transparency, and clear ethical intent
    • Successful AI adoption depends more on leadership and culture than IT execution
    • Interoperability and governance are the biggest barriers to scaling AI across health systems
    • Emotional intelligence, empathy, and shared decision-making remain human responsibilities

    Guest Info

    Mark Gendreau, MD, MS, CPE
    Emergency Medicine Physician | Chief Medical Officer

    Dr. Gendreau is an experienced emergency physician and healthcare executive with deep expertise in clinical operations, patient safety, and responsible AI adoption. He focuses on using technology to improve access, quality, and clinician experience while preserving the human core of medicine.

    🔗 LinkedIn: https://www.linkedin.com/in/markgendreaumd/

    Chapters (YouTube & Spotify)

    00:00 – Introduction and framing the AI scaling challenge
    01:18 – Workforce scarcity and why AI must amplify clinicians
    02:10 – AI in radiology: co-pilots, fatigue reduction, and safety
    05:26 – Ambient documentation and eliminating “pajama time”
    07:17 – Using AI to improve clinician communication and empathy
    09:33 – Where AI falls short and why humans must stay in the loop
    12:44 – Guardrails, trust, and human-AI partnership
    13:44 – Trust in AI vs trust in human relationships
    16:07 – Adoption curves and clinician buy-in
    18:05 – Why AI fails when treated as an IT project
    20:41 – Leadership’s role in shaping AI culture
    22:07 – Interoperability, governance, and scaling challenges
    26:04 – Signals that an organization is truly AI-ready
    29:26 – Emotional intelligence and where AI should never lead
    33:59 – Alert fatigue and governance accountability
    37:27 – Measuring success: outcomes, equity, and pajama time
    38:36 – How to connect with Dr. Gendreau
    39:31 – Episode close

    34 min