From AI Strategy to Execution: Ethical Leadership, Trust and the Operational Reality of Healthcare AI | Brian Sutherland
Send us Fan Mail
AI strategy for healthcare stalls when it collides with operational reality — Brian Sutherland on ethical leadership and moving healthcare AI from plan to execution.
Healthcare innovation leadership rarely stalls at the strategy layer. It stalls when AI strategy for healthcare collides with operational reality, leadership alignment, and the workflow assumptions the plan never questioned. Brian Sutherland, an AI product manager who built Humana's first member-facing intelligent virtual assistant, joins Chris Hutchins to examine why AI pilots do not scale and what AI leadership strategies look like when they survive first contact with the bedside.
What We Cover
- Why AI pilots stall at the 80/20 trust problem, and what that reveals about leadership alignment in healthcare organizations
- The adoption gap where technology outpaces human change, and why friction is the most underestimated variable in every AI deployment plan
- How to treat an AI system like a junior employee, with structured onboarding, retraining as policies change, and designed failure response, instead of a finished product
- Why AI governance is most useful as an enabler, not a bureaucracy, and what that looks like in practice
- How diverse perspectives expose AI blind spots, and why resistance appears exactly when those blind spots force a course correction
- AI strategic leadership means designing for failure, not just for success. Breaches and errors are inevitable. Organizations that plan for them absorb the hit; the others do not.
- Trust in healthcare is declining even as reliance on AI increases. Any AI leadership strategy that ignores the trust paradox inherits it by default.
- AI strategy consultant work stops being consulting the moment rollout begins. Execution is the real strategy, and the gap between executive approval and frontline use is where most transformation ends.
Key Concepts
- 30/60/90 AI onboarding model (treat AI systems like junior employees)
- Humana member-facing intelligent virtual assistant ($7M annual savings, 31% lift in task completion)
- Governance-as-enabler vs. governance-as-bureaucracy
- Human-in-the-loop clinical decision points
- Structured breach and error response for AI systems
Chapters
- 00:00 Brian Sutherland on AI in high-stakes, customer-facing healthcare
- 01:48 Why AI fails: leadership, workflow, and operating model gaps
- 03:08 The adoption gap: technology outpaces human change
- 03:51 The trust paradox: declining human trust, rising AI reliance
- 05:22 Why pilots do not scale: the 80/20 trust problem
- 07:13 Learn before building: operating the workflow first
- 09:56 The gap between executive approval and frontline use
- 11:54 Governance done right: enablement vs. bureaucracy
- 14:30 Diverse perspectives: reducing AI blind spots
- 17:44 Governance as multi-perspective decision-making
Support the show
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com