Episodes

  • Busy or Better: The Real Productivity Math Behind AI
    2026/03/25

    You’ve been using AI tools for months. So why do you have less time than before?

    The answer is more than 150 years old. In 1865, an economist named William Stanley Jevons discovered something deeply counterintuitive: when a technology becomes more efficient, total consumption of the resource it saves tends to go up — not down. More efficient coal engines didn’t reduce coal use. They made coal cheaper to run, so demand exploded.

    The same mechanism is running on your calendar right now. Researchers call it workload creep — and it follows a predictable pattern. The faster AI lets you produce, the more output gets expected of you. That efficiency gain doesn’t go to you. It gets absorbed into the new baseline before you ever had a chance to keep it.

    In this episode, we break down the Jevons Paradox and what it actually means for leaders deploying AI tools across their organizations. We look at why 95% of large enterprise AI investments are generating zero measurable return — while 90% of workers are successfully using AI on their own outside company systems. We examine the jagged frontier: where AI performs brilliantly and where it silently fails. And we get to the one architectural shift that actually breaks the cycle — the difference between automating a task and automating a workflow.


    About the host

    Laurence Gill is an IT leader with more than two decades of experience overseeing technology implementation across the U.S. government. He is a doctoral candidate in cybersecurity with a dissertation focused on federal IT spending, and has spent years training youth and adults in workforce development skills including financial literacy, cybersecurity, entrepreneurship, and AI. That training mission is the reason this podcast exists: making complex, high-stakes knowledge accessible to the people who need it most, without requiring a technical background to benefit from it.

    24 min
  • Hostage Protocol: When Hackers Hold Patients for Ransom
    2026/03/16

    Ransomware attacks on hospitals are not a technology problem — they are a patient safety crisis. In Episode 4, Laurence Gill draws on his background as a doctoral candidate in cybersecurity and two decades of federal IT leadership to break down why healthcare is the number one ransomware target in the country, how these attacks produce documented patient harm, and why the ransom decision is a clinical emergency, not a policy debate. Anchored by the ransomware storyline in The Pitt Season 2 Episode 7 — which aired the same morning the University of Mississippi Medical Center was hit by a real attack — this episode delivers the governance framework every healthcare leader needs before the next crisis lands: how to build resilience before an attack, how to execute during one, and the accountability questions that belong on every leadership agenda right now.

    29 min
  • Confident Nonsense: When AI Lies With Authority in Healthcare
    2026/03/09

    AI doesn’t just get things wrong. It gets things wrong with complete confidence — citing studies that don’t exist, building logical arguments around biological impossibilities, and delivering dangerous recommendations in the fluent, authoritative voice of a clinical expert. In healthcare, that gap between confidence and accuracy isn’t an inconvenience. It’s a patient safety crisis.


    In Episode 3 of AI Literacy for Leaders, Laurence Gill breaks down the mechanics of AI hallucination in clinical settings — drawing on guidance from the World Health Organization, UK regulators, and a landmark study that tested whether humans can actually catch AI lies. The findings are more alarming than most healthcare leaders realize.


    You’ll learn the three distinct types of AI hallucination that clinicians need to recognize, why experienced physicians miss them more often than you’d expect, and why using a second AI to check the first one doesn’t solve the problem. You’ll get a practical green-yellow-red framework for where AI is safe to use, where it requires careful oversight, and where it should never go near a clinical decision. And you’ll hear about a failure mode that almost nobody is talking about — not AI that lies, but AI that goes dangerously silent.
    The episode uses storylines from The Pitt on HBO Max — where a hospital’s AI clinical assistant hallucinates a treatment recommendation and calls the entire tool into question — as a narrative anchor for what real healthcare organizations are navigating right now.


    The future of medicine isn’t AI versus doctor. It’s the clinician who knows how to interrogate AI output versus the one who accepts it at face value. This episode gives you the framework to be the former.

    Runtime: Approx. 30 minutes

    Hosted by: Laurence Gill

    Series: AI Literacy for Leaders

    30 min
  • Agents of Chaos: When AI Gets the Power to Act
    2026/03/04

    What happens when AI stops responding and starts doing? In Episode 2 of AI Literacy for Leaders, Laurence Gill breaks down one of the most revealing stress tests ever run on autonomous AI agents — a research experiment called Agents of Chaos, where AI systems were given real tools: email accounts, file access, and the ability to execute commands. Then they were let loose.

    What the researchers found wasn’t a story about rogue AI. It was a story about what happens when organizations deploy powerful systems without the architecture to contain them.

    You’ll hear about the agent that wiped its entire email account trying to delete one message — and celebrated. The social engineering attack that extracted a user’s home address, bank account, and social security number in four messages. And the developer community’s response that reframes the entire conversation: these aren’t prompting problems. They’re architecture problems. And architecture problems have solutions.

    By the end of this episode, you’ll understand what autonomous agents actually are, why they represent a fundamentally different category of risk than the AI you already know, the three architectural fixes that separate a safe deployment from a dangerous one, and the accountability question that better engineering alone cannot answer.

    The agents aren’t coming. They’re already here. This episode gives you the framework to lead in that reality.


    Learn more about Laurence at: www.laurencegill.com

    22 min
  • Why 95% of AI Projects Fail (And What Leaders Can Do About It)
    2026/02/20

    Learn more about Laurence Gill at: www.laurencegill.com


    The headlines say AI is transforming everything. The data tells a different story.

    In this debut episode of AI Literacy for Leaders, host Laurence Gill breaks down one of the most sobering statistics in enterprise technology today — 95% of AI initiatives are delivering zero measurable ROI. That means for every 20 AI projects your organization launches, 19 of them are burning cash.

    But this isn’t a story about technology failing. It’s a story about foundations.

    Laurence unpacks the five bottlenecks that turn high-potential AI into expensive science projects — from dirty data and fragile infrastructure, to silent model decay, governance gaps, and the human culture problems that no algorithm can fix. No jargon, no hype. Just the unsexy, critical realities that every leader needs to understand before greenlighting their next AI investment.

    In this episode:

    ∙ Why 80% of your organization’s data is invisible to most AI systems

    ∙ The hidden technical debt quietly killing your AI initiatives

    ∙ What “data drift” is and why set-it-and-forget-it is lethal for AI

    ∙ How the EU AI Act is changing the legal stakes of AI deployment

    ∙ Why CEO-level oversight is the single biggest predictor of AI success

    The bottom line: AI isn’t a magic wand. It’s a mirror. And until leaders start treating data infrastructure as seriously as the models built on top of it, the failure rate isn’t going anywhere.

    AI Literacy for Leaders is built for executives, department heads, and decision makers who need to understand AI well enough to lead — without going back to school for a computer science degree.

    17 min
  • Podcast Brief 02: The Canada and Germany AI Alliance
    2026/02/19

    Canada and Germany have signed a major joint declaration of intent, launching a new sovereign technology alliance.


    2 min
  • Podcast Brief: France Bans Teams and Zoom Over Spying
    2026/02/18

    France has officially banned the Microsoft Teams, Zoom, Google, and Cisco Webex platforms for all government use. In this episode, I explain why.

    2 min