Episodes

  • AI Appetite Is Easy, Digestion Is Hard with Diana Wu David
    2026/05/01

Everybody wants AI, and adoption conversations are dominated by tools, models, and metrics. Far fewer organizations have figured out what to do with AI once it's inside the building. The harder question, one most leaders are avoiding, is: what happens to the humans? Diana Wu David, Director of Futures at ServiceNow, joins Juan and Tim to unpack what leaders should and should not be doing.

    See omnystudio.com/listener for privacy information.

54 min
  • TAKEAWAY - AI Appetite Is Easy, Digestion Is Hard with Diana Wu David
    2026/05/01

This is the takeaway episode with Diana Wu David, Director of Futures at ServiceNow, where we discuss AI adoption, its metrics, and why far fewer organizations have figured out what to do with AI once it's inside the building.


5 min
  • EVA - A Framework for Evaluating Voice Agents by ServiceNow
    2026/04/29

    Voice AI agent evaluation — why it's fundamentally harder than text, how cascade failures derail conversations invisibly, and ServiceNow's open-source framework to establish industry evaluation standards. Featuring real audio examples showing authentication failures, leaked reasoning, and latency problems.

    WHAT WE COVER

    TARA BOGAVELLI — Research Engineer, ServiceNow
    Leading the open-source voice agent evaluation framework. Explains why existing benchmarks don't measure what matters and what ServiceNow is releasing to establish industry standards.

    KATRINA STANKIEWICZ — Staff Machine Learning Engineer, ServiceNow
    Cascade model architecture expert. Breaks down STT → LLM → TTS failure modes, named entity transcription challenges, and real audio example analysis.

    GABRIELLE GAUTHIER MELANÇON — Staff Applied Research Scientist, ServiceNow
    Multi-language evaluation specialist. Reveals why Large Audio Language Models lag behind, the native speaker requirement, and bot-to-bot simulation methodology.

    CHAPTERS
    0:00 Introduction — The evaluation gap
    1:11 ServiceNow's Open-Source Framework Announcement — Tara Bogavelli
    2:43 Meet the Researchers
    3:43 Voice-Specific Challenges — Tara Bogavelli
    5:03 Cascade Architecture: STT → LLM → TTS — Katrina Stankiewicz
    7:57 The Named Entity Problem — Katrina Stankiewicz
    10:06 Evaluation Metrics: Accuracy vs Experience — Gabrielle Gauthier Melançon
    11:23 Bot-to-Bot Testing at Scale — Gabrielle Gauthier Melançon
    14:30 The LALM Gap: Why Audio AI Judges Struggle — Tara Bogavelli
    16:57 Real Audio Example: Flight Rebooking Gone Wrong
21:58 Breaking Down the Failures — Katrina Stankiewicz
28:30 Wrap-Up & Resources

    KEY INSIGHTS

The Cascade Failure Problem: STT → LLM → TTS errors propagate invisibly
Named Entity Transcription: the #1 enterprise blocker — names, confirmation codes, and emails break authentication
Accuracy vs Experience: perfect task completion means nothing if users hang up due to poor experience
The LALM Gap: Large Audio Language Models lag behind text LLMs — human evaluators remain essential
Latency Kills Conversations: five-second pauses make users think the call dropped, breaking the experience even when tasks complete
Open-Source Framework: ServiceNow is releasing evaluation tools, metrics, and bot-to-bot simulation methodology for the industry
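To make the cascade failure problem concrete, here is a minimal toy sketch (not the EVA framework — all function names and data are hypothetical) of how one misheard character in the STT stage propagates through the LLM and TTS stages of a cascade voice pipeline, producing an authentication failure the user never caused:

```python
# Toy cascade pipeline: STT -> LLM -> TTS. A single speech-to-text
# misrecognition of a named entity (a confirmation code) silently
# derails the downstream dialogue, mirroring the failure mode above.

def stt(audio: str) -> str:
    """Pretend speech-to-text: mishears one character of the code."""
    return audio.replace("B7X9", "D7X9")  # 'B' misheard as 'D'

def llm(transcript: str, valid_codes: set[str]) -> str:
    """Pretend dialogue model: authenticates against the (wrong) transcript."""
    code = transcript.split()[-1]
    if code in valid_codes:
        return f"Code {code} confirmed. Rebooking your flight."
    return f"Sorry, I can't find code {code}. Please repeat it."

def tts(reply: str) -> str:
    """Pretend text-to-speech: faithfully voices the already-wrong reply."""
    return f"<audio> {reply}"

spoken = "My confirmation code is B7X9"
reply = tts(llm(stt(spoken), valid_codes={"B7X9"}))
print(reply)  # the user hears a rejection even though they spoke correctly
```

The point of the sketch: each stage behaves "correctly" given its input, so no single component reports an error — which is why, as the episode argues, end-to-end evaluation (not per-component benchmarks) is needed to catch these failures.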

    LEARN MORE

Website: https://servicenow.github.io/eva/
GitHub: https://github.com/servicenow/eva
Blog Post: https://huggingface.co/blog/ServiceNow-AI/eva
Dataset: https://huggingface.co/datasets/ServiceNow-AI/eva

    ABOUT

Hosted by Bobby Brill. The ServiceNow Insights podcast explores AI research, real-world applications, and the people building the future of work. #VoiceAI #AIEvaluation #ServiceNow #MachineLearning #OpenSource #ConversationalAI #STT #TTS #LLM #VoiceAgents #AIResearch #Podcast


30 min
  • It's Friday: Juan and Tim rant about AI, Agents, and the Uncomfortable Truth About Data's New Center of Gravity
    2026/04/24

    Juan and Tim's Friday rant covers a lot of ground, from Juan's TED takeaways on AI's unprecedented speed and what it means for humanity, to the uncomfortable shift data teams need to make: work and decisions are the point, not pipelines and gold layers. They dig into what Medallion Architecture 2.0 looks like (feedback loops, insights to action, agent governance), why organizational design theory applies directly to agent swarms, and what library science can teach us about the future data stack. The thread running through all of it: the humans who thrive in this moment won't be the ones who build the most, but the ones with taste.


39 min
  • Think Like a Librarian: Why the Reference Interview Is the Framework Data Teams Are Missing with Jenna Jordan and Amalia Child
    2026/04/16

Data teams spend enormous energy building pipelines, platforms, and governance frameworks, but often skip the most fundamental step: truly understanding what people are actually asking for. In this episode, Juan and Tim sit down with data librarians Jenna Jordan and Amalia Child to explore why library science may be the missing lens for data work.

    At the heart of the conversation is the reference interview, a structured technique librarians use to uncover a user's "true information need," which almost never matches the first question they ask. From establishing trust and listening without judgment, to asking open-ended questions and verifying whether the need was actually met, the reference interview offers a rigorous, repeatable framework for anyone serving data users.

    If you've ever wondered why data projects deliver less value than expected, this episode will reframe the problem entirely and give you a practical toolkit to start closing the gap.


52 min
  • TAKEAWAY - Think Like a Librarian: Why the Reference Interview Is the Framework Data Teams Are Missing with Jenna Jordan and Amalia Child
    2026/04/15

Data teams obsess over pipelines and platforms but often skip the most fundamental step: truly understanding what people are actually asking for. We chat with data librarians Jenna Jordan and Amalia Child, who share a framework for exactly that: the reference interview, which might be the most practical toolkit data teams have never used.


6 min
  • AI Control Tower - Governing AI at Scale with ServiceNow
    2026/04/15

    AI governance at scale — what it means, how to do it, and what regulations you need to know now. Host Bobby Brill brings together five ServiceNow experts across two conversations for a complete 20-minute briefing on governing AI in the enterprise.

    ━━━━━━━━━━━━━━━━━━━━━━━━
    WHAT WE COVER
    ━━━━━━━━━━━━━━━━━━━━━━━━

    RAVI KRISHNAMURTHY — VP, AI Platform, ServiceNow
    Why hidden AI is one of the biggest unmanaged risks in the enterprise — and why governance is an accelerator, not a brake.

    PETER WEIGT — Responsible AI, ServiceNow
The innovation paradox: how AI Control Tower makes governance a team sport and breaks down the silos that slow AI deployment.

    SAMPADA CHAVAN — AI Control Tower, ServiceNow
    How AI Control Tower was built, what the discovery problem really looks like, and why compliance must be baked into the AI lifecycle — not bolted on at the end.

    ANDREA LAFOUNTAIN — AI Legal, ServiceNow
    The three regulatory frameworks every enterprise needs to know: EU AI Act, Colorado AI Act, and NIST. Plus: the compliance strategy that scales across all of them.

    NAVDEEP GILL — Responsible AI, ServiceNow
    The math on enterprise AI compliance — why it's exponential — and how AI Control Tower's automated discovery keeps you ahead of it.

    ━━━━━━━━━━━━━━━━━━━━━━━━
    CHAPTERS
    ━━━━━━━━━━━━━━━━━━━━━━━━

    0:00 Introduction
    1:23 The Hidden AI Problem — Ravi Krishnamurthy & Sampada Chavan
    5:33 AI Control Tower in Practice — Peter Weigt & Sampada Chavan
    7:37 The Regulatory Landscape — Andrea LaFountain & Navdeep Gill
    14:38 Compliance in Action & Key Deadlines
    17:05 Wrap-Up

    ━━━━━━━━━━━━━━━━━━━━━━━━
    KEY DATES TO KNOW
    ━━━━━━━━━━━━━━━━━━━━━━━━

    EU AI Act enforcement: August 2026
    Colorado AI Act enforcement: June 2026
    NIST AI RMF: Voluntary framework, increasingly referenced by regulators

    ━━━━━━━━━━━━━━━━━━━━━━━━
    LEARN MORE
    ━━━━━━━━━━━━━━━━━━━━━━━━

    ServiceNow AI Control Tower: https://www.servicenow.com
    NIST AI Risk Management Framework: https://www.nist.gov/artificial-intelligence

    ━━━━━━━━━━━━━━━━━━━━━━━━
    ABOUT THIS PODCAST
    ━━━━━━━━━━━━━━━━━━━━━━━━

    Hosted by Bobby Brill. A ServiceNow podcast exploring the people, technology, and ideas shaping the future of work.

    #AIGovernance #ServiceNow #AIControlTower #ResponsibleAI #EUAIAct #EnterpriseAI #AICompliance #FutureOfWork #NowAssist #Podcast


19 min
  • The Void Between Data and Decisions with Pete Williams
    2026/04/09

There's a gap at the heart of most organisations, and data hasn't filled it. Pete Williams, an experienced data leader, has spent years watching companies build sophisticated data capabilities, only to see insight stall before it reaches action. In this episode, we diagnose why: organisations still make decisions along traditional vertical lines, while data sits in a horizontal layer that was never given real authority. AI doesn't solve this misalignment; it exposes it. Pete makes a compelling case for why fixing this structural void is the defining challenge for data leadership today.


57 min