• Making AI Agents Reusable Across the Enterprise (Samantha McConnell)
    2026/02/07

    When everyone builds agents, stop building the same capabilities over and over. Standardize and reuse common features across your business.

    In this episode of “What’s the BUZZ?”, Andreas Welsch sits down with Samantha McConnell to discuss how large enterprises can build reusable AI agents that create real business value. The conversation moves beyond vendor claims to examine how organizations operationalize agentic AI, manage rapid innovation cycles, and balance empowerment with governance.

    Samantha shares how Cox approaches AI through centralized hubs, agent registries, and differentiated governance models for individual productivity agents versus enterprise-scale solutions. The discussion also highlights why adoption is critical, and why many AI agents will have much shorter lifecycles than traditional software products.

    Catch the BUZZ:

    • Preventing reinvention through AI hubs and agent registries
    • Governing enterprise AI agents without slowing innovation
    • Managing the lifecycle of rapidly evolving AI agents
    • Measuring adoption and business impact, not just usage
    • Connecting agent initiatives to clear business success metrics
    • Using a land-and-expand approach to scale agentic AI responsibly

    Key Takeaways:

    • Balance innovation and control by tailoring governance to agent scale and risk
    • Design for faster time-to-value and shorter solution lifespans
    • Define outcome-based success metrics before deploying AI agents

    A practical episode for leaders focused on turning agentic AI from experimentation into repeatable, enterprise-ready impact.


    Questions or suggestions? Send me a Text Message.

    Support the show

    ***********
    Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


    Level up your AI Leadership game with the AI Leadership Handbook (https://www.aileadershiphandbook.com) and shape the next generation of AI-ready teams with The HUMAN Agentic AI Edge (https://www.humanagenticaiedge.com).

    More details:
    https://www.intelligence-briefing.com
    All episodes:
    https://www.intelligence-briefing.com/podcast
    Get a weekly thought-provoking post in your inbox:
    https://www.intelligence-briefing.com/newsletter

    23 min
  • Top Lessons from Deploying AI Agents in Banking (Mo Jamous)
    2026/01/24

    Imagine shrinking a one-hour code review to under ten minutes—and using that same agentic approach to boost sales, reduce fraud, and make branch and call‑center staff far more productive.

    In this episode, Andreas Welsch interviews Mo Jamous, CIO at U.S. Bank, who has taken agentic AI from experiments into real production at a major financial institution. Mo walks through what worked, what surprised him, and the practical guardrails banks (and other regulated companies) need to adopt agents safely and effectively.

    Episode highlights:

    • A clear three‑bucket strategy: persona‑driven productivity, revenue/growth use cases, and operational excellence (fraud, security, DevOps, resilience).
    • A concrete win: an agentic code‑review tool built in weeks that reduced review time from ~1 hour to <10 minutes and scaled to hundreds of thousands of reviews per year.
    • How to instrument agents for measurement: attach metadata to agents, count executions, and map successful runs to dollar or productivity impact so you can report ROI (a brief sketch follows this list).
    • People, process, platform: upskill teams with hackathons and brown‑bags, put a governance council (risk, security, compliance) in place, and build an orchestration/registry layer to track many agent implementations.
    • Common pitfalls: getting stuck on “one tool” decisions, underestimating change management and adoption, and failing to bake monitoring and guardrails into deployments.
    • Practical starting advice: pick high‑value, low‑complexity pilots (e.g., developer or call‑center assistants), measure outcomes from day one, and scale using an observability dashboard rather than betting on a single vendor.
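
    The instrumentation point above can be made concrete with a minimal Python sketch. It assumes hypothetical names and numbers rather than the tooling discussed in the episode: each agent run is recorded with metadata, executions are counted, and successful runs are mapped to an estimated productivity impact that can feed an ROI report.

        import time
        from dataclasses import dataclass, field

        @dataclass
        class AgentMetrics:
            agent_name: str
            minutes_saved_per_run: float          # assumed manual-vs-agent baseline
            runs: int = 0
            successes: int = 0
            log: list = field(default_factory=list)

            def record(self, success: bool, duration_s: float, metadata: dict) -> None:
                # Count every execution and keep per-run metadata for later auditing.
                self.runs += 1
                if success:
                    self.successes += 1
                self.log.append({"success": success, "duration_s": duration_s, **metadata})

            def estimated_hours_saved(self) -> float:
                # Map successful runs to an estimated productivity impact in hours.
                return self.successes * self.minutes_saved_per_run / 60

        metrics = AgentMetrics("code-review-agent", minutes_saved_per_run=50)

        def run_agent(task: str) -> bool:
            start = time.time()
            success = True                        # placeholder for the real agent call
            metrics.record(success, time.time() - start, {"task": task, "team": "platform"})
            return success

        run_agent("review pull request")
        print(f"{metrics.successes}/{metrics.runs} successful runs, "
              f"~{metrics.estimated_hours_saved():.1f} hours saved")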

    Who should listen: business and tech leaders who want actionable guidance for moving beyond demos and into production-ready agentic AI that creates measurable business outcomes.

    Want step‑by‑step lessons from an operator who’s done it? Listen to the full episode now to learn how to turn agentic AI hype into real business value.

    27 min
  • What Enterprise AI Actually Wins At (Jon Reed)
    2026/01/10

    Stop chasing flashy multi‑agent demos. The big gains in enterprise AI are coming from focused, context‑driven systems, not agents in a room.

    In this year‑end conversation, host Andreas Welsch and analyst Jon Reed cut through the noise to explain where AI is failing in the wild and where it's producing measurable business value. Jon lays out the vendor‑customer gap, the real risks of agentic experiments, and the practical architectures that are working today: compound systems, context engineering, RAG/knowledge graphs, evaluation and observability, and right‑time data layers.

    What you’ll learn:

    • Why multi‑agent orchestration rarely works at scale today and the narrow exception where it does
    • How vendors are ahead of buyers, and how leaders should close the gap with clear communication and upskilling
    • The difference between treating AI as a worker vs. a tool, and why that choice matters for people and projects
    • Practical, enterprise‑ready wins: document intelligence, procurement RFP automation, AP/AR, hyper‑personalization, and focused assistants
    • Why explainability, audit trails, and granular autonomy toggles are essential for trust and compliance
    • How to approach AI readiness: clean data, metadata/annotation, and composing smaller specialized models into reliable workflows

    If you build or buy AI in the enterprise, this episode is full of real examples and honest advice on where to invest, what to avoid, and how to design systems that produce results now, while preparing for broader scale.

    Tune in to hear the full conversation and get actionable guidance for turning AI hype into business outcomes.

    1 hr 7 min
  • Teaching AI Agents Ethical Behavior (Rebecca Bultsma)
    2025/12/20

    Can you trust an AI agent to act in line with your values — and who’s responsible when it doesn’t?

    In this episode, Andreas Welsch talks with AI ethics consultant Rebecca Bultsma about the pitfalls of rushing AI agents into business workflows and practical steps leaders should take before handing autonomy to software. Rebecca draws on her early ChatGPT experiments and academic work in data & AI ethics to explain why generative AI raises fresh ethical risks and how organizations can reduce harm.

    What you’ll learn:

    • Why generative AI and agents amplify old AI ethics problems (bias, hidden assumptions, and Western-centric worldviews).
    • Why you should build internal understanding first: experiment with low-stakes, traceable use cases before deploying public agents.
    • The importance of audit trails, explainability, and oversight to trace decisions and assign accountability when things go wrong.
    • Practical red flags: agents that transact autonomously, weak logging, and complacency about vendor claims.
    • A legal reality check: new laws (like California’s chatbot rules) are emerging and could increase liability for organizations that deploy chatbots or agents prematurely.

    The top takeaways:

    • Learn by experimenting personally and internally in your organization to discover where agents fail.
    • Start small with low-stakes, narrowly scoped tasks you can monitor and audit.
    • Don’t rush; rather, observe others' failures, train your people, and build governance before going public.

    If you’re a leader evaluating agents or responsible for AI governance, this episode gives clear, actionable advice for keeping your organization out of the headlines for the wrong reasons. Tune in to hear the whole conversation and learn how to turn AI hype into safer business outcomes.

    16 min
  • Designing Workforces for Agentic AI: What HR Must Do Next (Todd Raphael)
    2025/12/06

    What actually changes when AI agents become part of your workforce — and which human skills still matter most?

    In this episode, host Andreas Welsch talks with HR and talent-intelligence veteran Todd Raphael about the practical realities of bringing agentic AI into organizations. They move beyond proofs-of-concept to ask the tough questions: How do agents fit into daily workflows, what invisible human contributions should you protect, and how should HR and IT collaborate to redesign roles, org charts, and the employee lifecycle?

    Listen for concrete thinking and strategic framing, including:

    • The hidden value humans bring: Why many critical contributions (trusted relationships, customer touchpoints, institutional memory) don’t appear on job descriptions — and what that means when you automate tasks.
    • Rethinking structure and advancement: How flatter org models and new measures of impact (knowledge, networks, influence) may change who gets promoted and how leadership is defined.
    • HR’s seat at the table: Why HR is uniquely positioned to plan holistically for hire-to-retire changes, from skills-based hiring to internal marketplaces, reskilling, and retention when agents handle more tasks.


    You’ll also hear examples and practical prompts for leaders: identify the intangible work that must remain human, map tasks vs. relationships before automating, and start workforce planning that considers people and agents together.

    If you’re an HR leader, people manager, or technology decision-maker trying to turn agent hype into durable business outcomes, this episode gives you a playbook to start redesigning work the right way.

    Tune in now to learn how to protect human advantage and build an effective human+agent workforce.

    27 min
  • Agents Need IDs: How to Authenticate & Score Agent Trust (Tim Williams)
    2025/11/22

    When AI agents can self‑spawn, act at machine speed and delete their own trails, identity and trust become business-critical.

    In this episode, Andreas Welsch talks with Tim Williams, an experienced practitioner who’s helped organizations commercialize AI, about the security gaps agentic AI exposes and practical ways to close them. Tim explains why traditional usernames, passwords, and persistent tokens won’t cut it, how trust for agents should be treated like a credit score rather than a binary yes/no, and why observability and transaction-level controls are essential.

    Highlights you’ll get from the conversation:

    • Why agents operate at a different scale and cadence than humans, and the new risks that creates.
    • Real breach lessons (e.g., persistent token compromises) that show why persistent access is dangerous.
    • The concept of sliding trust: using a trust score to gate actions (low-risk vs. high-risk transactions); a brief sketch follows this list.
    • Short-lived, transaction-based approvals and why persistent credentials must be replaced.
    • Why cryptographically verifiable, immutable identifiers matter for accountability, and why blockchain can help.
    • Practical governance: observability, human-in-the-loop checkpoints, and preparing infrastructure in parallel with agent adoption.
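
    The sliding-trust and short-lived-approval ideas above can be illustrated with a minimal Python sketch; the thresholds, identifiers, and expiry window are assumptions for illustration, not the guest’s implementation: a trust score gates transactions by risk tier, and approvals are issued per transaction and expire quickly instead of persisting.

        import time
        import uuid

        # Assumed policy: minimum trust score required per transaction risk tier.
        RISK_THRESHOLDS = {"low": 0.3, "medium": 0.6, "high": 0.9}

        def issue_transaction_approval(agent_id: str, trust_score: float,
                                       risk: str, ttl_seconds: int = 60):
            """Return a short-lived, single-transaction approval, or None to escalate."""
            if trust_score < RISK_THRESHOLDS[risk]:
                return None                      # below threshold: human-in-the-loop
            return {
                "agent_id": agent_id,
                "token": uuid.uuid4().hex,       # one-off token, not a persistent credential
                "risk": risk,
                "expires_at": time.time() + ttl_seconds,
            }

        approval = issue_transaction_approval("invoice-agent", trust_score=0.72, risk="medium")
        if approval is None:
            print("Blocked: trust score too low, route to human review")
        else:
            print(f"Approved {approval['risk']}-risk transaction, expires at {approval['expires_at']:.0f}")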

    Who this episode is for: business leaders deciding what to delegate to agents; security and identity teams rethinking access; product and platform builders designing safe workflows for autonomous systems.

    If you want actionable guidance on how to let agents accelerate your business without exposing you to runaway risk, tune in and learn how to turn agent hype into reliable business outcomes.

    26 min
  • How AI Agents Drive Disruptive Innovation (Christian Muehlroth)
    2025/11/15

    AI agents are reshaping how enterprises innovate, organize work, and experience disruption.

    In the latest episode of “What’s the BUZZ?”, Andreas Welsch speaks with Christian Muehlroth, CEO of ITONICS, about how agentic AI will redefine innovation management and why many organizations remain structurally unprepared for it.


    Here are the key insights from the conversation:

    • AI’s foundations were created decades ago, but recent advances in computing, interfaces, and delivery models have turned it into a scalable innovation engine.
    • Each technological wave builds on prior ones, accelerating change while large organizations slow down due to processes, politics, and legacy structures.
    • Agents function as tireless digital interns with expert-level capabilities in narrow domains, amplifying the output of teams that already demonstrate initiative and creativity.
    • Many organizations place AI on top of legacy processes or rely too heavily on public LLMs, resulting in misaligned outputs and “AI tourism” instead of measurable impact.
    • Clean enterprise data, secure deployment setups, and redesigned processes are essential to making agentic AI operational and strategically valuable.

    Key takeaways:

    1. Focus on real-value use cases: innovation begins with a business problem, not with experimenting for its own sake.
    2. Prioritize structural readiness: clean data, redesigned workflows, and enterprise-grade AI infrastructure determine whether agentic systems deliver results.
    3. Empower motivated teams: the highest return comes from equipping individuals who seek change with advanced tools that amplify their capacity, rather than attempting blanket adoption across the organization.


    Leaders need to take disruption seriously, double down on strategic intelligence, empower the people who want change, and invest in data and platform foundations before scaling agents.


    Is Agentic AI already disrupting businesses (or can we just not see it yet)?

    30 min
  • Evolving Your Leadership for Hybrid Teams (Danielle Gifford)
    2025/10/31

    Agentic AI is pushing leaders to rethink roles, processes, and governance far beyond another automation wave.

    In this episode, Andreas Welsch speaks with Danielle Gifford, PwC Managing Director of AI, about how organizations should prepare for agentic AI. Danielle draws on frontline experience with enterprise pilots and deployments to explain why agents require new infrastructure, clearer role boundaries, and fresh approaches to governance and workforce design.

    Highlights from the conversation:

    • Why agents are different from classic rule-based automation: they’re goal-driven, context-aware and can act with autonomy, which creates both opportunity and risk.
    • Where companies (especially in Canada) are on the adoption curve: pilots and POCs are increasing, but full-scale deployments need better data, guardrails, and change planning.
    • How leaders should approach agent projects: start with the business problem, map processes, and decide where human + agent collaboration delivers the highest value.
    • Workforce design and the “digital coworker”: practical advice on defining role boundaries, delegation rules, and how to evaluate outcomes when humans and agents collaborate.
    • Multi-agent orchestration and governance: how to prevent agents from converging on weak solutions and how to build review, control, and accountability into agent systems.

    Key takeaways:

    1. Business first: define the problem before choosing technology. Agents aren’t a silver bullet — they must solve a real, scoped pain point.
    2. Move from experimentation to implementation: Canadian enterprises are ready to progress beyond proofs of concept and invest in production-ready agent solutions with proper controls.
    3. Agents ≠ automation: treat agents as goal-based collaborators that need explicit boundaries, evaluation metrics, and workforce redesign.


    If you lead teams, product strategy, or AI initiatives and want practical guidance for turning agent hype into measurable outcomes, this episode is for you. Listen now to get the full conversation and actionable next steps.

    31 min