Episodes

  • If AI Eats The Routine, What Human Skills Survive?
    2025/11/14

    The hype cycle is over; the accountability era has arrived. We unpack how gen AI has moved from pilots to proof, with daily use now common among senior leaders and measurable ROI becoming the standard.

Pulling from the Wharton School and GBK Collective’s year‑three findings, Google NotebookLM agents trace the three waves of adoption and show why “accountable acceleration” is the defining theme for late 2025.

    TLDR / At A Glance:

    • Gen AI Usage Is Mainstream
      • 82% use Gen AI weekly (+10pp YoY)
      • 46% use it daily (+17pp YoY)
      • ChatGPT (67%), Copilot (58%), Gemini (49%) dominate
      • Interesting note: Gemini grew fastest (+9% YoY)
• ROI Is Now Table Stakes for Businesses
      • 72% formally track ROI metrics
      • 74% report positive returns
      • 88% expect budget increases in next 12 months
      • 60% now have Chief AI Officers - strategy has moved to the C-suite
      • 30% of Gen AI technology spending now goes to internal R&D - enterprises are building custom solutions, not just buying off-the-shelf tools

    Across functions, the story is clear: practical, repeatable work is getting faster. Data analysis, meeting and document summarisation, and everyday writing see the broadest gains, while legal and operations post surprising leaps in self‑reported expertise as tools slot into structured workflows.

Google NotebookLM agents examine the sector split too: tech, telecoms, finance, and professional services are far ahead, while retail and complex physical operations navigate slower integration and tougher data constraints.

    ROI is rising, budgets are expanding, and the spend is changing shape. With 88% planning to increase investment and many allocating £5m+ to gen AI, enterprises are shifting 30% of their budgets into internal R&D to build custom capabilities on top of sanctioned platforms like ChatGPT, Copilot, and Gemini.

    That move, from generic efficiency to defensible differentiation, raises the stakes on governance, data, and talent.

    The toughest challenges now centre on people. Leadership is consolidating responsibility with CAIO roles, access is broadening under tighter guardrails, and AI is increasingly used to manage risk. Yet a training paradox persists: lack of training is a top barrier even as training investment softens.

Google NotebookLM agents dig into augmentation vs skill atrophy, the scramble for advanced talent, and why many leaders expect to hire more interns for AI‑enabled entry roles.

    Our closing challenge: if automation eats the routine, which human skills will you invest in to drive the next wave of return?

    Full study by The Wharton School and GBK Collective here: https://knowledge.wharton.upenn.edu/special-report/2025-ai-adoption-report/

    Enjoy the conversation?

    Follow the show, share it, and leave a review to help more people find it.

    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


17 min
  • From Tools To Teammates: How Agentic AI Rewrites Work And Society
    2025/11/12

    The promise of autonomy just got practical. We explore how agentic AI moves beyond passive automation to become a goal-driven collaborator that plans, adapts, and coordinates at human and machine scale. From UPS’s ORION saving millions of miles to smart factories reducing downtime and energy use, we lay out the concrete ways agents optimise complex systems while staying aligned with human oversight.

TL;DR:

    • Defining agentic AI through goals, autonomy, learning, reasoning, and collaboration
    • Manufacturing shifts to predictive, dynamic operations
    • Precision agriculture and resource-efficient inputs
    • Smart grids balancing renewables, storage, and demand
    • Labour market change and reskilling needs
    • Ethics, transparency, and the EU AI Act
    • A phased, governed roadmap for adoption

    Across sectors, the patterns repeat: precision agriculture uses sensors and imaging to fine-tune inputs in real time; smart grids balance renewables, storage, and demand to keep power stable; research teams accelerate discovery by screening vast chemical spaces and designing better experiments; creatives partner with agents that riff on styles, structure ideas, and expand the canvas without replacing human taste. We unpack the five pillars that make this possible—goal orientation, autonomous decisions, learning and adaptation, complex reasoning, and collaboration—so you can recognise genuine capability and spot hype.

    We also confront the tough trade-offs. Jobs will change, which makes reskilling and mobility essential. Governance must keep pace, with risk-based rules, transparency, and strong safety cases for high-impact deployments. Privacy and data sovereignty demand encryption, auditability, and clear accountability. And for those worried about cost or complexity, we show how cloud platforms, pre-trained models, and no-code tools lower barriers so even lean teams can run meaningful pilots and prove ROI.

    If you want a pragmatic path into autonomy—one that pairs ambition with control—this conversation maps the milestones: pick outcome-centric use cases, start small, build human-in-the-loop guardrails, measure impact, and scale responsibly. Subscribe, share with a colleague who’s planning an AI pilot, and leave a review with the one question you want us to tackle next.

Why not buy the book on Amazon or the audiobook from Audible.com?


    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


20 min
  • From Hype To Habit: How Leaders Turn AI Into 30% Productivity Gains
    2025/11/11

    The ground just shifted under every boardroom and team room: AI isn’t a side experiment anymore, it’s a core business tool changing how work gets done. We trace the acceleration in adoption and show why hands-on leadership—not memos—separates winners from watchers. From marketing and legal to operations and customer support, we walk through concrete examples of where AI already delivers results: real‑time personalisation, faster contract review, earlier demand forecasts, and sharper executive briefings. Along the way, we unpack how small, repeated wins create measurable ROI, lift revenue and profit, and compress decision cycles without sacrificing judgment.

    TL;DR:

    • AI moves from IT to every function
    • Executives use AI for analysis and communication
    • Productivity gains reshape project cycles and models
    • Proof points across quality, consulting, and support
    • Risks of delay and scattered pilots
    • Governance, safety, and human-in-the-loop design
    • Upskilling leaders to set strategy and build trust

    We also address the hard truth about timing. Most tech leaders see a near-term deadline for adoption, and the opportunity cost of waiting is rising: lost market share, slower innovation, and a tougher fight for top talent. But speed without understanding leads to scattered pilots and weak governance. We share a practical path to disciplined speed—pick high-signal workflows, define safety guardrails, and measure outcomes that matter. Expect a clear case for human-in-the-loop review, data security, and transparent audit trails that reduce risk while raising confidence.

    Finally, we confront the leadership gap. Many senior teams haven’t received formal AI training and lack confidence in safe use. That’s solvable. When leaders learn the tools, they can spot real use cases, set strategy, and model the behaviour that drives adoption across the organisation. The message is simple: those who lead from the front will turn AI into lasting advantage; those who wait will be managed by it. If this helps you lead the change, follow the show, share it with a colleague, and leave a review so more leaders can find it.

    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


4 min
  • Practical AI Governance For HR
    2025/11/11

    AI is already inside your organisation, whether leadership has a plan or not. We unpack how HR and L&D can turn quiet workarounds into safe, transparent practice by pairing thoughtful governance with practical training.

    From the dangers of Shadow AI to the nuance of enterprise copilots, we share a clear, humane path that protects people while unlocking real productivity gains.

    TLDR / At a Glance:

    • duty of care for AI adoption in HR and L&D
    • why blanket bans fail and fuel Shadow AI
    • understanding data flows, privacy, and GDPR
    • identifying and mitigating bias in models and outputs
    • transparency, disclosure, and human oversight for decisions
    • culture change to reward openness not secrecy
    • choosing enterprise tools and setting guardrails

    We dig into bias with concrete examples and current legal cases, showing how historical data and cultural blind spots distort outcomes in recruitment and learning.

    Rather than treating AI as a black box, we explain how to map data flows, set boundaries for sensitive information, and publish plain-language guidance that staff can actually follow.

    You’ll hear why disclosure must be rewarded, how managers can credit judgment as well as output, and what it takes to create a culture where people feel safe to say “AI helped here.”

    Hallucinations and overconfidence get their own spotlight. We outline simple verification habits - ask for sources, cross-check claims, and consult a human for consequential decisions - so teams stop mistaking fluent text for facts.

    We also clarify the difference between public tools and enterprise deployments, highlight GDPR and subject access exposure, and show how small process changes prevent large penalties.

    The result is a compact playbook: acceptable use policy, clear guardrails, training on prompting and bias, periodic audits, and a commitment to job enrichment rather than workload creep.

If you’re ready to move beyond fear and bans, this conversation offers the structure and language you can use tomorrow. Subscribe, share with a colleague in HR or L&D, and leave a review with your biggest AI governance question; we’ll tackle it in a future show.

Exciting New AI for HR and L&D Professionals Course:

Ready to move beyond theory and develop practical AI skills for your HR or L&D role? We're excited to announce our upcoming two-day workshop specifically designed for HR and L&D professionals who want to confidently lead AI implementation in their organisations.

    Join us in November at the beautiful MCS Group offices in Belfast for hands-on learning that will transform how you approach AI strategy.

Check the details on how to register for this limited-capacity event here - https://kierangilmurray.com/hrevent/ - or book a chat: https://calendly.com/kierangilmurray/hrldai-leadership-and-development

    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


17 min
  • Leading AI Starts With Behaviour, Not Code
    2025/11/10

    The promise of AI isn’t stalling because the models are weak; it’s stalling because our leadership habits are.

    We open the hood on why executives talk about stewardship and enterprise health while managers feel like they’re juggling a second job just to keep up, then add AI on top.

    The tension isn’t about tools. It’s about incentives, silos, and the courage to share ownership for outcomes that cut across the org chart.

    TLDR:

    • AI framed as a leadership and behavioural challenge
    • Disconnect between C‑suite aspirations and operational reality
    • Managers feeling AI as a “second job” without role redesign
    • Need for shared executive ownership across functional boundaries
    • Team coaching to move from polite alignment to joint accountability
    • Incentives driving individual attainment over enterprise outcomes
    • Governance and KPIs that reward cross‑functional results
    • AI as an accelerant of existing misalignment if behaviours do not change

    Across a candid, practical conversation, we explore how automation refuses to respect functional boundaries and what that means for the C‑suite. If rewards are tied to individual attainment and quarterly optics, leaders will say “collaborate” while behaving in ways that block end‑to‑end value.

We dig into team coaching as a lever for change: shifting executive teams from polite alignment to genuine joint accountability, setting shared KPIs, and making decisions that trade local optimisation for enterprise results.

    We also get real about the strain on middle managers who hear the AI mandate but lack redesigned roles, budgets, and guardrails to make it work.

    You’ll hear clear steps for turning intent into impact: name the vision–reality gap, sponsor value streams that span functions, co-own outcomes at the top, and create short learning cycles where cross-functional teams can test, measure, and adapt.

    We talk governance that empowers rather than slows, incentives that reward cross-boundary wins, and behaviours that build psychological safety so constraints surface early. If you want AI to be a force multiplier instead of a stress multiplier, start by rewiring how leaders lead together.

    If this conversation resonates, follow the show, share it with a colleague who’s navigating AI at scale, and leave a review to help more leaders find it.

    Are you struggling with AI or do you need fractional executive help implementing AI in your business?

If the answer is yes, then let's chat about getting you the help you need - https://calendly.com/kierangilmurray/executive-leadership-and-development


    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


3 min
  • Fractional Power For Growing SMEs
    2025/11/08

    What if you could borrow a battle-tested executive exactly when you need them and only for as long as it takes to move the needle? We pull back the curtain on fractional leadership for SMEs, focusing on the people side of change where complexity lives and value compounds.

TLDR / At a Glance:

    • What fractional leadership is and why it matters for SMEs
    • Complexity of people versus utility of technology
    • Why “digital transformation” distracts from systems and culture
    • How empowerment beats command and control
    • Common SME patterns and how to fix them
    • Product thinking for HR and quick, measurable wins
    • What makes a great fractional leader
    • How to choose and work with a fractional chief

    Across a fast-moving conversation, we define what a fractional C‑suite leader actually does and why it’s not a cheaper helpdesk. Think senior judgment on tap: diagnosing root causes, running smart experiments, and owning outcomes.

    We challenge the myth of “digital transformation” by reframing technology as a utility and putting human systems front and centre. AI and automation clear the noise; empowered teams create the signal. That means unlearning command and control, pushing decisions closer to the market, and building lightweight structures that turn information into action.

    You’ll hear concrete patterns from the SME trenches: overcontrol driven by tight budgets, underused tech, and fads that burn cash without impact.

We share how product thinking applies to HR - test, measure, iterate - and what early proof points look like when you are fixing retention, engagement, and capability. For practitioners eyeing fractional work, we cover the mindset shift from corporate to owner-led businesses, the breadth required across HR disciplines, and why emotional intelligence is as critical as technical depth.

    For founders choosing a fractional chief, we offer a simple playbook: verify track record, align on outcomes, insist on execution, and value a partner who can challenge without breaking trust.

If you want senior expertise without the full-time price tag, and you’re ready to trade decks for delivery, this conversation maps the path.

    Subscribe, share with a fellow leader who needs this lens, and leave a review with the one capability you’d rent first.

    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


18 min
  • Why Executive Fear, Not Technology, Holds Back AI Success
    2025/11/07

    Fear rarely announces itself; it hides behind control, certainty, and the urge to delay.

    We sit down with expert CPO Brian Parks to unpack why artificial intelligence stalls in boardrooms not because of model limits, but because of leadership habits shaped by pressure, incentives, and ambiguity.

    Rather than chasing the newest tool, we explore how fear of failure, public scrutiny, and quarterly cycles push executives to avoid the very experiments that build competence and how that choice quietly taxes performance across the organisation.

    TLDR:

    • Reframing AI as a leadership and behaviour issue
    • Executive fear of failure and loss of control
    • Pressure from short time frames and forecasts
    • Incentives that favour speed over learning
    • The role of AI literacy for leaders and teams
    • Creating safe, small, reversible experiments
    • Retaining AI fluent talent through clarity and trust

    Across this fast-paced conversation, we reframe AI as a human and behavioural challenge. Brian breaks down the real dynamics at the top: shrinking time frames, harder forecasts, and compensation structures that reward visible delivery over exploration.

    We dig into practical ways leaders can replace anxiety with clarity, from building shared AI literacy and setting simple guardrails to running small, reversible pilots that deliver measurable wins.

    You’ll hear how to structure incentives that value time saved and quality improved, pair domain experts with AI specialists, and normalise transparent learning with blameless reviews and demo rituals.

    The talent stakes are high. Your best people are already AI fluent, and if your culture blocks them, they’ll move to teams that welcome their skills. We outline concrete steps to keep them: sponsored learning pathways, clear approved tools, and public recognition for applied outcomes.

    By treating AI initiatives like a portfolio of options rather than a single bet, leaders can manage risk, compound insight, and make it safe to change course as evidence emerges.

    If you’re ready to turn uncertainty into momentum and lead with confidence, tune in, take notes, and start the next small experiment today.

    Subscribe, share with a colleague who needs this, and leave a review to help more leaders find practical, human-centred AI guidance.

Listen in as CPO Brian Parks explains why executives often resist AI, and why they should learn to embrace it to reap its many rewards.

    Are you struggling with AI or do you need fractional executive help implementing AI in your business?

If the answer is yes, then let's chat about getting you the help you need - https://calendly.com/kierangilmurray/executive-leadership-and-development

    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


2 min
  • How The Next Workforce Will Redefine Business With Everyday AI
    2025/11/05

    Forget “early adopters.” A new cohort arrives at work expecting intelligent tools by default. We unpack what it means to hire and lead AI natives—the people who open ChatGPT before Google, who treat Grammarly and Canva as standard kit, and who view AI not as innovation but as basic competence. Their expectations are clear: recruitment that feels intelligent rather than bureaucratic, internal tools that run as smoothly as their favourite apps, and leaders who understand how AI works and use it responsibly.

    TL;DR:

    • AI natives start work with intelligent tools
• Using AI framed as baseline competence
    • Expectations for mature, AI-ready organisations
    • Recruitment that feels intelligent, not bureaucratic
    • Internal tools that are fast, intuitive, personalised
    • Purpose, psychological safety, visible leadership
    • Well-being and flexibility as expectations
    • Leaders who understand and use AI responsibly
    • Growth over hierarchy with coaching and mentoring
    • Upskilling across all generations
    • Redesigning workflows with measurable impact
    • Reverse mentoring for faster leadership learning
    • Continuous learning at the core of culture

    We walk through the practical shifts that help any organisation catch up. First, build digital confidence at every level with targeted upskilling and prompt craft, not just one-off workshops. Then redesign workflows to remove toil and add intelligence, measuring impact in time saved, quality lifted, and happier customers. Reverse mentoring helps senior leaders learn from younger colleagues, while clear guardrails for data, privacy, and ethics keep experiments safe. We talk purpose and psychological safety too, because teams need room to try, fail, refine, and learn without fear.

    Customers already expect Amazon-like personalisation and Netflix-level relevance; employees now expect the same inside their workplace. That means faster onboarding, smarter internal search, AI-assisted drafting, and decision support that keeps humans accountable. We share how to move from scattered tools to integrated intelligence: playbooks, prompt libraries, coaching, and visible leadership that models responsible use. If you want to attract and retain AI-native talent—and help everyone else thrive alongside them—this is your roadmap to a learning-first, human-led, AI-augmented culture.

    If this resonates, follow the show, share it with a colleague, and leave a quick review to help more leaders find it. What’s the first workflow you’ll redesign with AI?


    Read the full article here: What AI Natives Mean for the Future of Your Business & Work


    Support the show


Contact my team and me to get business results, not excuses.

    ☎️ https://calendly.com/kierangilmurray/results-not-excuses
    ✉️ kieran@gilmurray.co.uk
    🌍 www.KieranGilmurray.com
    📘 Kieran Gilmurray | LinkedIn
    🦉 X / Twitter: https://twitter.com/KieranGilmurray
    📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


3 min