Episodes

  • Foundations for AI Success | Faye Ellis
    2026/04/28

    Most organizations are excited about AI. Far fewer are actually ready for it. In this episode of The Pluralsight Podcast, host Josh Burkhead sits down with Faye Ellis — AWS Hero and Pluralsight Author Fellow, cloud architect turned educator, and AI upskilling strategist — to talk about what separates organizations that are stuck in AI curiosity mode from those that are building real, measurable capability.

    Faye brings a practitioner's perspective to some of the most pressing questions in L&D and technology leadership today: How do you close skills gaps when the technology keeps moving? How do you bring non-technical teams along without losing them? And why are 80% of AI pilots still failing to reach production — even as investment in AI continues to climb?

    Whether you're leading an L&D function, managing a technology team, or trying to figure out where to even start with AI upskilling, this conversation is packed with frameworks and honest perspectives you can take back to your team.

    In this episode:

    • Why fear is not an AI strategy — and what to do instead

    • How to move from AI curiosity to a skills-first, outcome-driven program

    • The case for AI literacy at every level of the organization, not just technical teams

    • What a successful upskilling program actually looks like in practice

    • Why the organizations getting it right treat learning as a continuous journey, not a project

    Chapters:

    00:01:08 — From Data Centers to AI: Faye's Career Journey

    00:04:07 — What Got Her Hooked on Teaching

    00:05:41 — The AI Curiosity Trap: Why Organizations Stay Stuck

    00:09:21 — What It Looks Like When Strategy Clicks

    00:12:53 — Running a Skills Gap Analysis in a Moving Target Environment

    00:15:13 — Including Non-Technical Teams in the Talent Pipeline

    00:18:31 — Building a Program That Actually Works

    00:21:07 — Connecting Learning to Business Outcomes

    00:26:13 — Designing for Confidence, Not Just Competence

    00:31:22 — Scaling a Learning Culture Without Letting It Fizzle

    00:37:00 — Trust as the Hidden Driver of Upskilling Success

    00:41:02 — What Leaders Are Still Getting Wrong About AI Literacy

    00:44:20 — Rapid Fire: Myths, Hard Truths, and One Thing in Common

    Want more insights on AI, security, and cloud? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

    Connect with Faye Ellis on LinkedIn: https://plrsg.ht/420lS3W

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    46 min
  • Your AI Needs a Reviewer | Maaike Van Putten
    2026/04/15

    What does it take to write code that's actually ready for an AI-powered world — and what happens when it isn't?

    In this episode of The Pluralsight Podcast, Maaike Van Putten — software developer, Pluralsight author, and instructor known for making technical concepts genuinely approachable — makes the case that clean code has never mattered more than it does right now. Not because the standard has changed, but because AI can generate bad code faster than ever before, and someone has to catch it.

    Maaike traces that argument back to a practical reality: as AI takes on more of the writing, the job of the developer increasingly becomes the job of the reviewer. From there, she breaks down why functions that do too many things are the silent killer of maintainable codebases, what the "driving fast" analogy reveals about the relationship between speed and code quality, and why AI should be treated as a teammate — not an authority — before you let it anywhere near data you can't afford to lose.

    We also get into why the entry bar for junior developers has shifted dramatically, how a brag document can be a genuine defense against imposter syndrome, and what scheduling "tech dates" actually looks like when you're trying to protect learning time in a world that never stops demanding more of it.

    If you're early in your development career, leading a team of developers, or just trying to figure out how to work alongside AI without letting it work against you — this conversation is a grounded, practical look at the habits and mindsets that hold up across every wave of change.

    Chapters:

    03:12 Why the Entry-Level Developer Market Is Struggling Right Now

    05:51 Clean Code: Why Small Habits Make or Break a Developer

    08:40 Why Code Quality Matters More in the AI Era

    10:21 What Happens When a Team Isn't Aligned on Standards

    11:16 AI as a Teammate, Not an Authority: Lessons from a Hard Drive Wipe

    13:09 What Leaders Should Consider Before Deploying AI in Development

    14:15 Vibe Coding vs. Agentic Coding: Is There a Difference?

    14:34 Why Reading Code Is Now More Valuable Than Writing It

    16:20 How to Schedule and Protect Learning Time (Tech Dates)

    18:33 How to Ask Your Manager for Learning Time

    19:30 Beating Information Overload: Focus on Fundamentals

    21:47 The Brag Document: Fighting Imposter Syndrome with Evidence

    24:51 How to Share Your Work Without Feeling Exposed

    26:35 What Motivated Maaike to Start Teaching and Creating Content

    27:58 Skills Young Developers Are Overlooking Right Now

    30:03 Maaike's 2026 Goals & Upcoming Book: Illustrated Python

    📖 Illustrated Python by Maaike Van Putten — available now on Amazon: https://a.co/d/0dSvCYrR

    Want more insights on AI, security, and cloud? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

    Connect with Maaike Van Putten on LinkedIn: https://www.linkedin.com/in/maaikevanputten/

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    33 min
  • Skills First, Roles Second | Jose Ramirez
    2026/04/01

    What if the reason your AI adoption isn't working has nothing to do with the technology — and everything to do with how you prepared your people?

    In this episode of The Pluralsight Podcast, Jose Ramirez — L&D strategist and former research analyst who spent a decade advising CIOs on building high-performing tech teams — makes the case that most organizations are solving the wrong problem. It's not a tools problem. It's a skills problem. And until leaders make learning part of the job instead of a break from it, no amount of AI investment will move the needle.

    Jose traces that argument back to a simple but powerful reframe: the difference between building AI tool adopters and building AI value creators. From there, he breaks down why a skills-first approach makes teams more resilient than role-based hiring, how the best tech leaders use storytelling to win over skeptical stakeholders, and why handing employees a new AI tool without context or strategy is one of the most expensive mistakes a leader can make right now.

    We also get into how to measure the real impact of upskilling beyond completion rates, why career mobility is the most overlooked metric in any L&D program, and what it looks like when a learning culture is actually working.

    If you lead technology teams, learning programs, or both — this conversation is a practical and honest look at what it takes to close the skills gap before it's too late.

    Want more insights on Security, Cloud, and AI? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Connect with Jose Ramirez on LinkedIn → https://www.linkedin.com/in/joseramirez5/

    Questions or comments? Email → podcast@pluralsight.com

    Website → https://plrsg.ht/4rlhB5m

    Subscribe to our channel and hit the notification bell to stay up to date with the latest tech career and interview insights from Pluralsight → https://plrsig.ht/subscribe

    42 min
  • AI Ethics, Bias, and Responsible Innovation | Kesha Williams
    2026/03/25

    What happens when the data you feed an AI system is already broken — and no one stops to ask why?

    In this episode of The Pluralsight Podcast, Kesha Williams — AI ethicist, AWS Hero, and 30-year tech veteran — makes the case that building powerful AI systems isn't enough. Building responsible ones is the only real standard that matters.

    Kesha traces her focus on AI ethics back to a single project: a crime prediction model that exposed how easily biased data can corrupt a machine learning system before a single line of code is written. From there, she breaks down the three types of bias teams face — data, algorithmic, and interpretation — why interpretation bias is the one most teams are still getting wrong, and what model drift means for organizations that think their work is done once a model ships.

    We also get into AI governance in the age of agents, why the ability to roll back an AI action may be the most underrated capability in any AI stack, and what an AI Center of Excellence actually looks like in practice.

    If you're building AI systems — or leading teams that do — this conversation is a practical and honest look at where things go wrong, and what it actually takes to get them right.

    Chapters:

    00:00:33 — Introduction: Kesha Williams, AWS AI Hero

    00:01:05 — Kesha's 30-year journey and spotting emerging tech early

    00:02:51 — The moment that changed everything: building a crime prediction model

    00:04:18 — Pre-crime, Minority Report, and bias hiding in UK stop-and-search data

    00:05:44 — The Clear News AI case study: how bias shapes what a nation reads

    00:07:57 — The three types of bias — and why interpretation bias is now the hardest

    00:09:16 — Role play: interpretation bias and the home loan example

    00:11:53 — Red flags: why skipping model retraining silently reintroduces bias

    00:13:21 — Favorite tools: SageMaker Clarify, AI Fairness 360, and Fairlearn

    00:14:22 — SHAP and LIME: making model decisions explainable

    00:15:28 — Agentic AI governance: visibility, guardrails, and rollback

    00:18:09 — Accountability and the case for an AI Center of Excellence

    00:20:53 — Skills engineers need to prioritize: prompt engineering and LLM literacy

    00:22:37 — The mindset of learners who thrive: curiosity and innovation

    00:24:32 — No-code platforms, citizen developers, and guardrails

    00:25:28 — Where to find Kesha: LinkedIn and Pluralsight

    Want more insights on AI, security, and cloud? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

    Connect with Kesha Williams on LinkedIn: https://www.linkedin.com/in/keshaewilliams/

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    27 min
  • Quantum, AI, and the Case for Continuous Curiosity | Frank La Vigne
    2026/03/17

    What does it take to stay curious, keep learning, and stay relevant when the technology landscape keeps shifting beneath your feet?

    In this episode of The Pluralsight Podcast, Frank La Vigne — Principal AI Product Marketing Manager at Red Hat and one of Pluralsight's most dedicated learners — makes the case that adaptability isn't just a career skill. It's the only real career strategy.

    Frank breaks down what's actually kept him learning every single day for over 1,000 consecutive days, why intrinsic motivation beats structured programs every time, and what leaders get wrong when they try to inspire their teams to grow. From Commodore 64 nostalgia to quantum cryptography to the hidden risks of AI-generated code, this conversation covers a lot of ground — and all of it connects back to one idea: the most adaptable people and organizations always win.

    We also dig into the looming collision between quantum computing and modern encryption, what Frank took away from the NVIDIA conference in DC, and why cutting junior talent pipelines today could be one of the most costly mistakes the industry makes.

    Chapters:

    00:01:16 — Meet Frank La Vigne

    00:02:07 — 1,100 Days of Learning: Inside Frank's Pluralsight Streak

    00:04:25 — How to Keep Your Team Motivated to Learn

    00:08:05 — Commodore 64 and the Roots of a Tech Career

    00:10:31 — Quantum Computing and the Future of Cybersecurity

    00:15:42 — How AI Is Reshaping Red Hat's Security Approach

    00:17:56 — Are We in an AI Bubble? The Dot-Com Parallel

    00:20:21 — Inside the Nvidia Conference: Sovereign AI and National Security

    00:23:50 — When AI Generates Bad Code: The Developer Tension

    00:25:54 — The Junior Talent Pipeline Problem

    00:28:13 — Adaptability as the Core Skill of the Future

    00:29:19 — What Leaders Overlook in AI Adoption and Skill Development

    00:32:59 — Frank's Favorite Pluralsight Authors and Learning Areas

    00:35:24 — Final Thoughts and Where to Find Frank

    Want more insights on AI, security, and cloud? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

    Connect with Frank La Vigne on LinkedIn: https://www.linkedin.com/in/frank-lavigne/

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    37 min
  • Why Security Policies Fail: The Human Side of Cybersecurity | John Elliott
    2026/03/10

    Most security failures aren't technical — they're human. So why do we keep designing security programs that ignore how people actually think and behave?

    In this episode of The Pluralsight Podcast, John Elliott — Pluralsight author fellow, PCI DSS contributor, and specialist in regulated security and data protection — makes the case that the language, culture, and psychology behind your security program matter just as much as the controls themselves.

    John breaks down why policies get misread, ignored, or worked around, and what leaders can do differently. From the neurolinguistics of security training to the aviation concept of "just culture," this conversation is packed with practical frameworks for building security programs that people actually follow.

    We also dig into the expanding attack surface of agentic AI, why your cybersecurity team is likely more anxious than you realize, and what organizations need to do right now to prepare for what's coming.

    Chapters:

    02:58 How John Discovered the Human Side of Security

    05:30 Why Security Communication Is So Often Overlooked

    06:03 Where Policies Break Down in Practice

    08:26 The Importance of Explaining the "Why"

    09:31 Connecting Individual Behavior to Organizational Security

    11:41 Designing Controls and Training People Will Actually Follow

    12:49 Compliance Is Always a Risk Decision

    14:36 Can You Ever Hit 100% Security Coverage?

    17:03 Beta Testing Policies Before You Roll Them Out

    18:05 What Most Teams Get Wrong About Security Training

    19:15 The COM-B Model: Capability, Opportunity, and Motivation

    21:04 How to Diagnose the Real Skill Gap in Your Organization

    24:24 Don't Patronize People — And Don't Give Them 50 Things Not to Do

    25:44 The Compliance Budget: You Only Get 3% of Someone's Brain

    27:55 Building a Healthy Security Culture

    28:10 Psychological Safety as the Foundation of Security Culture

    29:10 What "Just Culture" Means and Where It Comes From

    30:34 The Badge Policy Problem — And Why It Backfired

    34:07 Balancing Risk Appetite Across Large Enterprises

    35:22 AI's Unique and Poorly Understood Attack Surface

    38:09 Agentic AI, Open Source Agents, and the Enterprise Risk

    41:49 Two Practical Changes Leaders Can Make Right Now

    44:49 Benchmarking Security Skills

    Want more insights on Security, Cloud, and AI? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

    Connect with John Elliott on LinkedIn: https://www.linkedin.com/in/withoutfire/

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    46 min
  • The Technologists' Edge: Skills that AI Can't Replace | Dr. Lyron Andrews
    2026/03/04

    In a world where AI can generate code, automate tasks, and accelerate innovation, what skills still set great technologists apart?

    In this episode of The Pluralsight Podcast, Dr. Lyron Andrews shares his unconventional path into technology — from working before finishing high school to building a career through certifications, teaching, and eventually earning his doctorate at 49. Along the way, he explains why resistance, trial and error, and even failure aren't liabilities — they're signals of growth.

    We explore how simplifying complexity unlocks deeper understanding, from quantum computing and cryptography to Zero Trust architecture. Lyron breaks down intimidating concepts with practical analogies and challenges technologists to focus less on the "kitchen" (the tools) and more on the "meal" (business outcomes).

    The conversation also dives into AI governance, ISO 42001, and why organizations risk accelerating the wrong results if they don't build security and guardrails into their AI strategies from the start.

    Chapters:

    02:37 Lyron's Unconventional Path Into Tech

    05:59 Learning How to Learn

    08:21 What Makes Someone Employable in Tech

    12:57 The Superpower of Children

    18:33 Babe Ruth, Failure, and Experimentation

    24:16 Why Shared Definitions Matter

    26:44 Simplify the Complex

    30:16 AI Governance and ISO 42001

    36:14 Skills AI Can't Replace

    39:52 AI Bias and the Dutch Tax Fraud Case

    43:06 Zero Trust and Federal Security Challenges

    48:02 The "Frankenstein Tech Stack" Problem

    50:17 Outcomes Before Tools

    55:04 Stackable Credentials and Career Agility

    59:21 How to Choose Skills to Learn

    1:06:20 Increase Your Value, Understand the Business
    Want more insights on Security, Cloud, and AI? Subscribe to our newsletters: https://plrsg.ht/3MZ78ya

    Follow Pluralsight on LinkedIn: https://www.linkedin.com/company/pluralsight/

    Connect with Dr. Lyron Andrews on LinkedIn

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    1 hr 11 min
  • Why Most Training Fails (and How to Make Learning Stick) | Dr. Will Thalheimer
    2026/02/19

    Most L&D teams rely on learner satisfaction surveys to gauge training effectiveness. The problem? Happy learners and competent learners aren't the same thing. Dr. Will Thalheimer, learning researcher and author of The CEO's Guide to Training, E-Learning, and Work, breaks down why traditional evaluation methods send organizations in the wrong direction — and shares the four learning-science principles (retrieval practice, spacing, context alignment, and feedback) that research shows can double training results. He also introduces a framework for rethinking how L&D creates competitive advantage, moving beyond "did they like it?" to "can they actually perform?"

    Want to go deeper? Check out our weekly newsletters focused on Security, Cloud, and AI.

    Follow Pluralsight on LinkedIn and join the conversation: https://www.linkedin.com/company/pluralsight/

    Check out Dr. Will Thalheimer's latest book "The CEO's Guide to Training, E-Learning, and Work": https://a.co/d/05f2TMMh

    Connect directly with Dr. Will Thalheimer on LinkedIn

    Will Thalheimer is a learning scientist, author, and internationally recognized expert on evidence-based learning design. With more than four decades of experience researching how people learn and how training actually works in the real world, Will has become one of the most influential voices challenging myths and assumptions in workplace learning.

    He is the founder of Work-Learning Research, where he helps organizations design learning experiences that drive real behavior change and business impact. Will is best known for his groundbreaking research on learning transfer, spaced practice, retrieval practice, feedback, and evaluation, as well as for creating the Learning Transfer Evaluation Model (LTEM), which redefines how organizations should measure the effectiveness of learning.

    Will is the author of two widely respected books:

    • Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness
    • The CEO's Guide to Training, eLearning & Work: Empowering Learning for a Competitive Advantage

    His work consistently bridges the gap between academic research and day-to-day learning practice, helping teams build significantly more effective learning and learning evaluation practices.

    A frequent speaker, advisor, and thought leader, Will is known for his clarity, rigor, and willingness to challenge the status quo in workplace learning. His work continues to shape how organizations think about upskilling, capability building, and creating learning experiences that truly stick.

    Questions or comments? podcast@pluralsight.com

    www.pluralsight.com

    40 min