Episodes

  • Less DEI, more FAIRness (ft. author Lily Zheng)
    2026/02/24

    For years, organizations have poured millions into DEI training.

    And yet most employees still report discrimination. Promotion gaps persist. Trust remains uneven.

    So what’s going on?

    In this episode of FUTUREPROOF., I sit down with Lily Zheng — strategist and author of Fixing Fairness — to interrogate a hard truth: much of what we call DEI doesn’t work. Not because fairness is unpopular. Not because inclusion is misguided. But because we keep trying to fix people instead of fixing systems.

    Lily introduces the FAIR framework — Fairness, Access, Inclusion, and Representation — and argues that the real leverage isn’t in workshops. It’s in incentives, evaluation criteria, hiring processes, and executive accountability.

    We explore:

    • Why standalone DEI training can backfire
    • The “missing stair” metaphor — and how organizations normalize dysfunction
    • The Cobra Effect of poorly designed diversity incentives
    • Why representation is ultimately about trust, not optics
    • What meritocracy gets wrong about itself
    • And why rebranding DEI won’t solve structural problems

    At a moment when DEI faces political backlash and corporate retrenchment, Lily makes a counterintuitive claim: the future of workplace inclusion will be more rigorous, more measured, and more accountable — not less.

    This is a systems conversation.

    Not about slogans.
    Not about performative commitments.
    About incentives, power, and what actually moves outcomes.

    If you care about leadership, governance, and the second-order effects of institutional design, this episode will challenge you.

    32 min
  • Soft Skills Are the Hard Advantage in the AI Era (ft. Bushra Khan)
    2026/02/17

    For years, we treated emotional intelligence like a cultural add-on.

    Nice to have.
    Important, maybe.
    But not central to performance.

    That framing doesn’t survive the AI era.

    In this episode of FUTUREPROOF., I sit down with Dr. Bushra Khan, founder of Leading with BK, to examine what actually differentiates leaders as automation compresses the knowledge gap. When AI can draft, analyze, summarize, and even simulate difficult conversations, the advantage shifts. It moves from what you know to how you show up.

    Bushra has spent over 15 years helping leaders translate emotional intelligence from buzzword into operating system. We talk about why “soft skills” should be understood as strategic skills, how negativity bias quietly distorts leadership judgment, and why loneliness inside high-performing teams is less about remote work and more about emotional avoidance.

    We also explore some uncomfortable tensions:

    • If AI amplifies leaders, what exactly is it amplifying?
    • When does candor become bluntness — and erode trust instead of building it?
    • Why do leaders underestimate the emotional consequences of automation?
    • What does bravery look like when decisions are both rational and painful?

    Bushra argues that most organizations are still trying to fix people instead of fixing environments. They invest in workshops while ignoring incentives. They push productivity while neglecting psychological safety. They assume proximity equals connection.

    But as AI takes over more technical tasks, influence becomes the real differentiator. And influence is emotional before it is analytical.

    This conversation isn’t about positivity or platitudes. It’s about leadership under pressure — layoffs, automation, rapid skills shifts — and what it takes to signal trust and authority through noise.

    Because the future of work won’t just test our systems.

    It will test our emotional maturity.

    28 min
  • How People Endure When Systems Collapse (ft. Trevor Reed, author & Russia detainee)
    2026/02/10

    This episode of FUTUREPROOF. is different.

    My guest is Trevor Reed, a former U.S. Marine who was wrongfully detained and abused in a Russian gulag for nearly three years, freed in a high-profile prisoner exchange in 2022—and then made a decision few could comprehend: he voluntarily went to Ukraine to fight against the same system that imprisoned him.

    In this conversation, Trevor reflects on what captivity does to the human mind, how survival reshapes your definition of justice, and why freedom—real freedom—can’t be taken for granted once you’ve lost it.

    We talk about:

    • What daily life inside a Russian penal colony is actually like—and how close he came to dying there
    • The mental discipline required to survive prolonged isolation, hunger, and uncertainty
    • The emotional toll of being turned into a geopolitical bargaining chip
    • Why revenge eventually gave way to a deeper definition of justice
    • The surreal contrast between everyday life and active war zones in Ukraine
    • Being critically wounded by a landmine—and what it means to survive twice
    • How his understanding of freedom, responsibility, and humanity has fundamentally changed

    This is not a conversation about politics.
    It’s a conversation about power, resilience, moral injury, and what it means to remain human when systems fail you.

    Trevor’s memoir, Retribution: A Former US Marine's Harrowing Journey from Wrongful Imprisonment in Russia to the Front Lines of the Ukrainian War, is not an easy read—but it is an important one. And this conversation is not comfortable—but it is necessary.

    25 min
  • The ROI of Not Being a Robot (ft. author & VaynerX exec Claude Silver)
    2026/02/03

    What if the most undervalued leadership skill in the AI era isn’t technical fluency—but emotional presence?

    This episode of FUTUREPROOF. features Claude Silver, the world’s first Chief Heart Officer and the No. 2 executive at VaynerX, joining the show to unpack why authenticity, empathy, and belonging are no longer “nice-to-haves,” but strategic advantages.

    Claude’s 2025 book, Be Yourself at Work, challenges the long-standing belief that professionalism requires emotional distance. Instead, she argues that in a world defined by AI, automation, and burnout, the leaders who win are the ones who lead with heart—intentionally, skillfully, and without performative fluff.

    We explore:

    • Why “authenticity” has been misunderstood—and how to practice it without oversharing or losing authority
    • What leading with heart actually looks like inside a 2,000-person global organization
    • How emotional skills become power skills as AI absorbs more technical work
    • The difference between fitting in and true belonging—and why that gap is costing companies talent and trust
    • How leaders can balance emotional bravery with emotional efficiency in an always-on, high-pressure world

    This is a conversation about leadership after the old playbook breaks—and what replaces it when humanity becomes the edge.

    25 min
  • Designing AI You Can Trust & the Future of Human-Centered Healthcare (ft. Peter Skillman, Philips' global head of design)
    2026/01/27

    Healthcare is entering its most consequential design moment in decades.

    As AI moves from the background into the core of clinical decision-making, diagnostics, and patient experience, the real question isn’t what AI can do—it’s whether people can trust it.

    This week on FUTUREPROOF., I’m joined by Peter Skillman, Global Head of Design at Philips, and one of the few leaders shaping what responsible, human-centered AI looks like in healthcare at scale.

    Peter has spent three decades designing products and systems at the intersection of hardware, software, and services—across Palm, Nokia, Microsoft, AWS, and now Philips. Today, he’s helping reimagine healthcare not as a hierarchy of authority, but as an experience built around patients, clinicians, and trust.

    We talk about:

    • Why AI in healthcare must be designed with people, not just for them
    • What happens when teenagers—future patients and clinicians—help design care systems
    • How healthcare design is shifting from “what looks impressive” to “what feels humane”
    • Why speed, clarity, and emotional context now matter as much as clinical accuracy
    • The long timelines of healthcare innovation—and why today’s design choices shape the next decade
    • What it really means to make AI visible, explainable, and trustworthy in life-and-death environments

    This conversation isn’t about futuristic demos or abstract ethics.
    It’s about how design decisions today will determine whether AI improves healthcare—or quietly erodes trust in it.

    26 min
  • AI Is Scaling Fast—Accessibility Isn’t. Here’s How We Fix That.
    2026/01/06

    Guest: Joe Devon
    Title: Chair, GAAD Foundation | Co-founder, Global Accessibility Awareness Day

    AI is reshaping how we design software—but accessibility still too often shows up as an afterthought. In this episode of FUTUREPROOF., Joe Devon joins us to unpack what it actually means to build technology that works for everyone, especially as generative AI becomes embedded across products, platforms, and workflows.

    Joe explains why accessibility isn’t a niche concern—it affects more than 1.3 billion people globally—and why AI represents both the biggest risk and the biggest opportunity the accessibility movement has ever seen. We dig into the early findings from the AI Model Accessibility Checker (AIMAC), what most AI models still get wrong about accessible code, and why “AI will fix it later” is a dangerous assumption.

    We also explore how front-end tools like AI-generated captions, voice interfaces, and image descriptions are changing daily life for users with disabilities—and where back-end AI systems can finally close the gap between automated testing and real-world usability. Throughout the conversation, Joe makes a compelling case that accessibility is not just a moral imperative, but a design discipline that will separate future-proof products from legacy ones.

    Topics covered:

    • Why most digital products still fail basic accessibility standards
    • How AI can dramatically expand—or quietly restrict—access
    • What AIMAC reveals about how accessible today’s AI models really are
    • Front-end vs. back-end accessibility breakthroughs
    • The ethical stakes of deploying inaccessible AI at scale
    • Why inclusive design must be a core requirement, not a patch
    23 min
  • Could Crowdfunding Solar Do What Governments Can’t? (ft. Lassor Feasley, renewables.org)
    2025/12/23

    Climate change is a global problem—but climate capital doesn’t flow globally.

    In this episode of FUTUREPROOF., Jeremy sits down with Lassor Feasley, co-founder and CEO of Renewables.org, to unpack why some of the highest-impact climate solutions on Earth remain dramatically underfunded.

    Renewables.org applies a Kiva-style crowdfunding model to distributed solar projects across the Global South. Individuals can invest as little as $25 into no-interest loans that fund solar installations—and are repaid monthly over five years, allowing capital to be recycled again and again.

    Lassor explains why:

    • A dollar invested in Global South solar can deliver up to 5x the carbon impact of a comparable U.S. project
    • Traditional climate fintech and ESG models break down in frontier markets
    • Repayment isn’t just financial—it’s proof of impact
    • Design, not just technology, determines whether climate solutions scale

    This conversation goes beyond solar panels to explore systems, incentives, trust, and the future of climate finance—and why everyday individuals may be better positioned than institutions to fund the energy transition where it matters most.

    If climate change is a race against time, this episode asks a harder question: are we deploying capital where it actually counts?

    20 min
  • What Marketers Need to Know About AI Search Optimization (ft. Aja Frost, HubSpot)
    2025/12/09

    When Google’s algorithm changes caused HubSpot’s traffic to plummet 80%, most companies would have panicked.

    Aja Frost saw an opportunity.

    As Senior Director of Global Growth at HubSpot, Aja led the transformation that helped HubSpot not only recover—but become the most-cited CRM in generative AI results.

    In this episode of FUTUREPROOF., Jeremy Goldman sits down with Aja to talk about how the rules of discovery, demand, and digital visibility are being rewritten in real time—and why Answer Engine Optimization (AEO) may be the next big discipline marketers can’t afford to ignore.

    They discuss:
    🔍 What happens when users trust ChatGPT more than Google
    🧠 How HubSpot rebuilt its content strategy around AI answers
    💬 The formula for getting cited by AI models—and what most brands get wrong
    📈 Why visibility beats clicks in an LLM-driven world
    🌐 The new off-site frontier: Reddit, YouTube, and the “dark funnel” of discovery
    ⚙️ How to measure success when your customer journey starts with a chatbot

    If you work in marketing, SEO, or content—and you’ve felt the ground shifting under your feet—this episode will help you understand how to thrive in the AI search era.

    23 min