Episodes

  • Fire Your Worst Customers
    2026/04/30

    EPISODE 54

    Kevin and Jason break down why Anthropic is out of compute, why that's actually a strategy, and what it means for everyone using Claude right now. They dig into the Mythos model as the best marketing moment in AI, why artificial scarcity works, and why $200/month for Claude Max is the cheapest hire you'll ever make. Then they shift to AI in the enterprise — why one "AI Week" won't rewire your company, why Anthropic's one-person growth marketing team is the new bar, and the playbook for founders selling into big companies: pick a hard enough wedge, stop selling the product, and sell transformation instead.


    CHAPTERS

    00:00 – Cold Open: The Mythos Model Is Too Good To Release

    00:35 – Welcome to Founder Mode

    00:59 – Why Anthropic Is Out of Compute

    02:28 – Good Customer, Bad Customer: Who's to Blame?

    04:00 – $200/Month Is the Cheapest Hire You'll Ever Get

    05:31 – Token Maxing and the New Scarcity

    07:01 – How to Fire Bad Customers (And Why Anthropic Is Doing It Wrong)

    10:50 – The Mythos Model: The Best Marketing Moment in AI

    13:19 – AI in the Enterprise: Why One AI Week Isn't Enough

    18:25 – The "One More Prompt" Flow State

    19:57 – Selling Into Enterprise: Pick a Hard Wedge, Sell Transformation

    22:48 – Closing Takeaways and Top Five


    LINKS

    Stay Connected with Founder Mode

    Subscribe to our newsletter


    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    25 min
  • AI as a Financial Co-Pilot with Shain Noor
    2026/04/23

    EPISODE 53

    In this episode, Kevin and Jason sit down with Shain Noor, co-founder of Silvia, an AI-powered personal CFO built to help people reason through financial decisions, not just track them. Shain explains why the entire history of personal finance apps has focused on clicking and aggregating data rather than helping users actually decide what to do, and how Silvia uses Anthropic-powered agents with a verification layer to deliver trustworthy, personalized financial guidance. The conversation covers the co-pilot vs. autopilot distinction, the surprising discovery that users ask Silvia things they'd never tell their human financial advisor, how proactive alerts like the daily summary email drove retention, and why building the reasoning layer first, before adding any execution or action capabilities, is the right foundation for trust.


    CHAPTERS

    00:00 – The judgment-free financial advisor

    02:38 – Introducing Shain Noor and Silvia

    03:51 – Why finance apps have always missed the reasoning layer

    05:51 – Co-pilot vs. autopilot: trust, transparency, and guardrails

    08:29 – What surprised Shain: users sharing what they hide from their advisors

    12:42 – Measuring retention and the proactive alerts breakthrough

    17:02 – Team size, the ProCap merger, and competing with legacy finance

    19:41 – The future: everyone becomes a manager of AI agents


    LINKS

    Connect with Shain Noor

    Silvia | LinkedIn | X/Twitter


    Stay Connected with Founder Mode

    Subscribe to our newsletter



    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    23 min
  • Grab A Shovel
    2026/04/16

    EPISODE 52

    Jason Shafton and Kevin Henrikson unpack where AI is genuinely useful and where it starts to create more noise than leverage, using examples from AI email triage, long chat memory drift, and agentic workflows. Kevin explains how memory can become polluted when models start treating their own prior inferences as fact, including a prompt he used to compare what an AI thought was “ground truth” against what he had actually told it. From there, the conversation shifts into a practical framework for building AI systems and human teams the same way: define the job, provide the right tools and access, layer in review and guardrails, and judge success by whether time spent together compounds into more output. They close by connecting startup hiring, high-agency operators, and founder-led culture back to the same core test they use for AI: does this person or tool create leverage, or does it create drag?


    CHAPTERS

    00:00 – AI memory drift and false “ground truth”

    01:24 – Testing AI email triage and the risks of over-filtering

    03:13 – Good AI versus bad AI in real workflows

    05:31 – Why controlled memory leads to more consistent AI outputs

    08:29 – How to apply AI to workflows that currently rely on humans

    11:12 – Building multi-agent content systems with clear roles and QA

    13:40 – Hiring high-agency people for early-stage teams

    16:01 – The “pick up the shovel” standard for startup operators

    22:36 – The real test for both employees and AI: leverage or drag

    26:16 – Founder Mode Top 5 Takeaways


    LINKS

    Connect with Kevin Henrikson

    LinkedIn | X/Twitter


    Stay Connected with Founder Mode

    Subscribe to our newsletter




    Connect with Jason

    LinkedIn | X/Twitter

    27 min
  • Turning Audiences Into Businesses with Courtney Spritzer
    2026/04/09

    EPISODE 51

    Courtney Spritzer breaks down how she built, scaled, and monetized a community-first business by starting with conversations instead of a business model, and why most founders confuse audiences with real communities. Drawing on her journey from launching a social media agency to co-founding Entreprenista, she explains how trust and engagement—not follower count—determine whether a community actually works, and how to measure that through real outcomes like connections, clients, and visibility. The conversation covers practical approaches to monetization through membership tiers, founder-led power groups, and events, as well as why IRL experiences are a powerful growth engine. Courtney also shares how she evaluates opportunities as an investor, how she maintains authenticity while scaling, and why founders should build around their strengths rather than chase trends like AI.


    CHAPTERS

    00:00 – Following vs. community: the core distinction

    04:49 – From agency to Entreprenista: turning audience into business

    08:23 – How to measure if a community is actually working

    10:56 – Monetization: tiers, power groups, and testing models

    13:15 – Transitioning post-exit and doubling down on community

    14:22 – IRL events as a growth and engagement engine

    17:50 – Investing and evaluating founder-led opportunities

    20:25 – Building in the AI era vs. staying human-first


    LINKS

    Connect with Courtney Spritzer

    Entreprenista | Entreprenista LinkedIn | X/Twitter | LinkedIn | Instagram


    Stay Connected with Founder Mode

    Subscribe to our newsletter


    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    26 min
  • When AI Agents Go Rogue
    2026/04/02

    EPISODE 50

    Kevin Henrikson and Jason Shafton unpack the reality of working with AI agents, why they feel more “broken” than chatbots when they fail, and what it actually takes to make them useful in real workflows. They explore the shift from prompt-based interactions to autonomous systems with memory, triggers, and recurring tasks, and why expectations are often misaligned with how these systems behave. The conversation dives into the importance of guardrails, human-in-the-loop review, and treating AI like a junior employee rather than a perfect operator. They also cover the emerging dopamine loop of working with AI, how it’s changing the way people think and work, and why communication—not technical skill—is becoming the key differentiator in an AI-driven world.


    CHAPTERS

    00:00 – Why AI agents feel more broken than chatbots

    02:38 – Harness engineering, workflows, and expectations

    05:50 – AI agents as employees and human-in-the-loop systems

    07:25 – The dopamine loop and changing how we work

    13:54 – The future of work and communication as the edge


    LINKS

    Connect with Kevin Henrikson

    Website | LinkedIn | X/Twitter


    Stay Connected with Founder Mode

    Subscribe to our newsletter





    Connect with Jason

    LinkedIn | X/Twitter

    19 min
  • The Future of AI-Built Software with Nima Keivan
    2026/03/26

    EPISODE 49

    Nima Keivan joins Kevin Henrikson and Jason Shafton to break down what it takes to move AI-built software from demos into production. Drawing on his background in robotics and autonomy, Nima explains why the real challenge is not generating code but closing the “autonomy gap” between what a system can do reliably and the messy corner cases humans still have to carry. He unpacks why durable software starts with requirements, how his team approaches PRD-driven development and scenario testing, why just-in-time mocking matters when automations touch live enterprise systems, and where natural-language software building is already working versus where full end-to-end autonomy is still not ready.


    CHAPTERS

    00:00 – The autonomy gap between demos and production

    03:15 – What robotics teaches AI builders about reliability

    07:32 – Why code generation is not the real bottleneck

    13:18 – How Durable turns operator workflows into production software

    26:32 – When natural language can actually replace writing code


    LINKS

    Connect with Nima Keivan

    Durable | LinkedIn | X/Twitter


    Stay Connected with Founder Mode

    Subscribe to our newsletter



    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    32 min
  • The End of Prompt Engineering with Dennis Pilarinos
    2026/03/19

    EPISODE 48

    Dennis Pilarinos joins Kevin Henrikson and Jason Shafton to unpack what AI in software development actually looks like beyond the demos, arguing that the real bottleneck is not code generation but context. Drawing on his experience building Buddybuild, working inside Apple, Amazon, and Microsoft, and now leading Unblocked, Dennis explains why source code alone is not enough for either engineers or AI agents to succeed, how historical decisions buried in Slack, Jira, and docs shape production-safe software, where AI-generated code breaks down in legacy systems, what traits matter most when hiring in an AI-native era, and why the workflows teams rely on today, including pull requests, may look very different in the near future.


    CHAPTERS

    00:00 – Why AI coding tools fail without context

    04:03 – Building Unblocked to solve the missing layer in engineering

    12:11 – Where AI-generated code breaks in real production environments

    20:33 – Hiring for curiosity, ownership, and success in an AI-native world

    27:18 – Founder lessons from aviation, uncertainty, and staying grounded


    LINKS

    Connect with Dennis Pilarinos

    Unblocked | LinkedIn | X/Twitter


    Stay Connected with Founder Mode

    Subscribe to our newsletter



    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    33 min
  • AI Agents Are the New Employees
    2026/03/12

    EPISODE 47

    In Episode 47 of Founder Mode, Kevin Henrikson and Jason Shafton unpack why AI agents should no longer be thought of as simple tools, but as a new kind of workforce that founders can hire, coach, evaluate, and orchestrate. They explore how the founder role is shifting from building and doing toward designing systems, managing agent workflows, and making the judgment calls that still require human trust, taste, and strategic thinking. Along the way, they discuss AI-first operating habits, what this means for hiring and team design, why small teams may become dramatically more powerful, where human oversight still matters most, and why the real moat in an AI-native world is not the agents themselves but distribution, proprietary workflows, and unique data.


    CHAPTERS

    00:00 – The founder’s job has changed

    00:32 – AI agents as a new staffing model

    06:29 – What AI-first workflows look like in practice

    16:48 – Hiring and scaling in the age of agents

    24:06 – The new moat: data, workflows, and distribution


    LINKS

    Connect with Founder Mode

    Founder Mode


    Stay Connected with Founder Mode

    Subscribe to our newsletter



    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    27 min