
Founder Mode

By Kevin Henrikson and Jason Shafton

Summary

Founder Mode is a podcast for builders—whether it’s startups, systems, or personal growth. It’s about finding your flow, balancing health, wealth, and productivity, and tackling challenges with focus and curiosity. Each week, you’ll gain actionable insights and fresh perspectives to help you think like a founder and build what matters most.

Hosts: Kevin Henrikson and Jason Shafton
Categories: Management & Leadership, Leadership, Economics
Episodes
  • Fire Your Worst Customers
    2026/04/30

    EPISODE 54

    Kevin and Jason break down why Anthropic is out of compute, why that's actually a strategy, and what it means for everyone using Claude right now. They dig into the Mythos model as the best marketing moment in AI, why artificial scarcity works, and why $200/month for Claude Max is the cheapest hire you'll ever make. Then they shift to AI in the enterprise — why one "AI Week" won't rewire your company, why Anthropic's one-person growth marketing team is the new bar, and the playbook for founders selling into big companies: pick a hard enough wedge, stop selling the product, and sell transformation instead.


    CHAPTERS

    00:00 – Cold Open: The Mythos Model Is Too Good To Release

    00:35 – Welcome to Founder Mode

    00:59 – Why Anthropic Is Out of Compute

    02:28 – Good Customer, Bad Customer: Who's to Blame?

    04:00 – $200/Month Is the Cheapest Hire You'll Ever Get

    05:31 – Token Maxing and the New Scarcity

    07:01 – How to Fire Bad Customers (And Why Anthropic Is Doing It Wrong)

    10:50 – The Mythos Model: The Best Marketing Moment in AI

    13:19 – AI in the Enterprise: Why One AI Week Isn't Enough

    18:25 – The "One More Prompt" Flow State

    19:57 – Selling Into Enterprise: Pick a Hard Wedge, Sell Transformation

    22:48 – Closing Takeaways and Top Five


    LINKS

    Stay Connected with Founder Mode

    Subscribe to our newsletter


    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    25 min
  • AI as a Financial Co-Pilot with Shain Noor
    2026/04/23

    EPISODE 53

    In this episode, Kevin and Jason sit down with Shain Noor, co-founder of Silvia, an AI-powered personal CFO built to help people reason through financial decisions, not just track them. Shain explains why the entire history of personal finance apps has focused on clicking and aggregating data rather than helping users actually decide what to do, and how Silvia uses Anthropic-powered agents with a verification layer to deliver trustworthy, personalized financial guidance. The conversation covers the co-pilot vs. autopilot distinction, the surprising discovery that users ask Silvia things they'd never tell their human financial advisor, how proactive alerts like the daily summary email drove retention, and why building the reasoning layer first, before adding any execution or action capabilities, is the right foundation for trust.


    CHAPTERS

    00:00 – The judgment-free financial advisor

    02:38 – Introducing Shain Noor and Silvia

    03:51 – Why finance apps have always missed the reasoning layer

    05:51 – Co-pilot vs. autopilot: trust, transparency, and guardrails

    08:29 – What surprised Shain: users sharing what they hide from their advisors

    12:42 – Measuring retention and the proactive alerts breakthrough

    17:02 – Team size, the ProCap merger, and competing with legacy finance

    19:41 – The future: everyone becomes a manager of AI agents


    LINKS

    Connect with Shain Noor

    Silvia | LinkedIn | X/Twitter


    Stay Connected with Founder Mode

    Subscribe to our newsletter



    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    23 min
  • Grab A Shovel
    2026/04/16

    EPISODE 52

    Jason Shafton and Kevin Henrikson unpack where AI is genuinely useful and where it starts to create more noise than leverage, using examples from AI email triage, long chat memory drift, and agentic workflows. Kevin explains how memory can become polluted when models start treating their own prior inferences as fact, including a prompt he used to compare what an AI thought was “ground truth” against what he had actually told it. From there, the conversation shifts into a practical framework for building AI systems and human teams the same way: define the job, provide the right tools and access, layer in review and guardrails, and judge success by whether time spent together compounds into more output. They close by connecting startup hiring, high-agency operators, and founder-led culture back to the same core test they use for AI: does this person or tool create leverage, or does it create drag?


    CHAPTERS

    00:00 – AI memory drift and false “ground truth”

    01:24 – Testing AI email triage and the risks of over-filtering

    03:13 – Good AI versus bad AI in real workflows

    05:31 – Why controlled memory leads to more consistent AI outputs

    08:29 – How to apply AI to workflows that currently rely on humans

    11:12 – Building multi-agent content systems with clear roles and QA

    13:40 – Hiring high-agency people for early-stage teams

    16:01 – The “pick up the shovel” standard for startup operators

    22:36 – The real test for both employees and AI: leverage or drag

    26:16 – Founder Mode Top 5 Takeaways


    LINKS


    Stay Connected with Founder Mode

    Subscribe to our newsletter


    Connect with Kevin

    LinkedIn | X/Twitter


    Connect with Jason

    LinkedIn | X/Twitter

    27 min
No reviews yet