Episodes

  • 310 - Mitchell Hashimoto on Ghostty & His Agentic Coding Workflow
    2026/04/14
    Mitchell Hashimoto co-founded HashiCorp, built some of the most impressive DevOps tools like Vagrant and Terraform, sold the company to IBM — and then built a terminal. Ghostty is now where a huge chunk of agentic coding actually happens. Mitchell was an AI skeptic. We walk through his six-step adoption framework and the workflows he uses day to day — warm-start research, Hail Mary prompts across twenty GitHub issues, and knowing when to let the agent slam dunk it.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    HashiCorp
    • Vagrant
    • Terraform
    • IBM acquires HashiCorp
    Ghostty
    • Ghostty - Mitchell's fast, native terminal built for platform integration across Mac and Linux
    • Terminal shell
    • SSH - secure shell
    • PTY - pseudoterminals
    • Terminal Multiplexers
      • tmux - most popular open source one
    • XTGETTCAP by xterm
    • libghostty - the cross-platform terminal emulation library that powers Ghostty's core
    • xterm-js - powers the terminal for apps like VSCode and the cloud
    • JediTerm - IntelliJ's embedded terminal
    • Ghostty is now a non-profit
    • cmux - native macOS terminal multiplexer built on libghostty — a fork Mitchell champions
    • Free Software Definition - the 4 essential freedoms
      1. The freedom to run the program as you wish, for any purpose.
      2. The freedom to study how the program works, and change it to make it do what you wish.
      3. The freedom to redistribute copies so you can help others.
      4. The freedom to distribute copies of your modified versions to others.
    • Mitchell's tweet on unsolicited PRs and transfer of ownership
    The AI Adoption Journey
    • My AI Adoption Journey - Mitchell's blog post outlining his six-step framework
    • Step 1: Drop the Chatbot
      • Episode 301 - AI Coding ladder - different stages of AI adoption
    • Step 2: Reproduce Your Own Work
    • Step 3: End-of-Day Agents
      • OpenAI Deep Research - kick off research tasks for a "warm start" the next morning
      • Spine AI research - deep research tool for longer, hour-long analysis tasks
    • Step 4: Outsource the Slam Dunks
      • Claude status hooks - Warcraft peons
      • Conductor
    • Step 5: Engineer the Harness
      • Episode 307 - Harness Engineering - Fragmented's deep dive on harness engineering, heavily inspired by Mitchell's post
    • Step 6: Always have an Agent running
      • Peter Steinberger
      • Codex plugin for Claude Code
    Get in touch

    We'd love to hear from you. Email is the best way to reach us or you can check our contact page for other ways.

    We want to hear all the feedback: what's working, what's not, topics you'd like to hear more on.

    • Contact us
    • Newsletter
    • YouTube
    • Website
    Co-hosts:
    • Kaushik Gopal
    • Iury Souza

    [!fyi] We transitioned from Android development to AI starting with
    Ep. #300. Listen to that episode for the full story behind our new direction.
    1 hour
  • 309 - Background Agents
    2026/04/01

    Andrej Karpathy says the goal is to maximize how long an agent runs without your intervention. But there's a false summit most teams hit first: individual speed goes up while system speed stalls, your laptop roars under four parallel Gradle builds, and review queues back up. Kaushik and Iury trace the full arc — from local multitasking to cloud-hosted async work to fully autonomous agents that fire on repo events and put PRs in your inbox.

    Show Notes
    • Andrej Karpathy on agents and token throughput - NoPriors podcast — maximize agent runtime, not token burn
    • Cursor Agent Mode - multi-agent interface - introduced the multi-agent board as a new paradigm for local parallel agents
    • Google Antigravity - Agent Manager interface
    • Claude Code Agent Teams - spawn
      sub-agents from a main orchestrator, with tmux pane integration
    • Git worktrees - /reddit
    Remote Background Agents in the cloud
    • Google Jules - hosted GitHub-connected agent,
      proposes a plan, edits code, runs tests, opens a PR
    • Cursor Cloud Agents - remote agents
      that clone your repo in the cloud and work in parallel
    • OpenAI Codex - cloud software
      engineering agent for parallel tasks
    • Claude Code on the web - cloud-hosted Claude Code
      sessions decoupled from your local machine
    Building trust
    • Episode 307 - Harness Engineering - the earlier episode on
      shaping agent environments — and why this ceiling exists

    26 min
  • 308 - How Image Diffusion Models Work - the 20 minute explainer
    2026/03/24

    You already know how LLMs work from our popular 20-minute explainer. Now we take it to images. What does Michelangelo have to do with stable diffusion? More than you'd think. Walk away knowing how image generation actually works — and what it has in common with the text models you already understand.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    • Episode 303 - How LLMs work in 20 minutes - text generation
    • VAE -
      Variational Autoencoder
    • RGB Color model - wikipedia
    • Word2Vec technique - wikipedia
      • Efficient Estimation of Word Representation -
        original Word2Vec paper by Mikolov et al.
    • High-Resolution Image Synthesis with Latent Diffusion Models -
      Rombach et al. (2022) — the paper behind Stable Diffusion
    • Image Training data
      • LAION-5B - 5 billion image-text pairs
        scraped from the web, used to train many image generation models
      • WebLI - Google's internal image-text
        dataset
    • Michelangelo
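    The "chip away everything that isn't the statue" intuition maps onto a simple forward process: keep mixing a clean image with Gaussian noise until nothing of the signal remains, then learn to run that backwards. Below is a minimal numeric sketch of the standard DDPM noising formula; the linear beta schedule is illustrative, not any particular model's real one.

```python
import math
import random

def forward_diffuse(x0, t, betas):
    """Noise a clean sample x0 up to step t (the 'forward' process).

    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta).
    """
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    signal, noise = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    # Each pixel keeps a shrinking fraction of signal plus fresh Gaussian noise.
    return [signal * v + noise * random.gauss(0.0, 1.0) for v in x0]

# Illustrative linear schedule: by the final step alpha_bar is nearly zero,
# which is why generation can start from pure noise and denoise backwards.
betas = [0.0001 + i * (0.02 - 0.0001) / 999 for i in range(1000)]
```

    Stable Diffusion runs this process in the VAE's latent space rather than on raw RGB pixels, which is what the Rombach et al. paper above contributes.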

    25 min
  • 307 - Harness Engineering - the hard part of AI coding
    2026/03/17

    The hard part of AI coding isn't generating code — it's controlling quality, safety, and drift. Kaushik and Iury break down harness engineering: the five pillars for shaping an agent's environment and what it looks like when teams build custom harnesses from scratch.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    Why it matters
    • Harness Engineering -
      OpenAI's post on building their Codex codebase (~1M lines of code, 1,500 PRs
      merged, zero manually written)
    Shaping the harness
    • The Feed's Lost and Found -
      Iury's newsletter consolidating harness engineering themes
    1. Agent legibility
    2. Closed feedback loops
    3. Persistent memory
    4. Entropy control
    5. Blast radius controls
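    Pillar 5 is the easiest to picture in code. The sketch below is purely illustrative, not from the episode or any real harness: a hypothetical allowlist guard a harness might put between an agent and the shell, so a destructive command is refused before it ever executes.

```python
import shlex
import subprocess

# Hypothetical allowlist: commands the agent may run unattended.
SAFE_COMMANDS = {"ls", "cat", "git", "pytest"}

def run_guarded(command: str) -> str:
    """Refuse anything outside the allowlist before it ever executes."""
    argv = shlex.split(command)
    if not argv or argv[0] not in SAFE_COMMANDS:
        raise PermissionError(f"blocked by harness: {command!r}")
    # Only allowlisted binaries reach this point; capture output for the agent.
    result = subprocess.run(argv, capture_output=True, text=True, timeout=60)
    return result.stdout
```

    Real harnesses add more layers (sandboxes, network policy, scoped credentials), but the shape is the same: the agent proposes, the harness disposes.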
    Building the harness
    • Minions: Stripe's one-shot, end-to-end coding agents -
      Stripe forked Goose to build custom agents for their codebase
    • Goose - open-source coding agent from Block
    • Superpowers by Jesse Vincent - skills
      that enforce a proper software engineering process
    • Open Code - open-source coding agent you can fork and
      customize
    Other resources
    • Agent Harness Glossary -
      Latent Patterns
    • Towards self-driving codebases -
      Cursor
    • Agentic Workflows -
      GitHub Next
    • Future of Software Development -
      ThoughtWorks

    30 min
  • 306 - Keeping your agent instructions in sync and effective
    2026/03/10

    AGENTS.md is becoming the common language for AI coding tools, but keeping repo
    rules, personal rules, and tool-specific files in sync is still messy. In this
    episode, Kaushik and Iury break down the sync problem, compare their own setups,
    and unpack what the latest AGENTS.md research actually says.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    The sync problem
    • AGENTS.md - Official spec
    • Custom instructions with AGENTS.md -
      OpenAI
    • Keep your AGENTS.md in sync - Kaushik's post
    • Rulesync - What Iury uses
    • Tweet by Ryan Carson on Claude frustrations
    Other links
    • Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?
    • Harness engineering - Check the section about using AGENTS.md as a table of contents
    • OpenCode

    23 min
  • 305 - Subagents explained - What they are, when (not) to spawn them
    2026/02/17

    Subagents are becoming a core primitive for serious AI-assisted development. In this episode, Kaushik and Iury disambiguate "agent" terminology, unpack plan mode vs subagents, and explain how parallel, scoped workers improve research quality without polluting the main thread.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    Resources & Documentation
    Official Documentation

    Agents, Modes, Subagents: official harness docs

    • Claude Code Subagents
    • Gemini CLI Subagents
    • Opencode Subagents
    • Cursor Subagents
    • Antigravity Agent Modes
    • AOE Scouting
    Research Papers & Articles
    • Introducing Claude Opus 4.5
    • Deep-Research Agents Paper
    • Post: GPT-5 System Card by Alex
      Xu
    • Self-Driving Codebases Blog -
      multi-agent systems making 1,000 commits/hour
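    The core idea, parallel scoped workers that return only a compact result to the orchestrator, can be sketched with plain threads. Names here are hypothetical; a real harness would run a scoped LLM session where research() stands.

```python
from concurrent.futures import ThreadPoolExecutor

def research(question: str) -> str:
    # Stand-in for a subagent: it gets one narrow brief and its own context.
    return f"summary of: {question}"

def fan_out(questions: list[str]) -> list[str]:
    # The orchestrator only ever sees each worker's compact summary,
    # not the tokens the worker burned producing it: that's how subagents
    # keep research out of the main thread's context window.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(research, questions))
```

    The thread pool is just an analogy for the scheduling; the context isolation, not the parallelism, is what makes subagents valuable.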

    27 min
  • 304 - Agent Skills - when to use them and why they matter
    2026/02/09

    Agent Skills look simple, but they are one of the most powerful building blocks
    in modern AI coding workflows. In this episode, Kaushik and Iury break down when
    to use skills, how progressive disclosure works, and how skills compare with
    commands, instructions, and MCPs.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    Main References
    • Progressive Disclosure -
      how skills are loaded into context
    • Agent Skills Open Specification
    • AAIF (Agentic AI Foundation) -
      Linux Foundation initiative for AI interoperability
    • Needle in a Haystack Problem - original
      "Lost in the Middle" paper
    • Agent-Invokable vs User-Invokable -
      merging skills and commands in Claude Code
    Creating Skills
    • Skill Creator -
      Anthropic's skill for creating new agent skills
    • Claude Code frontmatter reference
      • see model: * & context: fork
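    For flavor, here is what a skill file might look like. This is a hypothetical skill, not one from the episode: the name and description in the frontmatter are what the agent loads up front (the progressive-disclosure step), and the body is read only when the skill is actually invoked. Exact frontmatter fields vary by harness, so check the reference linked above.

```markdown
---
name: changelog-writer
description: Draft a CHANGELOG entry from recent commits. Use when the
  user asks for release notes or a changelog update.
---

# Changelog Writer

1. Run `git log` since the last tag to collect commits.
2. Group commits into Added / Fixed / Changed.
3. Write the entry in Keep a Changelog style and show it for review.
```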
    Using other Skills
    • Anthropic Skills GitHub Repository -
      official collection of Claude skills and examples
    • Clawdhub - Clawdbot's skill hub. All versions are
      archived here
    • SKILLS.sh - Vercel's skills hub
    Warnings before installing random skills

    [!warning] Don't install from hubs blindly.

    Inspect the repo code before adding anything to your agent.

    • Prompt Injection Attacks -
      OWASP guide to LLM prompt injection vulnerabilities
    • OpenClaw <- MoltBot <- Clawdbot
    • OpenClaw Security Analysis -
      analysis of prompt injection risks in open agent frameworks
    • Malware found in a top-downloaded Clawhub skill -
      incident report thread
    Additional resources
    • Few-Shot Prompting -
      improving outputs with examples
    • .agents/skills - proposal
      to standardize the skills folder path
    • Vercel: AGENTS.md vs Skills -
      comparison of agent instruction methods

    27 min
  • 303 - How LLMs Work - the 20 minute explainer
    2026/02/02

    Ever get asked "how do LLMs work?" at a party and freeze? We walk through the full pipeline: tokenization, embeddings, inference — so you understand it well enough to explain it. Walk away with a mental model that you can use for your next dinner party.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    Words -> Tokens:
    • OpenAI Tokenizer visualizer -
      Visualize how text becomes tokens
    Tokens -> Embeddings:
    • RGB Color model - wikipedia
    • Word2Vec technique - wikipedia
      • Efficient Estimation of Word Representation -
        original Word2Vec paper by Mikolov et al.
    Embeddings -> Inference:
    • Word embedding
    • Temperature, Top-k, Top-p sampling
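    Those last knobs are easy to demystify in code. A toy sketch of temperature scaling plus top-k filtering over raw logits, illustrative only and not any particular model's implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None):
    """Pick the next token id from raw logits.

    temperature < 1 sharpens the distribution, > 1 flattens it;
    top_k keeps only the k highest-scoring tokens before sampling.
    """
    # Temperature scaling: divide logits before the softmax.
    scaled = [l / temperature for l in logits]

    # Top-k filtering: everything outside the k best logits gets -inf,
    # which becomes probability zero after the softmax.
    if top_k is not None:
        cutoff = sorted(scaled, reverse=True)[min(top_k, len(scaled)) - 1]
        scaled = [l if l >= cutoff else float("-inf") for l in scaled]

    # Softmax (subtract the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw one token id according to the resulting distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

    Top-p works the same way except the cutoff is "smallest set of tokens whose probabilities sum to p" rather than a fixed count.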

    26 min