
Fragmented - AI Developer Podcast

By: Kaushik Gopal, Iury Souza

Overview

Fragmented is an AI developer podcast for engineers who want to go beyond vibe coding and ship real software. We cover AI-assisted development the way working engineers actually use it: prompting strategies, code review, testing, debugging, workflows, and building production-grade software with AI tools. No hype. No "I shipped a SaaS in a weekend" stories. Just tactics that work.

Hosted by Kaushik Gopal and Iury Souza, software engineers using AI daily to build and ship real products. From vibe coding to software engineering, one episode at a time. Our goal: help you use AI to become a better engineer, not be replaced by one.

2026 The Fragmented Podcast · Science
Episodes
  • 310 - Mitchell Hashimoto on Ghostty & His Agentic Coding Workflow
    2026/04/14
    Mitchell Hashimoto co-founded HashiCorp, built some of the most impressive DevOps tools like Vagrant and Terraform, sold the company to IBM, and then built a terminal. Ghostty is now where a huge chunk of agentic coding actually happens. Mitchell was an AI skeptic. We walk through his six-step adoption framework and the workflows he uses day to day: warm-start research, Hail Mary prompts across twenty GitHub issues, and knowing when to let the agent slam dunk it.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    HashiCorp
    • Vagrant
    • Terraform
    • IBM acquires HashiCorp
    Ghostty
    • Ghostty - Mitchell's fast, native terminal built for platform integration across Mac and Linux
    • Terminal shell
    • SSH - secure shell
    • PTY - pseudoterminals
    Terminal Multiplexers
    • tmux - the most popular open-source one
    • XTGETTCAP by xterm
    • libghostty - the cross-platform terminal emulation library that powers Ghostty's core
    • xterm-js - powers the terminal in apps like VSCode and the cloud
    • JediTerm - IntelliJ's embedded terminal
    • Ghostty is now a non-profit
    • cmux - native macOS terminal multiplexer built on libghostty, a fork Mitchell champions
    • Free Software Definition - the 4 essential freedoms:
      • The freedom to run the program as you wish, for any purpose.
      • The freedom to study how the program works, and change it to make it do what you wish.
      • The freedom to redistribute copies so you can help others.
      • The freedom to distribute copies of your modified versions to others.
    • Mitchell's tweet on unsolicited PRs and transfer of ownership
    The AI Adoption Journey
    • My AI Adoption Journey - Mitchell's blog post outlining his five-step framework
    • Step 1: Drop the Chatbot
      • Episode 301 - AI Coding Ladder - different stages of AI adoption
    • Step 2: Reproduce Your Own Work
    • Step 3: End-of-Day Agents
      • OpenAI Deep Research - kick off research tasks for a "warm start" the next morning
      • Spine AI research - deep research tool for longer, hour-long analysis tasks
    • Step 4: Outsource the Slam Dunks
      • Claude status hooks - warcraft peons
      • Conductor
    • Step 5: Engineer the Harness
      • Episode 307 - Harness Engineering - Fragmented's deep dive on harness engineering, heavily inspired by Mitchell's post
    • Step 6: Always have an Agent running
      • Peter Steinberger
      • Codex plugin for Claude Code
    Get in touch

    We'd love to hear from you. Email is the best way to reach us or you can check our contact page for other ways.

    We want to hear all the feedback: what's working, what's not, topics you'd like to hear more on.

    • Contact us
    • Newsletter
    • Youtube
    • Website
    Co-hosts:
    • Kaushik Gopal
    • Iury Souza

    [!fyi] We transitioned from Android development to AI starting with
    Ep. #300. Listen to that episode for the full story behind our new direction.
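    The "Claude status hooks" item refers to Claude Code's hooks feature, which can fire a shell command on agent lifecycle events, e.g. to play a Warcraft-peon-style "work complete" sound when a long task finishes. A minimal sketch of a `.claude/settings.json` fragment; the event name follows Claude Code's hook schema as we understand it, and the sound-file path is made up:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "afplay ~/sounds/peon-work-complete.aiff"
          }
        ]
      }
    ]
  }
}
```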
    1 hr
  • 309 - Background Agents
    2026/04/01

    Andrej Karpathy says the goal is to maximize how long an agent runs without your intervention. But there's a false summit most teams hit first: individual speed goes up while system speed stalls, your laptop roars under four parallel Gradle builds, and review queues back up. Kaushik and Iury trace the full arc — from local multitasking to cloud-hosted async work to fully autonomous agents that fire on repo events and put PRs in your inbox.

    Show Notes
    • Andrej Karpathy on agents and token throughput - NoPriors podcast — maximize agent runtime, not token burn
    • Cursor Agent Mode - Multiagent interface - introduced the multi-agent board as a new paradigm for local parallel agents
    • Google Antigravity - Agent Manager interface
    • Claude Code Agent Teams - spawn
      sub-agents from a main orchestrator, with tmux pane integration
    • Git worktrees
    Remote Background Agents in the cloud
    • Google Jules - hosted GitHub-connected agent,
      proposes a plan, edits code, runs tests, opens a PR
    • Cursor Cloud Agents - remote agents
      that clone your repo in the cloud and work in parallel
    • OpenAI Codex - cloud software
      engineering agent for parallel tasks
    • Claude Code on the web - cloud-hosted Claude Code
      sessions decoupled from your local machine
    Building trust
    • Episode 307 - Harness Engineering - the earlier episode on
      shaping agent environments — and why this ceiling exists
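    The Git worktrees item above is the standard trick behind local parallel agents: each agent gets its own checkout and branch of the same repo, so edits and builds never collide. A minimal runnable sketch in a throwaway repo (paths and branch names are examples, not from the episode):

```shell
# Demo of git worktrees for parallel agents, in a throwaway repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# Give each local agent its own isolated checkout on its own branch:
git worktree add ../agent-a -b agent/task-a
git worktree add ../agent-b -b agent/task-b

# Each agent now works in its own directory; list all active worktrees:
git worktree list
```

When a task lands, `git worktree remove ../agent-a` cleans up its checkout without touching the main one.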
    Get in touch

    We'd love to hear from you. Email is the best way to reach us or you can check our contact page for other ways.

    We want to hear all the feedback: what's working, what's not, topics you'd like to hear more on.

    • Contact us
    • Newsletter
    • Youtube
    • Website
    Co-hosts:
    • Kaushik Gopal
    • Iury Souza

    [!fyi] We transitioned from Android development to AI starting with
    Ep. #300. Listen to that episode for the full story behind
    our new direction.

    26 min
  • 308 - How Image Diffusion Models Work - the 20 minute explainer
    2026/03/24

    You already know how LLMs work from our popular 20-minute explainer. Now we take it to images. What does Michelangelo have to do with Stable Diffusion? More than you'd think. Walk away knowing how image generation actually works, and what it has in common with the text models you already understand.

    Full shownotes at fragmentedpodcast.com.

    Show Notes
    • Episode 303 - How LLMs work in 20 minutes - text generation
    • VAE -
      Variational Autoencoder
    • RGB Color model - wikipedia
    • Word2Vec technique - wikipedia
      • Efficient Estimation of Word Representation -
        original Word2Vec paper by Mikolov et al.
    • High-Resolution Image Synthesis with Latent Diffusion Models -
      Rombach et al. (2022) — the paper behind Stable Diffusion
    • Image Training data
      • LAION-5B - 5 billion image-text pairs
        scraped from the web, used to train many image generation models
      • WebLI - Google's internal image-text
        dataset
    • Michelangelo
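    The latent-diffusion idea behind this episode (and the Rombach et al. paper above) boils down to Michelangelo's "remove everything that isn't the statue": start from pure noise and repeatedly subtract a learned noise estimate. A toy numpy sketch of that reverse loop; `predict_noise` is a made-up stand-in for the trained U-Net denoiser, and a real pipeline would decode the final latent through the VAE listed above:

```python
import numpy as np

def predict_noise(x, t):
    # Hypothetical stand-in for the trained U-Net: a real model
    # predicts the noise present in latent x at timestep t.
    return 0.1 * x

def reverse_diffusion(shape=(4, 4), steps=10, seed=0):
    """Reverse (denoising) loop: start from pure Gaussian noise in
    latent space and peel off predicted noise step by step."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)      # t = T: pure noise
    for t in range(steps, 0, -1):
        x = x - predict_noise(x, t)     # chip away a bit of noise
    return x  # final latent; a VAE decoder would map this to pixels

latent = reverse_diffusion()
print(latent.shape)
```

Each step shrinks the "noise" left in the latent; real samplers also re-inject a small amount of noise per step, which this sketch omits.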
    Get in touch

    We'd love to hear from you. Email is the
    best way to reach us or you can check our contact page for other
    ways.

    We want to hear all the feedback: what's working, what's not, topics you'd like
    to hear more on.

    • Contact us
    • Newsletter
    • Youtube
    • Website
    Co-hosts:
    • Kaushik Gopal
    • Iury Souza

    [!fyi] We transitioned from Android development to AI starting with
    Ep. #300. Listen to that episode for the full story behind
    our new direction.

    25 min