Episodes

  • AI Agents in Production (part 2)
    2026/02/03
    From reactive chatbots to agents that plan, delegate, and think across extended timescales. We explore Deep Agents and the Recursive Language Models paradigm that's redefining AI in 2026.

    What's the difference between a chatbot and an agent? A chatbot responds, an agent acts.

    In this episode, we go deep on:
    • Deep Agents: systems that plan and delegate like project managers
    • Planning patterns: hierarchical decomposition, reactive planning, plan repair
    • Recursive Language Models: context folding that enables multi-day tasks without degradation
    • Production architectures: how Manus and Claude Code orchestrate complex agents
    • The future: autonomous, collaborative, and metacognitive agents
    The future of AI isn't bigger models; it's intelligent architecture around them.

    This episode includes AI-generated content.
    15 min
  • AI Agents in Production (part 1)
    2026/01/28
    Why does 60% of an AI agent's success have nothing to do with the model? In this episode, we explore context engineering: the hidden discipline that separates impressive demos from systems that actually work in production.

    Ever built an AI agent that was brilliant in testing but failed miserably in production? The problem isn't the model. It's the context.

    In this episode, we cover:
    • Context rot: why agents "forget" and degrade over time
    • Context blindness: when the agent has information but can't use it
    • Context hallucination: the danger of plausible inventions
    • Memory architecture: hot, warm, and cold memory for robust agents
    • Production patterns: what Manus and Claude Code do behind the scenes
    If you're building with AI, this episode will change your approach.
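    The hot/warm/cold memory tiers mentioned above can be sketched as three stores with eviction flowing between them. The tier names follow the episode; the class name, sizes, and eviction policy are illustrative assumptions, not any framework's API.

```python
# Illustrative hot/warm/cold memory tiers with cascading eviction.
from collections import deque

class TieredMemory:
    def __init__(self, hot_size: int = 3, warm_size: int = 5):
        self.hot = deque(maxlen=hot_size)    # recent turns: always in the prompt
        self.warm = deque(maxlen=warm_size)  # older turns: retrieved on demand
        self.cold: list[str] = []            # archive: searched explicitly

    def add(self, message: str) -> None:
        if len(self.hot) == self.hot.maxlen:
            evicted = self.hot[0]            # capture before the deque drops it
            if len(self.warm) == self.warm.maxlen:
                self.cold.append(self.warm[0])
            self.warm.append(evicted)
        self.hot.append(message)

    def context(self) -> list[str]:
        """Only the hot tier enters the prompt, keeping it small and fresh."""
        return list(self.hot)

m = TieredMemory(hot_size=2, warm_size=2)
for turn in ["t1", "t2", "t3", "t4", "t5"]:
    m.add(turn)
print(m.context(), list(m.warm), m.cold)  # ['t4', 't5'] ['t2', 't3'] ['t1']
```

    The prompt only ever sees the hot tier; warm and cold content has to be summarized or retrieved back in, which is what keeps long-running agents from rotting.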

    This episode includes AI-generated content.
    13 min
  • Context Rot
    2025/11/21
    This episode of Rooting exposes the hidden enemy of modern agent architectures: context rot. We explore why long context windows aren’t a silver bullet, how attention budgets degrade over time, and the four ways rot shows up in real systems—poisoning, distraction, confusion, and clash. Listeners learn why million-token prompts still fail, why observability must extend into the model’s working memory, and how emerging strategies such as isolation, selective retrieval, compression, external memory, semantic chunking, and standards such as MCP are reshaping how robust agents are built. This is a practical, technical deep-dive for architects and developers who want their AI systems to survive contact with reality.
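    One of the mitigation strategies listed, selective retrieval, can be sketched as scoring chunks against the query and packing the best ones into a fixed budget. This is a minimal sketch: keyword overlap stands in for embedding similarity, and word counts stand in for tokens.

```python
# Selective retrieval sketch: admit only the highest-scoring chunks
# that fit the context budget, instead of stuffing everything in.
def select_context(query: str, chunks: list[str], budget: int) -> list[str]:
    q = set(query.lower().split())
    by_score = sorted(chunks,
                      key=lambda c: len(q & set(c.lower().split())),
                      reverse=True)
    picked, used = [], 0
    for chunk in by_score:
        cost = len(chunk.split())  # stand-in for a token count
        if used + cost <= budget:
            picked.append(chunk)
            used += cost
    return picked

chunks = ["the cache layer stores embeddings",
          "lunch menu for friday",
          "cache invalidation rules"]
print(select_context("cache embeddings", chunks, budget=8))
# ['the cache layer stores embeddings', 'cache invalidation rules']
```

    The irrelevant chunk never reaches the model, which is the whole defense against distraction and confusion: the attention budget is spent only on material that earned its place.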

    This episode includes AI-generated content.
    23 min
  • Formal Logic - pt. 02
    2025/09/03
    We are moving past the limitations of probabilistic AI by embracing formal logic and verification. This approach allows us to mathematically prove that our AI systems will behave correctly under specific conditions, providing a new level of trust and reliability essential for critical applications in finance, healthcare, and beyond.
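    The core idea, proving a property for every case rather than sampling it, can be illustrated with exhaustive checking over a finite boolean domain. Real formal verification uses theorem provers or SMT solvers rather than brute force; the rule and names below are purely illustrative.

```python
# Brute-force "verification" over a finite boolean domain: a toy
# stand-in for the formal methods discussed in the episode.
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

def verified(prop, n_vars: int) -> bool:
    """True iff `prop` holds under every truth assignment, i.e. it is
    proved over the whole (finite) input space, not just sampled."""
    return all(prop(*vals) for vals in product([False, True], repeat=n_vars))

# A guardrail-style rule is equivalent to its contrapositive for ALL inputs:
rule = lambda flagged, blocked: (implies(flagged, blocked)
                                 == implies(not blocked, not flagged))
print(verified(rule, 2))  # True
```

    The contrast with probabilistic AI is the quantifier: a test suite says "it worked on these inputs", a proof says "it works on all of them".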

    This is the second and final part of the episode.
    13 min
  • Formal Logic - pt. 01
    2025/08/26
    We are moving past the limitations of probabilistic AI by embracing formal logic and verification. This approach allows us to mathematically prove that our AI systems will behave correctly under specific conditions, providing a new level of trust and reliability essential for critical applications in finance, healthcare, and beyond.

    This is the first part of a two-part episode.
    19 min
  • Context Engineering
    2025/08/19
    We go beyond prompt engineering to focus on context engineering, a systematic approach to building production-grade AI. By treating the agent's context as a structured, observable pipeline, we enable teams to create robust, cost-effective, and scalable AI systems that deliver real business value.
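    "Context as a structured, observable pipeline" can be sketched as a chain of stages that each transform a context object and record themselves in a trace. The stage names and fields below are invented for illustration, not taken from any specific framework.

```python
# Context assembly as an observable pipeline: every stage logs itself,
# so you can see exactly how the prompt was built.
def pipeline(stages):
    def run(ctx: dict) -> dict:
        for name, stage in stages:
            ctx = stage(ctx)
            ctx.setdefault("trace", []).append(name)  # observability hook
        return ctx
    return run

build_context = pipeline([
    ("system",  lambda c: {**c, "system": "You are a support agent."}),
    ("history", lambda c: {**c, "history": c.get("history", [])[-4:]}),  # trim
    ("tools",   lambda c: {**c, "tools": ["search", "escalate"]}),
])

ctx = build_context({"history": list(range(10))})
print(ctx["trace"])    # ['system', 'history', 'tools']
print(ctx["history"])  # [6, 7, 8, 9]
```

    Because every stage is a named, pure transformation, cost and failures become attributable: the trace tells you which stage trimmed, injected, or bloated the context.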
    17 min
  • Memory
    2025/08/12
    What separates a forgetful chatbot from an AI agent that feels truly smart?

    In this episode, Root unpacks the power of memory in AI—from basic sliding windows to advanced retrieval-augmented and memory-augmented strategies inspired by neuroscience.

    Discover how the right memory architecture enables personalization, learning, and adaptability in your agents.

    Whether you’re building customer support bots, coding copilots, or next-gen assistants, you’ll get practical examples, production tips, and a glimpse into the future of multi-modal, real-time memory in AI.

    Tune in and learn how to make every interaction count!
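    The "basic sliding window" the episode starts from is the simplest strategy of all: keep only the last n turns. A minimal sketch, with names invented for illustration:

```python
# Sliding-window memory: cheap and simple, but everything older than
# the window is forgotten -- the gap richer memory architectures fill.
def sliding_window(turns: list[str], n: int) -> list[str]:
    return turns[-n:] if n > 0 else []

history = ["hi", "I'm Ada", "what's my name?", "you're Ada", "thanks"]
print(sliding_window(history, 3))
```

    With a window of 3, the agent has already forgotten the user's name, which is exactly the failure mode that motivates the retrieval-augmented and memory-augmented strategies covered in the episode.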
    17 min
  • A Bigger Context
    2025/08/05
    Dive into the evolving world of AI context windows with Rooting. This episode unpacks the surprising paradox of 'more memory' in AI models: exploring the immense benefits of larger context, the hidden limitations like reasoning degradation, and the cutting-edge engineering techniques—from positional encoding tricks to novel reasoning architectures—that are shaping how AI truly understands and remembers. Essential listening for solution architects, software developers, and data scientists navigating the complexities of large language models.
    29 min