Episodes

  • Conditional Intelligence: Inside the Mixture of Experts architecture
    2025/10/07

    What if not every part of an AI model needed to think at once? In this episode, we unpack Mixture of Experts (MoE), the architecture behind efficient large language models like Mixtral. From conditional computation and sparse activation to routing, load balancing, and the fight against router collapse, we explore how MoE breaks the old link between model size and compute. As scaling hits physical and economic limits, could selective intelligence be the next leap toward general intelligence?
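
    The routing idea is compact enough to sketch. Below is a minimal, illustrative top-k gate in PyTorch with a Switch-Transformer-style load-balancing loss; the class name TopKRouter and the exact loss form are assumptions for illustration, not Mixtral's actual implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TopKRouter(nn.Module):
            """Sketch of conditional computation: each token activates k of n experts."""

            def __init__(self, d_model: int, n_experts: int, k: int = 2):
                super().__init__()
                self.gate = nn.Linear(d_model, n_experts, bias=False)
                self.n_experts, self.k = n_experts, k

            def forward(self, x: torch.Tensor):
                # x: (tokens, d_model) -> softmax routing scores over experts
                probs = F.softmax(self.gate(x), dim=-1)        # (tokens, n_experts)
                topk_p, topk_idx = probs.topk(self.k, dim=-1)  # sparse activation
                weights = topk_p / topk_p.sum(-1, keepdim=True)

                # Auxiliary load-balancing loss (Switch Transformer style):
                # pushes the router toward uniform expert usage, the standard
                # counter to router collapse.
                dispatch = F.one_hot(topk_idx, self.n_experts).float().sum(1).mean(0)
                importance = probs.mean(dim=0)
                aux_loss = self.n_experts * (dispatch * importance).sum()
                return topk_idx, weights, aux_loss

    Only the k selected expert networks run for each token, so per-token compute scales with k while total parameter count scales with n; that separation is what lets MoE models grow in size without a matching growth in compute.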

    Sources:

    • What is mixture of experts? (IBM)
    • Applying Mixture of Experts in LLM Architectures (Nvidia)
    • A 2025 Guide to Mixture-of-Experts for Lean LLMs
    • A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications
    14 min
  • AI at Work, AI at Home: How Do We Really Use LLMs Each Day?
    2025/09/21

    How are people really using AI, at home, at work, and across the globe? In this episode of The Second Brain AI Podcast, we dive into two reports from OpenAI and Anthropic that reveal the surprising split between consumer and enterprise use.

    From billions in hidden consumer surplus to the rise of automation vs. augmentation, and from emerging markets skipping skill gaps to enterprises wrestling with “context bottlenecks,” we explore what these usage patterns mean for productivity, global inequality, and the future of knowledge work.

    Sources:

    • Anthropic Economic Index report: Uneven geographic and enterprise AI adoption
    • How people are using ChatGPT
    • Building more helpful ChatGPT experiences for everyone
    16 min
  • Mind the Context: The Silent Force Shaping AI Decisions
    2025/07/16

    In this episode, we dive into the emerging discipline of context engineering: the practice of curating and managing the information that AI systems rely on to think, reason, and act.

    We unpack why context engineering is becoming important, especially as the use of AI shifts from static chatbots to dynamic, multi-step agents. You'll learn why hallucinations often stem from poor context rather than weak models, and how real-world systems like McKinsey's "Lilli" are solving this problem at scale.

    From strategies like write, select, compress, and isolate to key challenges around data fragmentation and semantic unification, this episode breaks down how to design smarter, more reliable AI by managing information, not just prompts.
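
    As a toy illustration of those four strategies, here is a hedged Python sketch of a context pipeline; the ContextManager class and its methods are hypothetical stand-ins (keyword overlap in place of real retrieval, truncation in place of a summarizer), not an implementation from any of the cited sources.

        from dataclasses import dataclass, field

        @dataclass
        class ContextManager:
            """Toy sketch of the write / select / compress / isolate strategies."""
            scratchpad: list[str] = field(default_factory=list)  # notes kept outside the prompt
            budget: int = 2000  # rough character budget for the model's window

            def write(self, note: str) -> None:
                """Write: persist intermediate results outside the context window."""
                self.scratchpad.append(note)

            def select(self, query: str) -> list[str]:
                """Select: pull in only the notes relevant to the current step."""
                terms = set(query.lower().split())
                return [n for n in self.scratchpad if terms & set(n.lower().split())]

            def compress(self, notes: list[str]) -> str:
                """Compress: fit the selection into the budget (a summarizer would go here)."""
                return "\n".join(notes)[: self.budget]

            def isolate(self, task: str, notes: list[str]) -> str:
                """Isolate: hand each sub-agent or step only its own slice of context."""
                return f"Task: {task}\nContext:\n{self.compress(notes)}"

    The point of the sketch is the shape, not the mechanics: reliability comes from controlling what enters the window at each step, which is exactly the "managing information, not just prompts" framing above.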

    Sources:

    • "Beyond Prompts: The Rise of Context Engineering​​" by Rahul Singh
    • "The rise of context engineering" by LangChain
    • "Context Engineering is the New Vibe Coding" by Analytics India Magazine
    • "Why Context Engineering Matters More Than Prompt Engineering" by TowardsAI
    23 min
  • The SLM Advantage: Rethinking Agent Design with SLMs
    2025/06/29

    In this episode, we explore why Small Language Models (SLMs) are emerging as powerful tools for building agentic AI. From lower costs to smarter design choices, we unpack what makes SLMs uniquely suited for the future of AI agents.

    Source:

    • "Small Language Models are the Future of Agentic AI" by NVIDIA Research
    20 min