Episodes

  • Graph Neural Networks: Learning from Connections, Not Just Data
    2025/09/30

    This episode breaks down what graph neural networks (GNNs) are and why they matter. You’ll learn how GNNs use nodes and edges to represent relationships and how message passing lets models make sense of social, biological, and networked data. We’ll also cover recent advances such as PGNN for multi-modal graphs and common pitfalls such as scalability and over-smoothing (a minimal message-passing sketch follows this entry).

    31 min
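
    To make the message-passing idea concrete, here is a minimal sketch in plain NumPy, assuming mean aggregation and a single layer; the toy graph and variable names are ours, not from the episode.

    import numpy as np

    # Toy graph: 4 nodes, undirected edges stored as (src, dst) pairs.
    edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
    num_nodes, dim = 4, 8

    rng = np.random.default_rng(0)
    h = rng.normal(size=(num_nodes, dim))    # initial node features
    W_self = rng.normal(size=(dim, dim))     # transform for the node's own state
    W_nbr = rng.normal(size=(dim, dim))      # transform for the aggregated message

    # Build neighbor lists from the edge list (edges are undirected).
    neighbors = {i: [] for i in range(num_nodes)}
    for s, d in edges:
        neighbors[s].append(d)
        neighbors[d].append(s)

    def message_passing_step(h):
        # One GNN layer: each node averages its neighbors' features,
        # then combines that message with its own representation.
        h_next = np.zeros_like(h)
        for i in range(num_nodes):
            msg = h[neighbors[i]].mean(axis=0)                 # aggregate
            h_next[i] = np.tanh(h[i] @ W_self + msg @ W_nbr)   # update
        return h_next

    h = message_passing_step(h)   # stacking many such layers invites over-smoothing

    Real GNN libraries vectorize this over sparse adjacency structures; the loop form just makes the aggregate-then-update pattern visible.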
  • Neuro-Symbolic AI: Combining Learning With Logic
    2025/09/16

    In this episode, we explain what neuro-symbolic AI is and why it matters. You’ll learn how neural networks handle patterns, how symbolic systems handle rules, and how combining the two can help models reason more reliably (a toy sketch of the pattern follows below). We also cover real examples where this approach is already being applied in assistants and robotics, showing how it could make AI systems more trustworthy and useful.

    25 min
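
    As a toy illustration of the neural-plus-symbolic split, here is a sketch where a stubbed classifier proposes labels and hand-written rules veto inconsistent ones; every name, score, and rule here is invented for the example.

    def neural_scores(image_id):
        # Stand-in for a trained classifier's softmax output (the "pattern" side).
        return {"cat": 0.55, "dog": 0.40, "fish": 0.05}

    facts = {"habitat": "underwater"}   # symbolic knowledge about the scene

    def consistent(label, facts):
        # The "rules" side: land animals cannot appear in an underwater scene.
        land_animals = {"cat", "dog"}
        return not (facts.get("habitat") == "underwater" and label in land_animals)

    def predict(image_id):
        scores = neural_scores(image_id)
        # Keep only labels the symbolic layer accepts, then take the best.
        allowed = {l: s for l, s in scores.items() if consistent(l, facts)}
        return max(allowed, key=allowed.get)

    print(predict("img-001"))   # -> "fish": the logic layer overrides raw confidence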
  • LLMs in Chip Design: How AI Is Entering the Hardware Workflow
    2025/09/02

    In this episode, we look at how large language models are being used in chip and hardware design. We break down what LLM-aided design actually means: how models can generate HDL code, assist with testbench creation, and even support formal verification (an illustrative generate-then-check sketch appears after this entry). You'll also hear about real-world tools like ChatEDA and how companies are starting to use AI in their silicon development workflows.

    20 min
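
    As a rough sketch of the generate-then-verify loop, consider the following; `llm_complete` is a hypothetical stand-in for whatever model client you use, and nothing here reflects ChatEDA's actual interface.

    PROMPT = (
        "Write synthesizable Verilog for an N-bit counter with "
        "synchronous reset and an enable input. Output only the module."
    )

    def llm_complete(prompt):
        # Hypothetical stand-in for a real model client; returns a canned
        # answer here so the sketch runs end to end.
        return "module counter #(parameter N = 8) (/* ports */); endmodule"

    def generate_hdl():
        rtl = llm_complete(PROMPT)
        # A trivial sanity gate: in a real flow, generated RTL still goes
        # through lint, simulation against a testbench, and formal checks.
        assert "module" in rtl and "endmodule" in rtl, "malformed RTL"
        return rtl

    print(generate_hdl())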
  • How Embeddings and Vector Databases Power Generative AI
    2025/08/19

    This episode explains how embedding models turn language into numerical vectors and how vector databases and search libraries such as Pinecone, FAISS, and Weaviate store and search those vectors efficiently. You'll learn how this system enables GenAI models to retrieve relevant information in real time, power RAG pipelines, and scale up tool-augmented LLM workflows (see the retrieval sketch below).

    19 min
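
    Here is a minimal sketch of the retrieval step, with random vectors standing in for a real embedding model; production systems swap the brute-force scan for an approximate nearest-neighbor index, which is what FAISS and the hosted databases provide.

    import numpy as np

    # Random vectors stand in for a trained embedding model's output.
    rng = np.random.default_rng(42)
    docs = ["how attention works", "intro to GNNs", "what is MCP"]
    doc_vecs = rng.normal(size=(len(docs), 384))
    doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)   # unit length

    def search(query_vec, k=2):
        # Brute-force cosine search; vector databases replace this scan
        # with approximate nearest-neighbor indexes at scale.
        query_vec = query_vec / np.linalg.norm(query_vec)
        sims = doc_vecs @ query_vec               # cosine similarity (unit vectors)
        top = np.argsort(-sims)[:k]
        return [(docs[i], float(sims[i])) for i in top]

    print(search(rng.normal(size=384)))   # top-k hits feed the LLM in a RAG pipeline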
  • Agentic AI: What Happens When Models Start Acting
    2025/08/05

    In this episode, we explore agentic AI systems built not just to predict or classify but to plan, reason, and act autonomously. We break down what makes these models different, how they use tools, memory, and feedback to complete tasks, and why they represent the next step beyond traditional LLMs. You’ll hear how concepts like action loops, world modeling, and autonomous decision-making are shaping everything from research tools to enterprise automation (a bare-bones action loop is sketched after this entry).

    19 min
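
    The action loop reduces to a plan-act-observe cycle. Below is a bare-bones sketch: `plan` stands in for an LLM deciding the next step, and the tools are made up for the example; no real framework's API is implied.

    TOOLS = {
        "search": lambda q: f"results for {q!r}",
        "summarize": lambda text: text[:20] + "...",
    }

    def plan(goal, memory):
        # Stand-in for an LLM planner choosing the next (tool, argument);
        # returns None once it judges the goal complete.
        if not memory:
            return ("search", goal)
        if len(memory) == 1:
            return ("summarize", memory[0])
        return None

    def run_agent(goal):
        memory = []                               # feedback from earlier steps
        while (step := plan(goal, memory)) is not None:
            tool, arg = step
            observation = TOOLS[tool](arg)        # act, then observe
            memory.append(observation)            # feed results back into planning
        return memory

    print(run_agent("state of agentic AI"))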
  • Understanding Attention: Why Transformers Actually Work
    2025/07/22

    This episode unpacks the attention mechanism at the heart of Transformer models. We explain how self-attention helps models weigh different parts of the input, how it scales in multi-head form, and what makes it different from older architectures such as RNNs and CNNs. You’ll walk away with an intuitive grasp of the query, key, and value roles and of how attention layers help with context handling in language, vision, and beyond (a worked single-head example follows).

    20 min
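
    For reference, here is single-head scaled dot-product attention written out in NumPy; the shapes are toy-sized, and multi-head attention runs several such heads in parallel on split projections.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # subtract max for stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        # Queries ask "what am I looking for?", keys advertise "what do I
        # contain?", and values carry the content that gets mixed together.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)           # (seq, seq) similarity logits
        weights = softmax(scores, axis=-1)        # each row sums to 1
        return weights @ V                        # weighted mix of value vectors

    rng = np.random.default_rng(0)
    seq_len, d_model = 5, 16
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16)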
  • Markov Chains, Monte Carlo, and HMC: A Deep Dive
    2025/07/08

    In this episode, we break down the essentials of Markov chains, Monte Carlo simulation, and Markov chain Monte Carlo (MCMC) methods. We explain key ideas like memoryless processes, stationary distributions, and how random sampling helps model uncertainty (a small sampler sketch follows this entry). We also explore gradient-based techniques like Hamiltonian Monte Carlo, highlighting their role in modern statistical modeling. Ideal for anyone curious about the mechanics behind simulations and complex probabilistic models.

    24 min
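
    As a concrete anchor for the MCMC part, here is a random-walk Metropolis sampler targeting a standard normal. HMC differs in that it uses gradients of the log density to propose long, informed moves rather than blind Gaussian steps.

    import numpy as np

    def metropolis(log_target, n_samples=5000, step=0.5, x0=0.0):
        # Random-walk Metropolis: propose a Gaussian jump, accept with
        # probability min(1, target(x') / target(x)). The chain depends only
        # on the current state (memoryless/Markov), and the target is its
        # stationary distribution.
        rng = np.random.default_rng(0)
        x, samples = x0, []
        for _ in range(n_samples):
            proposal = x + step * rng.normal()
            if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
                x = proposal                      # accept; otherwise keep x
            samples.append(x)
        return np.array(samples)

    # An unnormalized log density is enough: the constant cancels in the ratio.
    samples = metropolis(lambda x: -0.5 * x * x)
    print(samples.mean(), samples.std())          # roughly 0 and 1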
  • The Model Context Protocol (MCP): Making LLMs Actually Useful
    2025/06/24

    In this episode, we dive into the Model Context Protocol, or MCP. It’s a new standard that helps large language models connect with real-world tools, data, and APIs in a more structured way (the message shape is sketched below). We’ll break down how MCP works, why it matters for building smarter AI agents, and what it means for developers working on enterprise-grade AI systems.

    17 min
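
    MCP messages are JSON-RPC 2.0, so a tool invocation is just a structured request. The sketch below shows the general shape; the tool name and arguments are made up, and field details may vary across protocol revisions.

    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",                   # MCP's tool-invocation method
        "params": {
            "name": "get_weather",                # hypothetical tool exposed by a server
            "arguments": {"city": "Tokyo"},
        },
    }
    # Serialized and sent to an MCP server (e.g. over stdio or HTTP);
    # the server replies with a JSON-RPC result the model can consume.
    print(json.dumps(request, indent=2))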