Cover art for AI Papers Explained

AI Papers Explained

Author: Anass El Basraoui

About this content

AI Papers Explained turns advanced AI research into clear, structured explanations. Each episode explores a major paper such as Attention Is All You Need, BERT, or GPT, showing how these ideas transformed deep learning and natural language processing. Created by Anass El Basraoui, a data scientist who believes that understanding the logic behind modern models is the key to innovation, the show bridges technical depth with accessible storytelling for curious minds.
Episodes
  • RAG: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
    2025/12/21

    Paper: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020)

    Let’s stop guessing. Let’s search.

    LLMs hallucinate. They don't know your private data.
    In this episode, we dive into RAG, the architecture that changed how modern AI systems handle knowledge.

    Instead of relying solely on parametric memory (weights), models can now retrieve external documents to produce factual answers.

    We break down the seminal 2020 paper and explain:
    🔹 Why Large Language Models hallucinate.
    🔹 The "Retriever-Generator" architecture.
    🔹 Why retrieval has become the backbone of Enterprise AI.
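
    The retriever-generator loop can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the corpus, queries, and bag-of-words "embeddings" below are made up for demonstration, whereas the paper pairs a dense DPR retriever with a seq2seq generator.

    ```python
    import re
    import numpy as np

    # Toy three-document corpus standing in for a real retrieval index
    # (all texts here are made-up illustrations).
    CORPUS = [
        "The Eiffel Tower is located in Paris, France.",
        "Mamba is a state space model for long sequences.",
        "CLIP aligns images and text with contrastive learning.",
    ]
    VOCAB = sorted({w for doc in CORPUS for w in re.findall(r"\w+", doc.lower())})

    def embed(text):
        """Toy bag-of-words embedding; RAG proper uses a learned dense encoder."""
        counts = [re.findall(r"\w+", text.lower()).count(w) for w in VOCAB]
        vec = np.array(counts, dtype=float)
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def retrieve(query, k=1):
        """Retriever: rank documents by cosine similarity to the query."""
        q = embed(query)
        scores = [float(embed(doc) @ q) for doc in CORPUS]
        return [CORPUS[i] for i in np.argsort(scores)[::-1][:k]]

    def answer(query):
        """Generator stub: a real system conditions a language model on the retrieved text."""
        context = retrieve(query)[0]
        return f"Based on: {context!r}"

    print(answer("Where is the Eiffel Tower?"))
    ```

    The point of the sketch: the model's answer is grounded in a document fetched at query time, not only in whatever its weights memorised during training.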

    Clear intuition, real examples, and practical insight.

    🎧 Listen now to master the tech behind ChatPDF and Search.

    As we close the year, tell us what you think in the Q&A!

    12 min
  • Mamba: Linear-Time Sequence Modeling with Selective State Spaces
    2025/12/06

    In this episode of AI Papers Explained, we explore Mamba: Linear-Time Sequence Modeling with Selective State Spaces, a 2023 paper by Albert Gu and Tri Dao that rethinks how AI handles long sequences. Unlike Transformers, which compare every token to every other, Mamba processes information linearly and selectively, remembering only what matters.

    This marks a shift toward faster, more efficient architectures, a possible glimpse into the post-Transformer era.
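
    The "linear and selective" idea can be sketched as a scalar recurrence. This is a deliberately simplified illustration, not Mamba's hardware-aware kernel: the state is one number, and `W_B`/`W_C` are assumed illustrative weights. What it does show is the two key properties, a per-token O(1) state update (so the whole sequence is linear time), and input-dependent gates (the "selective" part that decides what to keep or forget).

    ```python
    import numpy as np

    def selective_scan(x, W_B, W_C, log_A):
        """Minimal 1-D selective state-space scan (illustrative only).

        Recurrence: h_t = a * h_{t-1} + b_t * x_t, y_t = c_t * h_t,
        where b_t and c_t depend on the current input x_t.
        One constant-cost update per token => linear in sequence length.
        """
        a = np.exp(log_A)  # fixed decay, in (0, 1) when log_A < 0
        h = 0.0
        ys = []
        for x_t in x:
            b_t = W_B * x_t        # input-dependent write gate ("selective")
            c_t = W_C * x_t        # input-dependent read-out
            h = a * h + b_t * x_t  # O(1) state update per step
            ys.append(c_t * h)
        return np.array(ys)
    ```

    Contrast with self-attention: there is no T-by-T comparison of tokens anywhere above, only a running state, which is why the cost grows linearly rather than quadratically with sequence length.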

    12 min
  • CLIP: Learning Transferable Visual Models From Natural Language Supervision
    2025/12/05

    When AI Learned to See:

    In this fourth episode of AI Papers Explained, we explore Learning Transferable Visual Models From Natural Language Supervision, the 2021 OpenAI paper that introduced CLIP. After Transformers, BERT, and GPT-3 reshaped how AI understands language, CLIP marked the moment when AI began to see through words. By training on 400 million image-text pairs, CLIP learned to connect vision and language without manual labels.
    This breakthrough opened the multimodal era, leading to DALL·E, GPT-4V, and Gemini.

    Discover how contrastive learning turned internet captions into visual intelligence.
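
    The contrastive objective can be sketched with plain NumPy. This is a minimal sketch in the spirit of CLIP's symmetric loss, assuming the image and text embeddings are already computed (the real model produces them with an image encoder and a text encoder): matched pairs sit on the diagonal of a similarity matrix and are pushed above every mismatched pair, in both directions.

    ```python
    import numpy as np

    def clip_loss(img_emb, txt_emb, temperature=0.07):
        """Symmetric contrastive loss over a batch of image/text pairs (sketch)."""
        # L2-normalise so dot products are cosine similarities.
        img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
        txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
        logits = img @ txt.T / temperature  # (batch, batch) similarity matrix
        labels = np.arange(len(logits))     # i-th image matches i-th caption

        def cross_entropy(l, y):
            l = l - l.max(axis=1, keepdims=True)  # numerical stability
            log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
            return -log_probs[np.arange(len(y)), y].mean()

        # Average the image->text and text->image directions.
        return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
    ```

    When the i-th image embedding lines up with the i-th caption embedding the loss goes to zero; shuffling the captions drives it up, which is exactly the training signal that aligns the two modalities.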

    12 min
No reviews yet