Episodes

  • AI Hardware: GPUs, TPUs and Beyond
    2026/04/28

    This episode is all about the specialized hardware that makes modern AI possible. We explain how GPUs became the workhorses of deep learning by offering massive parallelism for matrix math, and how companies like Google went further to build TPUs (Tensor Processing Units) optimized for neural network workloads. You’ll hear about the latest AI chips, from NVIDIA’s powerful GPUs driving large model training, to emerging AI accelerators like Graphcore’s IPU, Cerebras’s wafer-scale engine, and even AI on the edge (Apple’s neural engines, etc.). We discuss what each brings in terms of speed, memory, and efficiency, how they’re deployed, and give a peek into the data centers (and devices) where AI calculations run.

    26 min
  • Synthetic Data: Artificial Data for Real Insights
    2026/04/14

    In this episode, we explore how synthetic data is created and used to improve AI models. Synthetic data refers to artificial datasets generated by models (like GANs or language models) that mimic real data. We discuss how this can help in situations with little real data or strict privacy requirements: for example, generating realistic medical records to train an AI without exposing any patient’s information. You’ll learn about techniques for producing synthetic images, text, and tabular data, and how they are validated to ensure they reflect real-world patterns. We also cover the benefits and challenges of synthetic data, from reducing bias and augmenting rare cases, to ensuring the synthetic data doesn’t inadvertently leak sensitive info.

    31 min
  • Explainable AI: Opening the Black Box
    2026/03/31

    In this episode, we look at how researchers are making AI models more transparent and interpretable. We discuss techniques like SHAP values and LIME that explain model predictions by attributing importance to input features, so an AI system isn’t just a black box: you can understand why it made a decision. You’ll hear about example use cases (like explaining a medical AI’s diagnosis to a doctor or a loan model’s decision to a loan officer) and recent research into interpreting the internals of neural networks (from visualizing what vision models detect to “probing” language models’ knowledge). By the end, you’ll appreciate the growing toolkit for Explainable AI (XAI) and why it’s crucial for building trust in AI systems.

    25 min
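The local-surrogate idea behind LIME can be sketched in a few lines: perturb the inputs around one instance, query the black-box model, and fit a simple linear model whose weights act as feature importances. This is a minimal illustration with a toy "black box" whose true weights we know, so the recovered importances can be checked; real LIME also weights samples by proximity to the instance, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "black-box" model: here a known linear function, so we can
# verify that the explanation recovers the true feature weights.
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.0 * X[:, 2]

x0 = np.array([1.0, 1.0, 1.0])          # the instance to explain

# LIME-style recipe: sample perturbations near x0, query the model,
# and fit a linear surrogate via least squares.
X = x0 + rng.normal(scale=0.1, size=(200, 3))
y = black_box(X)
A = np.hstack([X, np.ones((200, 1))])   # add an intercept column
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# The surrogate's weights are the local feature importances.
print(np.round(coefs[:3], 2))  # approx [ 3. -2.  0.]
```

Because the toy model is exactly linear, the surrogate recovers its weights; for a real nonlinear model the weights describe behavior only in the neighborhood of x0.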
  • Aligning AI with Human Intent: RLHF in Action
    2026/03/17

    In this episode, we demystify how researchers teach AI models to behave helpfully and safely using Reinforcement Learning from Human Feedback (RLHF). We discuss why even very large models can generate undesired outputs and how RLHF addresses this by incorporating human preferences. You’ll learn how methods like InstructGPT were trained: first by gathering human-written demonstration responses, then by having humans rank model outputs to train a reward model, and finally using reinforcement learning (e.g. with PPO) to fine-tune the model so that it better aligns with what users want. We also talk about improvements like Constitutional AI and why aligning AI with human values is an ongoing challenge.

    25 min
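The reward-model step described above — humans rank model outputs, and a model is trained to score the preferred output higher — is commonly formulated as a Bradley-Terry pairwise loss. A minimal sketch of that loss (the scores here are made-up numbers, not outputs of a real reward model):

```python
import math

def pairwise_loss(r_chosen, r_rejected):
    """Bradley-Terry style loss used to train reward models from rankings:
    -log(sigmoid(r_chosen - r_rejected)) pushes the chosen response's
    score above the rejected one's."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# If the reward model already scores the preferred answer higher,
# the loss is small; if it prefers the rejected answer, the loss is large.
assert pairwise_loss(2.0, 0.0) < pairwise_loss(0.0, 2.0)
print(round(pairwise_loss(2.0, 0.0), 3), round(pairwise_loss(0.0, 2.0), 3))
```

The trained reward model then provides the reward signal that the RL step (e.g. PPO) optimizes against.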
  • AI for Code: How Models Write Software
    2026/03/03

    This episode explores the rise of AI coding assistants. We discuss how models like OpenAI’s Codex (which powers GitHub Copilot) are trained on millions of code repositories to generate software from natural language prompts. You’ll hear how these models can autocomplete functions or even draft whole programs, and what they’re capable of today, as well as their limits (like generating errors or insecure code if not carefully guided). We also talk about their impact on developer productivity and the future of programming, where AI becomes a pair programmer that can handle the boilerplate, letting developers focus on the creative parts of coding.

    31 min
  • Multimodal Models: Combining Vision, Language, and More
    2026/02/17

    This episode explores multimodal AI: models that can see, read, and even hear. We explain how models like OpenAI’s CLIP learn joint representations of images and text (by matching pictures with their captions), enabling capabilities like image captioning and visual search. You’ll learn why multimodal systems represent the next leap toward more human-like AI, processing text, images, and audio together for richer understanding. We also discuss recent multimodal breakthroughs (from GPT-4’s vision features to Google’s Gemini) and how they allow AI to analyze content the way we do with multiple senses.

    29 min
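The caption-matching idea behind CLIP can be sketched with cosine similarity: an image embedding and several text embeddings live in one shared space, and retrieval picks the closest caption. The vectors below are hand-made stand-ins (real CLIP produces them with learned image and text encoders):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings from an image encoder and a text encoder
# projected into the same space (toy values, for illustration only).
image_emb = np.array([0.9, 0.1, 0.0])
captions = {
    "a photo of a dog": np.array([0.8, 0.2, 0.1]),
    "a photo of a car": np.array([0.1, 0.1, 0.9]),
}

# Retrieval = pick the caption whose embedding is closest to the image's.
best = max(captions, key=lambda c: cosine_sim(image_emb, captions[c]))
print(best)  # -> "a photo of a dog"
```

Training pulls matching image-caption pairs together and pushes mismatched pairs apart, which is what makes this kind of nearest-caption lookup work at scale.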
  • Efficient Fine-Tuning: Adapting Large Models on a Budget
    2026/02/03

    This episode dives into strategies for fine-tuning gigantic AI models without needing massive compute. We explain parameter-efficient fine-tuning methods like LoRA (Low-Rank Adaptation), which freezes the original model and trains only small adapter weights, and QLoRA, which goes a step further by quantizing model parameters to 4-bit precision. You’ll learn why techniques like these have become essential for customizing large language models on modest hardware, how they preserve full performance, and what recent results (like fine-tuning a 65B model on a single GPU) mean for practitioners.

    29 min
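The core LoRA idea — freeze the pretrained weights and train only a small low-rank update — can be sketched in numpy. The matrix sizes here are toy values chosen for illustration; in a real model W would be one of the large attention or MLP weight matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix (d_out x d_in) -- never updated.
d_out, d_in, rank = 8, 16, 2
W = rng.normal(size=(d_out, d_in))

# Trainable low-rank adapters: B (d_out x r) and A (r x d_in).
# B starts at zero, so the adapted model initially equals the base model.
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    """Base output plus the low-rank update (B @ A) applied to x."""
    return W @ x + (B @ A) @ x

x = rng.normal(size=d_in)
# With B = 0 the adapter contributes nothing: outputs match the base model.
assert np.allclose(adapted_forward(x), W @ x)

# Only rank * (d_in + d_out) adapter parameters are trained,
# versus d_out * d_in for full fine-tuning.
print(rank * (d_in + d_out), "adapter params vs", d_out * d_in, "full")
```

At these toy sizes the adapters hold 48 parameters versus 128 for the full matrix; at real model scale the savings are far larger, and QLoRA compounds them by keeping the frozen W in 4-bit precision.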
  • Diffusion Models: AI Image Generation Through Noise
    2026/01/20

    In this episode, we break down what diffusion models are and why they’ve become the go-to method for AI image generation. You’ll learn how these models gradually add and remove noise to transform random pixels into coherent images, enabling use cases from art creation to image restoration. We also explore recent advances like latent diffusion, which compresses the generation process for efficiency, and discuss how diffusion techniques have achieved state-of-the-art results in text-to-image tasks while remaining flexible for control and guidance.

    25 min
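The "gradually add noise" half of a diffusion model is simple enough to sketch directly: a noise schedule beta_t defines how much signal survives at each step, and x_t is the original image scaled down plus Gaussian noise scaled up. This is a minimal sketch of the standard DDPM forward process on a toy 4x4 "image" (the schedule values are the commonly used linear defaults, and the learned reverse/denoising network is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule: beta_t grows from small to larger values.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)  # cumulative signal fraction at step t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0): scaled original plus Gaussian noise."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = rng.normal(size=(4, 4))    # a toy 4x4 "image"
x_early = q_sample(x0, t=10)    # still close to the original
x_late = q_sample(x0, t=T - 1)  # almost pure noise

# By the final step almost no signal remains.
assert alpha_bar[-1] < 1e-4
```

Generation runs this process in reverse: a trained network predicts the noise at each step and strips it away, turning pure noise back into an image. Latent diffusion applies the same machinery to a compressed latent instead of raw pixels, which is where the efficiency gain comes from.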