Episodes

  • GPT-5 Release Fallout, AGI Timeline, Google's Genie 3 and Meta's DINO V3 | EP. 45
    2025/09/03

    In this episode of Hidden Layers, we dive into the most important AI developments of the month. We cover OpenAI’s highly anticipated and controversial GPT-5 release, debate where we really are on the AGI timeline, explore groundbreaking new world models like Google’s Genie 3 and Tencent’s Hunyuan GameCraft, and unpack Meta’s DINO V3 image encoder breakthrough.

    25 min
  • Bridging Physics and AI for Smarter Climate Decisions | EP. 44
    2025/08/16

    In this episode of Hidden Layers, host Ron Green talks with Dr. Hannah Lu, assistant professor at the University of Texas at Austin and core faculty at the Oden Institute for Computational Engineering and Sciences. Dr. Lu is pioneering the use of AI-powered surrogate models to make complex scientific simulations—like CO₂ absorption in geological formations—faster, more accurate, and more useful for real-world decision-making.

    They discuss:

    • How surrogate models work and why they’re so powerful
    • The challenges of applying AI to physics-based systems
    • How digital twins and uncertainty quantification are shaping the future of environmental modeling
    • The intersection of generative AI, physics constraints, and climate science
    28 min
  • Apple AI Collapse, Diffusion Video Boom, Copyright Wars & More | EP. 43
    2025/07/16

    In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton unpack July’s biggest AI developments—from flawed reasoning tests to surprising training breakthroughs.

    Apple’s “Illusion of Thinking” paper draws sharp critiques—from both humans and language models. Meta revives a forgotten 2019 attention mechanism to reshape scaling laws. Video generation tools from Black Forest Labs and others hit new levels of realism and interactivity. Federal courts weigh in on Anthropic and Meta’s use of copyrighted training data. A one-line tweak in training recurrent models dramatically boosts performance on long sequences. Cloudflare announces it will block AI scrapers by default—though it might be too late.

    From Transformer alternatives to copyright battles, this episode dives into the fast-moving intersection of AI research, engineering, and regulation.

    28 min
  • Rewiring AI: What Happens When You Start with the Brain, Not the Data | EP. 42
    2025/06/18

    In this episode of Hidden Layers, Ron Green sits down with Dr. Karl Friston—world-renowned neuroscientist and originator of the Free Energy Principle—and Dan Mapes, founder of Verses AI and the Spatial Web Foundation. Together, they explore how neuroscience is beginning to reshape artificial intelligence.

    They break down complex but powerful ideas like active inference, biologically plausible AI, and collective intelligence. You'll hear how concepts from brain science are influencing next-gen AI architectures and what the future might hold beyond large language models.

    From the limitations of backpropagation to the promise of decentralized, embodied, and domain-specific models, this is a deep dive into the future of intelligent systems—and the science behind them.

    40 min
  • Continuous Thought Machines, Absolute Zero, BLIP3-o, Gemini Diffusion & more | EP. 41
    2025/05/29

    In this episode of Hidden Layers: Decoded, Ron Green, Dr. ZZ Si, and Michael Wharton explore the latest AI breakthroughs, including Sakana AI’s biologically inspired “Continuous Thought Machines,” the self-taught coding model Absolute Zero, and Salesforce’s unified vision-language system BLIP3-o. They discuss the growing importance of reinforcement learning in a data-constrained world, Google’s diffusion-based language and video models, and Anthropic’s industry-leading interpretability efforts. The team also covers Apple’s AI missteps and a new study revealing why single, well-structured prompts outperform long chat sessions. Throughout, they reflect on alignment risks, emergent reasoning, and the changing shape of model development and training strategy.

    45 min
  • Evolving Minds: Dr. Risto Miikkulainen on Creativity, Evolution, and the Next Wave of AI | EP. 40
    2025/04/29

    In this episode of Hidden Layers, Ron Green sits down with Dr. Risto Miikkulainen — Vice President of AI Research at Cognizant Advanced AI Labs and Professor of Computer Science at UT Austin — to explore the fascinating world of evolutionary computation. They dive deep into the differences between supervised learning, reinforcement learning, and evolutionary techniques, and why evolutionary approaches offer unique advantages for creativity, scalability, and innovation in AI.

    Dr. Miikkulainen shares real-world examples of unexpected discoveries, from cyber agriculture breakthroughs to designing new AI architectures. They also discuss the future of multi-agent systems, surrogate modeling, and how evolutionary computation could help us better understand the emergence of intelligence and language. Plus, Dr. Miikkulainen previews his upcoming book Neuroevolution: Harnessing Creativity in AI Model Design.

    37 min
  • Anthropic Interpretability, GPT-4 Image Gen, Latent Reasoning, Synthetic Data & more | EP. 39
    2025/04/09

    In this episode of Hidden Layers, Ron Green talks with Dr. ZZ Si, Michael Wharton, and Reed Coke about recent AI developments. They cover Anthropic’s work on Claude 3.5 and model interpretability, OpenAI’s GPT-4 image generation and its underlying architecture, and a new approach to latent reasoning from the Max Planck Institute. They also discuss synthetic data in light of NVIDIA’s acquisition of Gretel AI and reflect on the delayed rollout of Apple Intelligence. The conversation explores what these advances reveal about how AI models reason, behave, and can (or can’t) be controlled.

    59 min
  • Can AI Really Think? The Neuroscience of Language Models | EP. 38
    2025/03/19

    In this episode of Hidden Layers, host Ron Green sits down with Dr. Anna Ivanova, Assistant Professor of Psychology at Georgia Tech and Director of the Language, Intelligence, and Thought Lab. Dr. Ivanova's research explores the intricate relationship between language, cognition, and artificial intelligence, shedding light on how the brain processes language and how large language models (LLMs) compare to human thought.

    32 min