Episodes

  • Episode 1: Introducing Vanishing Gradients
    2022/02/16
    In this brief introduction, Hugo introduces the rationale behind launching a new data science podcast and gets excited about his upcoming guests: Jeremy Howard, Rachael Tatman, and Heather Nolis! Original music, bleeps, and blops by local Sydney legend PlaneFace (https://planeface.bandcamp.com/album/fishing-from-an-asteroid)!
    5 min
  • Episode 47: The Great Pacific Garbage Patch of Code Slop with Joe Reis
    2025/04/07
    What if the cost of writing code dropped to zero — but the cost of understanding it skyrocketed? In this episode, Hugo sits down with Joe Reis to unpack how AI tooling is reshaping the software development lifecycle — from experimentation and prototyping to deployment, maintainability, and everything in between. Joe is the co-author of Fundamentals of Data Engineering and a longtime voice on the systems side of modern software. He’s also one of the sharpest critics of “vibe coding” — the emerging pattern of writing software by feel, with heavy reliance on LLMs and little regard for structure or quality.
    We dive into:
    • Why “vibe coding” is more than a meme — and what it says about how we build today
    • How AI tools expand the surface area of software creation — for better and worse
    • What happens to technical debt, testing, and security when generation outpaces understanding
    • The changing definition of “production” in a world of ephemeral, internal, or just-good-enough tools
    • How AI is flattening the learning curve — and threatening the talent pipeline
    • Joe’s view on what real craftsmanship means in an age of disposable code
    This conversation isn’t about doom, and it’s not about hype. It’s about mapping the real, messy terrain of what it means to build software today — and how to do it with care.
    LINKS
    * Joe's Practical Data Modeling Newsletter on Substack (https://practicaldatamodeling.substack.com/)
    * Joe's Practical Data Modeling Server on Discord (https://discord.gg/HhSZVvWDBb)
    * Vanishing Gradients YouTube Channel (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    🎓 Want to go deeper? Check out my course: Building LLM Applications for Data Scientists and Software Engineers. Learn how to design, test, and deploy production-grade LLM systems — with observability, feedback loops, and structure built in. This isn’t about vibes or fragile agents. It’s about making LLMs reliable, testable, and actually useful. Includes over $2,500 in compute credits and guest lectures from experts at DeepMind, Moderna, and more. Cohort starts April 7 — use this link for a 10% discount (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=LLM10)
    1 hr 19 min
  • Episode 46: Software Composition Is the New Vibe Coding
    2025/04/03
    What if building software felt more like composing than coding? In this episode, Hugo and Greg explore how LLMs are reshaping the way we think about software development—from deterministic programming to a more flexible, prompt-driven, and collaborative style of building. It’s not just hype or grift—it’s a real shift in how we express intent, reason about systems, and collaborate across roles. Hugo speaks with Greg Ceccarelli—co-founder of SpecStory, former CPO at Pluralsight, and Director of Data Science at GitHub—about the rise of software composition and how it changes the way individuals and teams create with LLMs.
    We dive into:
    - Why software composition is emerging as a serious alternative to traditional coding
    - The real difference between vibe coding and production-minded prototyping
    - How LLMs are expanding who gets to build software—and how
    - What changes when you focus on intent, not just code
    - What Greg is building with SpecStory to support collaborative, traceable AI-native workflows
    - The challenges (and joys) of debugging and exploring with agentic tools like Cursor and Claude
    We’ve removed the visual demos from the audio—but you can catch our live-coded Chrome extension and JFK document explorer on YouTube. Links below.
    LINKS
    * JFK Docs Vibe Coding Demo (YouTube) (https://youtu.be/JpXCkuV58QE)
    * Chrome Extension Vibe Coding Demo (YouTube) (https://youtu.be/ESVKp37jDwc)
    * Meditations on Tech (Greg’s Substack) (https://www.meditationsontech.com/)
    * Simon Willison on Vibe Coding (https://simonwillison.net/2025/Mar/19/vibe-coding/)
    * Johnno Whitaker: On Vibe Coding (https://johnowhitaker.dev/essays/vibe_coding.html)
    * Tim O’Reilly – The End of Programming (https://www.oreilly.com/radar/the-end-of-programming-as-we-know-it/)
    * Vanishing Gradients YouTube Channel (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    * Greg Ceccarelli on LinkedIn (https://www.linkedin.com/in/gregceccarelli/)
    * Greg’s Hacker News Post on GOOD (https://news.ycombinator.com/item?id=43557698)
    * SpecStory: GOOD – Git Companion for AI Workflows (https://github.com/specstoryai/getspecstory/blob/main/GOOD.md)
    🎓 Want to go deeper? Check out my course: Building LLM Applications for Data Scientists and Software Engineers. Learn how to design, test, and deploy production-grade LLM systems — with observability, feedback loops, and structure built in. This isn’t about vibes or fragile agents. It’s about making LLMs reliable, testable, and actually useful. Includes over $2,500 in compute credits and guest lectures from experts at DeepMind, Moderna, and more. Cohort starts April 7 — use this link for a 10% discount (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?promoCode=LLM10)
    🔍 Want to help shape the future of SpecStory? Greg and the team are looking for design partners for their new SpecStory Teams product—built for collaborative, AI-native software development. If you're working with LLMs in a team setting and want to influence the next wave of developer tools, you can apply here: 👉 specstory.com/teams (https://specstory.com/teams)
    1 hr 9 min
  • Episode 45: Your AI application is broken. Here’s what to do about it.
    2025/02/20
    Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually fix a broken AI app? In this episode, Hugo speaks with Hamel Husain, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data.
    We dive into:
    - Why “look at your data” is the best debugging advice no one follows
    - How spreadsheet-based error analysis can uncover failure modes faster than complex dashboards
    - The role of synthetic data in bootstrapping evaluation
    - When to trust LLM judges—and when they’re misleading
    - Why most AI dashboards measuring truthfulness, helpfulness, and conciseness are often a waste of time
    If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production.
    LINKS
    * The podcast livestream on YouTube (https://youtube.com/live/Vz4--82M2_0?feature=share)
    * Hamel's blog (https://hamel.dev/)
    * Hamel on Twitter (https://x.com/HamelHusain)
    * Hugo on Twitter (https://x.com/hugobowne)
    * Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    * Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    * Building LLM Applications for Data Scientists and SWEs, Hugo's course on Maven (use code VG25 for 25% off) (https://maven.com/s/course/d56067f338)
    * Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To) (https://maven.com/p/ed7a72/llm-agents-when-to-use-them-and-when-not-to?utm_medium=ll_share_link&utm_source=instructor)
    1 hr 18 min
  • Episode 44: The Future of AI Coding Assistants: Who’s Really in Control?
    2025/02/04
    AI coding assistants are reshaping how developers write, debug, and maintain code—but who’s really in control? In this episode, Hugo speaks with Tyler Dunn, CEO and co-founder of Continue, an open-source AI-powered code assistant that gives developers more customization and flexibility in their workflows.
    We dive into:
    - The trade-offs between proprietary vs. open-source AI coding assistants—why open source might be the future
    - How structured workflows, modular AI, and customization help developers maintain control over their tools
    - The evolution of AI-powered coding, from autocomplete to intelligent code suggestions and beyond
    - Why the best developer experiences come from sensible defaults with room for deeper configuration
    - The future of LLM-based software engineering, where fine-tuning models on personal and team-level data could make AI coding assistants even more effective
    With companies increasingly integrating AI into development workflows, this conversation explores the real impact of these tools—and the importance of keeping developers in the driver's seat.
    LINKS
    * The podcast livestream on YouTube (https://youtube.com/live/8QEgVCzm46U?feature=share)
    * Continue's website (https://www.continue.dev/)
    * Continue is hiring! (https://www.continue.dev/about-us)
    * amplified.dev: We believe in a future where developers are amplified, not automated (https://amplified.dev/)
    * Beyond Prompt and Pray: Building Reliable LLM-Powered Software in an Agentic World (https://www.oreilly.com/radar/beyond-prompt-and-pray/)
    * LLMOps Lessons Learned: Navigating the Wild West of Production LLMs 🚀 (https://www.zenml.io/blog/llmops-lessons-learned-navigating-the-wild-west-of-production-llms)
    * Building effective agents by Erik Schluntz and Barry Zhang, Anthropic (https://www.anthropic.com/research/building-effective-agents)
    * Ty on LinkedIn (https://www.linkedin.com/in/tylerjdunn/)
    * Hugo on Twitter (https://x.com/hugobowne)
    * Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    * Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    1 hr 34 min
  • Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production
    2025/01/16
    Hugo speaks with Alex Strick van Linschoten, Machine Learning Engineer at ZenML and creator of a comprehensive LLMOps database documenting over 400 deployments. Alex's extensive research into real-world LLM implementations gives him unique insight into what actually works—and what doesn't—when deploying AI agents in production.
    We dive into:
    - The current state of AI agents in production, from successes to common failure modes
    - Practical lessons learned from analyzing hundreds of real-world LLM deployments
    - How companies like Anthropic, Klarna, and Dropbox are using patterns like ReAct, RAG, and microservices to build reliable systems
    - The evolution of LLM capabilities, from expanding context windows to multimodal applications
    - Why most companies still prefer structured workflows over fully autonomous agents
    We also explore real-world case studies of production hurdles, including cascading failures, API misfires, and hallucination challenges. Alex shares concrete strategies for integrating LLMs into your pipelines while maintaining reliability and control. Whether you're scaling agents or building LLM-powered systems, this episode offers practical insights for navigating the complex landscape of LLMOps in 2025.
    LINKS
    * The podcast livestream on YouTube (https://youtube.com/live/-8Gr9fVVX9g?feature=share)
    * The LLMOps database (https://www.zenml.io/llmops-database)
    * All blog posts about the database (https://www.zenml.io/category/llmops)
    * Anthropic's Building effective agents essay (https://www.anthropic.com/research/building-effective-agents)
    * Alex on LinkedIn (https://www.linkedin.com/in/strickvl/)
    * Hugo on Twitter (https://x.com/hugobowne)
    * Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    * Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    * Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    1 hr 1 min
  • Episode 42: Learning, Teaching, and Building in the Age of AI
    2025/01/04
    In this episode of Vanishing Gradients, the tables turn as Hugo sits down with Alex Andorra, host of Learning Bayesian Statistics. Hugo shares his journey from mathematics to AI, reflecting on how Bayesian inference shapes his approach to data science, teaching, and building AI-powered applications. They dive into the realities of deploying LLM applications, overcoming “proof-of-concept purgatory,” and why first principles and iteration are critical for success in AI. Whether you’re an educator, software engineer, or data scientist, this episode offers valuable insights into the intersection of AI, product development, and real-world deployment.
    LINKS
    * The podcast on YouTube (https://www.youtube.com/watch?v=BRIYytbqtP0)
    * The original podcast episode (https://learnbayesstats.com/episode/122-learning-and-teaching-in-the-age-of-ai-hugo-bowne-anderson)
    * Alex Andorra on LinkedIn (https://www.linkedin.com/in/alex-andorra/)
    * Hugo on LinkedIn (https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/)
    * Hugo on Twitter (https://x.com/hugobowne)
    * Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    * Hugo's "Building LLM Applications for Data Scientists and Software Engineers" course (https://maven.com/s/course/d56067f338)
    1 hr 20 min
  • Episode 41: Beyond Prompt Engineering: Can AI Learn to Set Its Own Goals?
    2024/12/30
    Hugo Bowne-Anderson hosts a panel discussion from the MLOps World and Generative AI Summit in Austin, exploring the long-term growth of AI by distinguishing real problem-solving from trend-based solutions. If you're navigating the evolving landscape of generative AI, productionizing models, or questioning the hype, this episode dives into the tough questions shaping the field.
    The panel features:
    - Ben Taylor (Jepson) (https://www.linkedin.com/in/jepsontaylor/) – CEO and Founder at VEOX Inc., with experience in AI exploration, genetic programming, and deep learning
    - Joe Reis (https://www.linkedin.com/in/josephreis/) – Co-founder of Ternary Data and author of Fundamentals of Data Engineering
    - Juan Sequeda (https://www.linkedin.com/in/juansequeda/) – Principal Scientist and Head of AI Lab at Data.World, known for his expertise in knowledge graphs and the semantic web
    The discussion unpacks essential topics such as:
    - The shift from prompt engineering to goal engineering—letting AI iterate toward well-defined objectives
    - Whether generative AI is having an electricity moment or more of a blockchain trajectory
    - The combinatorial power of AI to explore new solutions, drawing parallels to AlphaZero redefining strategy games
    - The POC-to-production gap and why AI projects stall
    - Failure modes, hallucinations, and governance risks—and how to mitigate them
    - The disconnect between executive optimism and employee workload
    Hugo also mentions his upcoming workshop on escaping proof-of-concept purgatory, which has evolved into a Maven course, "Building LLM Applications for Data Scientists and Software Engineers", launching in January (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0&utm_medium=partner&utm_source=instructor). Vanishing Gradients listeners can get 25% off the course (use the code VG25), with $1,000 in Modal compute credits included.
    A huge thanks to Dave Scharbach and the Toronto Machine Learning Society for organizing the conference and to the audience for their thoughtful questions. As we head into the new year, this conversation offers a reality check amidst the growing AI agent hype.
    LINKS
    * Hugo on Twitter (https://x.com/hugobowne)
    * Hugo on LinkedIn (https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/)
    * Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    * "Building LLM Applications for Data Scientists and Software Engineers" course (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0&utm_medium=partner&utm_source=instructor)
    44 min