Ep 82 — Fixing Software Code Is Too Slow. Can AI Save Us?
Overview
In this episode of Tales from the Pros, hosts Michael Georgiou and Eric Lawrence are joined by Gal Vered, co-founder and CEO of Checksum AI, to break down one of the biggest bottlenecks in modern software development: fixing bugs and ensuring code quality in an AI-driven world.
AI can now generate software faster than ever. You can build features, ship MVPs, and even spin up entire applications in minutes. But speed introduces a new problem. How do you verify that the code actually works? And more importantly, how do you trust it in production?
The conversation cuts through the hype around AI coding and focuses on what really matters: quality, testing, and verification. Gal shares insights from his experience at Google and from building Checksum AI, explaining why most teams get stuck in endless bug-fixing loops, how AI can compound bad code patterns, and why strong testing systems are the only way to enable truly autonomous software development.
If you're building software with AI, struggling with bugs, or trying to scale beyond MVP without breaking everything, this episode gives a practical look at what it actually takes to ship reliable software today.
🎯 Highlights You Won’t Want to Miss
- Why AI-generated code often fails in production
- The real bottleneck in software development today
- How bug-fixing loops slow down engineering teams
- Why speed without verification creates bigger problems
- The role of testing in enabling autonomous AI developers
- How bad code patterns compound with AI
- The difference between code that works locally and production-ready code
- Why engineers still matter in an AI-driven workflow
🎧 Listen and Subscribe
Spotify: https://open.spotify.com/show/6QkUtrcNllUkqtq1fjlwnZ
Apple Podcasts: https://podcasts.apple.com/us/podcast/tales-from-the-pros/id1371067192
YouTube: https://www.youtube.com/@Imaginovation/podcasts
SoundCloud: https://soundcloud.com/talesfromthepros
💡 Key Takeaways
- AI can generate code quickly, but quality and verification remain major challenges
- Without proper testing, AI-generated code often leads to bugs and technical debt
- Fixing bugs after deployment is significantly more time-consuming than building features
- Strong testing pipelines are critical for scaling AI-driven development
- AI can amplify both good and bad coding patterns within a codebase
- Developers still play a key role in guiding, reviewing, and validating AI-generated code
- Confidence in code is just as important as finding bugs
- The future of software development depends on automated, continuous verification systems
🗂 Topics We Cover
- AI-generated code and its limitations
- Bug-fixing loops and technical debt
- Testing and verification in modern software development
- AI agents and autonomous engineering
- Vibe coding and rapid MVP creation
- Code quality vs. development speed
- Human vs. AI roles in software engineering
- The future of software testing and simulation
⏱️ Chapters
00:00 The hidden bottleneck in software development
05:10 AI-generated code vs real-world reliability
11:30 Why testing is the missing layer in AI coding
18:40 Escaping the bug-fixing loop
26:00 AI hype vs enterprise reality
33:20 Code quality, edge cases, and human thinking
40:00 The future of software testing and AI engineering