Cover art for Artificial Developer Intelligence

Artificial Developer Intelligence

By: Shimin Zhang & Dan Lasky

About this content

The Artificial Developer Intelligence (ADI) podcast is a weekly talk show where hosts Dan Lasky and Shimin Zhang (two AI Filthy Casuals) discuss the latest news, tools, and techniques in AI-enabled software development. The show is for the 99% of software engineers who need to ship features, not fine-tune large language models. We cut through the hype to find the tools and techniques that actually work for us, and discuss the latest "LLM wars" and "vibe coding" trends with a healthy dose of skepticism and humor. We also discuss any AI papers that catch our eye, no math background required. ADI will either document our journey to survive and thrive in the age of AI, or our descent into AI madness.
Episodes
  • Episode 5: How Anthropic Engineers use AI, Spec Driven Development, and LLM Psychological Profiles
    2025/12/12

    In this episode, Shimin and Dan explore the evolving landscape of AI in software engineering, discussing the implications of the Claude Opus 4.5 soul document, the ethical considerations of AI models, and the impact of AI on developer productivity. They delve into spec-driven development, the latest advancements in AI models like DeepSeek v3.2, and the intersection of AI and mental health. The conversation also touches on the potential AI bubble and the challenges faced by developers in integrating AI tools effectively.


    Takeaways

    • The Claude Opus 4.5 soul document reveals insights into AI model training.
    • Spec-driven development is a promising approach for AI-assisted coding.
    • DeepSeek v3.2 showcases advancements in reasoning models.
    • AI models can exhibit traits similar to human emotions and traumas.
    • Skills in AI may not always resolve context issues effectively.

    Resources Mentioned
    How AI is transforming work at Anthropic
    Claude 4.5 Opus Soul Document
    12 Factor Agents
    Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl
    From DeepSeek V3 to V3.2: Architecture, Sparse Attention, and RL Updates
    When AI Takes the Couch: Psychometric Jailbreaks Reveal Internal Conflict in Frontier Models
    Are we really repeating the telecoms crash with AI datacenters?
    Anthropic CEO weighs in on AI bubble talk and risk-taking among competitors
    Time until the AI bubble bursts
    Microsoft’s Attempts to Sell AI Agents Are Turning Into a Disaster

    Chapters
    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    57 min
  • Episode 4: OpenAI Code Red, TPU vs GPU and More Autonomous Coding Agents
    2025/12/05

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan discuss the evolving landscape of AI in software engineering, touching on topics such as OpenAI's recent challenges, the significance of Google TPUs, and effective techniques for working with large language models. They also delve into a deep dive on general agentic memory, share insights on code quality, and assess the current state of the AI bubble.


    Takeaways

    • Google's TPUs are designed specifically for AI inference, offering advantages over traditional GPUs.
    • Effective use of large language models requires avoiding common anti-patterns.
    • AI adoption rates are showing signs of flattening out, particularly among larger firms.
    • General agentic memory can enhance the performance of AI models by improving context management.
    • Code quality remains crucial, even as AI tools make coding easier and faster.
    • Smaller, more frequent code reviews can enhance team communication and project understanding.
    • AI models are not infallible; they require careful oversight and validation of generated code.
    • The future of AI may hinge on research rather than mere scaling of existing models.


    Resources Mentioned
    OpenAI Code Red
    The chip made for the AI inference era – the Google TPU
    Anti-patterns while working with LLMs
    Writing a good claude md
    Effective harnesses for long-running agents
    General Agentic Memory Via Deep Research
    AI Adoption Rates Starting to Flatten Out
    A trillion dollars is a terrible thing to waste

    Chapters
    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    1 hr 4 min
  • Claude Opus 4.5, Olmo 3, and a Paper on Diffusion + Auto Regression
    2025/11/29

    In this episode of Artificial Developer Intelligence, hosts Shimin and Dan explore the latest advancements in AI models, including the release of Claude Opus 4.5 and Gemini 3. They discuss the implications of these models on software engineering, the rise of open-source models like Olmo 3, and the enhancements in the Claude Developer Platform. The conversation also delves into the challenges of relying on AI for coding tasks, the potential pitfalls of the AI bubble, and the future of written exams in the age of AI.

    Takeaways

    • Claude Opus 4.5 sets new benchmarks, enhancing usability and reducing token consumption.
    • The introduction of open-source models like Olmo 3 is a significant development in AI.
    • The future of written exams may be challenged by AI's ability to generate human-like responses.
    • Relying too heavily on AI can lead to a lack of critical thinking and problem-solving skills.
    • The AI bubble clock is at 25 seconds to midnight.
    • Recent research suggests that AI models can improve their performance by emulating query-based search.
    • The importance of prompt engineering in AI interactions is highlighted.

    Resources Mentioned
    Introducing Claude Opus 4.5
    Build with Nano Banana Pro, our Gemini 3 Pro Image model
    Andrej Karpathy's Post about Nano Banana Pro
    Olmo 3: Charting a path through the model flow to lead open-source AI
    Introducing advanced tool use on the Claude Developer Platform
    TiDAR: Think in Diffusion, Talk in Autoregression
    SSRL: Self-Search Reinforcement Learning
    Mira Murati's Thinking Machines seeks $50 billion valuation in funding talks, Bloomberg News reports
    Boom, bubble, bust, boom. Why should AI be different?
    Nvidia didn’t save the market. What’s next for the AI trade?

    Chapters

    • (00:00) - Introduction to Artificial Developer Intelligence
    • (01:25) - Claude Opus 4.5
    • (07:02) - Exploring Gemini 3 and Image Models
    • (11:24) - Olmo 3 and The Rise of Open Flow Models
    • (15:46) - Innovations in AI Tools and Platforms
    • (19:33) - Research Insights: Diffusion and Auto-Regression Models
    • (23:39) - Advancements in AI Output Efficiency
    • (25:45) - Exploring Self Search Reinforcement Learning
    • (27:48) - The Dilemma of Language Models
    • (30:11) - Prompt Engineering and Search Integration
    • (32:55) - Dan's Rants on AI Limitations
    • (38:17) - 2 Minutes to Midnight
    • (46:41) - Outro

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website www.adipod.ai
    48 min
No reviews yet