Artificial Developer Intelligence

By: Shimin Zhang, Dan Lasky & Rahul Yadav

Summary

Three engineer friends argue about AI so you don't have to. Shimin Zhang, Dan Lasky, and Rahul Yadav are working developers who've been watching AI transform their profession in real time, and they've got opinions on the robot takeover. Every week the three get together to riff on the latest AI news, geek out over research papers, roast each other's tool choices, and occasionally have an existential crisis about whether the craft is dying or just getting weird.

What you're signing up for:

- AI news without the LinkedIn cringe: model drops, acquisitions, open-source drama, and the other stuff that actually matters if you write code for a living.
- Technique corner: real tips from the trenches: spec-driven development, multi-agent orchestration, Claude.md tricks, and all the ways they've wasted hours so you don't have to.
- Two Minutes to Midnight: the show's running AI bubble tracker, complete with circular funding diagrams, hyperscaler CAPEX math, and a doomsday clock they keep arguing about moving.
- Deep dives that (occasionally) go deep: hallucination neurons, agentic memory, workflow automation economics, and the LLM architecture papers nobody else is covering because they're hard.
- Dan's Rant: Dan frequently gets mad about things. It's a whole thing.
- The feelings segment: Yes, Shimin reads Tennyson on a tech podcast. Yes, Rahul wrote an AI-generated country song. No, they're not sorry.

Three friends with strong opinions, questionable metaphors, and a genuine love for the craft they're also mourning. If you want to understand AI deeply, use it without embarrassing yourself, and laugh at the absurdity of it all, pull up a chair.

ADIPod · Politics & Government
Episodes
  • Ep 20: Claude Code Source Leak, Emotion Concepts in LLMs, and Surprising Facts AIs Know About Us.
    2026/04/10

    This week Rahul, Shimin, and Dan return after a two-week break to cover the leaked Claude Code CLI source code, new model releases (Qwen 3.6 and Gemma 4), Mario Zechner's essay on slowing down with AI-assisted coding, a fun segment on unexpected things AI knows about each host, and two deep dives: Anthropic's research on emotion concepts in LLMs and a paper on how sycophantic AI decreases pro-social intentions.

    Takeaways:

    • Claude Code's dual-track permission system uses both rule-based checks and an ML classifier to flag destructive bash commands
    • "Cognitive bankruptcy" — when cognitive debt interest payments come due and you can't pay
    • AI sycophancy parallels social media echo chambers; no market incentive to fix it
    • On-device models like Gemma 4 could save cloud costs by handling routine tasks (e.g., agent heartbeats)
    • Copilot's terms of service classify it as "for entertainment purposes only"
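
    The dual-track idea in the first takeaway can be sketched in a few lines: a fast, deterministic rule pass backed up by a probabilistic classifier, with a command blocked if either track flags it. The patterns and the classifier below are hypothetical stand-ins for illustration, not the leaked Claude Code implementation.

```python
import re

# Hypothetical deny-list rules, loosely inspired by the dual-track design
# discussed in the episode; NOT the actual Claude Code rules.
DESTRUCTIVE_PATTERNS = [
    re.compile(r"\brm\s+(-[a-zA-Z]*r[a-zA-Z]*f|-[a-zA-Z]*f[a-zA-Z]*r)\b"),  # rm -rf variants
    re.compile(r"\bmkfs(\.\w+)?\b"),          # filesystem formatting
    re.compile(r">\s*/dev/sd[a-z]\b"),        # writing to raw block devices
]

def rule_track(command: str) -> bool:
    """Track 1: fast, deterministic regex rules."""
    return any(p.search(command) for p in DESTRUCTIVE_PATTERNS)

def classifier_track(command: str) -> float:
    """Track 2: stand-in for an ML classifier returning P(destructive).
    A real system would call a trained model here; this toy version just
    scores the fraction of known-risky tokens in the command."""
    risky_tokens = {"rm", "dd", "shred", "truncate"}
    tokens = command.split()
    hits = sum(t in risky_tokens for t in tokens)
    return min(1.0, hits / max(len(tokens), 1) * 2)

def is_destructive(command: str, threshold: float = 0.5) -> bool:
    """A command is blocked if either track flags it."""
    return rule_track(command) or classifier_track(command) >= threshold
```

    The appeal of the two-track design is that the rule pass catches well-known footguns with zero false negatives on its patterns, while the classifier covers novel phrasings the rules miss.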

    Resources Mentioned
    Entire Claude Code CLI source code leaks thanks to exposed map file
    I Read the Leaked Claude Code Source — Here's What I Found
    The Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more
    Claude Code Unpacked
    Qwen3.6-Plus: Towards Real World Agents
    Gemma 4 Announcement
    Thoughts on slowing the fuck down
    Emotion concepts and their function in a large language model
    Sycophantic AI decreases prosocial intentions and promotes dependence

    Chapters

    • (00:00) - Introduction and Host Updates
    • (01:45) - Claude Code Source Code Leak
    • (12:49) - New Model News and Open Source Developments
    • (20:51) - Post-Processing and AI Anxiety
    • (25:35) - Unexpected Insights from AI
    • (33:12) - Exploring Emotional Concepts in AI
    • (39:15) - The Dangers of Sycophantic AI
    • (52:39) - Concluding Thoughts and Future Considerations

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai


    54 min
  • Ep 19: Thinking Fast Slow and Artificial, Meta's Trouble with Rogue Agents, and FOMO in the Age of AI
    2026/03/27

    This week, Rahul, Shimin, and Dan cover Claude Code's new channels and scheduling features, a Meta security incident caused by AI-generated advice, Anthropic's survey of 81,000 people on AI expectations, Dan's vibe-coded vector memory CLI project, a deep dive on the paper "Thinking, Fast, Slow and Artificial" about cognitive surrender to AI, a rant about AI tokens as employee compensation, and bubble watch updates including NVIDIA's trillion-dollar demand projections and OpenAI shutting down Sora.

    Takeaways:

    • Claude Code is rapidly absorbing community-developed workflows — the moat may only be in the general model capabilities, not tooling
    • The Meta incident illustrates the emerging pattern of AI-caused production incidents and the need for process guardrails around agent usage
    • Cognitive surrender to AI creates a widening gap: those with high need-for-cognition benefit more while those who dislike effortful thinking defer even more
    • AI confidence inflation (12 percentage point boost) may stem from treating AI like authoritative reference material (encyclopedias, Wikipedia)
    • Historical technology resistance (Socrates on writing, farmers on tractors) suggests the battle against AI adoption may already be lost
    • OpenAI shutting down Sora just 4 months after a 3-year Disney partnership signals deeper financial or strategic issues

    Resources Mentioned
    Push events into a running session with channels
    Perhaps not Boring Technology after all
    Meta is having trouble with rogue AI agents
    What 81,000 people want from AI
    Dan's vec-memory-cli
    Thinking—Fast, Slow, and Artificial
    Are AI tokens the new signing bonus or just a cost of doing business?
    Jensen Huang just put Nvidia’s Blackwell and Vera Rubin sales projections into the $1 trillion stratosphere
    Accelerated FOMO in the Age of AI
    OpenAI shutters AI video generator Sora in abrupt announcement

    Connect with ADIPod

    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr 13 min
  • Ep 18: 8 Levels of AI Engineering, Meta AI Delays, and LLM Neuroanatomy
    2026/03/20

    This week, Dan, Shimin & Rahul cover Meta's struggles with its delayed "Avocado" AI model and potential Gemini licensing, NVIDIA's enterprise-ready NemoClaw fork of OpenClaw, SWE-bench analysis showing PRs wouldn't pass human review, prompting superstitions and developer identity, the 8 levels of agentic engineering, mainstream media framing of AI coding, legal liability for agent-written code, and a deep dive into LLM neuroanatomy where a researcher topped leaderboards by repeating model layers without changing weights.

    Takeaways:

    • Meta may end up licensing Gemini despite massive AI investment — mirroring Apple's path
    • SWE-bench failures were mostly code quality, not functionality, suggesting "good enough" may be good enough with a proper agents.md
    • A coworker analyzed 4.5 years of PRs to create a personalized coding style document for AI priming
    • The fastest software paradigm adoption cycle ever may be the claw/agent paradigm
    • Legal frameworks and insurance haven't caught up to agent-written code shipping to production
    • Repeating later model layers (the "thinking" layers) can boost performance without fine-tuning — raising questions about whether chain-of-thought reasoning is essentially exercising these layers repeatedly
    • Developers compared to ancient Egyptian scribes — language literacy as leverage
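
    The layer-repetition takeaway above can be illustrated with a toy model: treat the network as an ordered list of layer functions and, at inference time, apply the later ("thinking") layers extra times while reusing the same objects, i.e. the same weights. The layer functions here are hypothetical stand-ins, not the researcher's actual transformer code.

```python
from typing import Callable, List

# Toy sketch of inference-time layer repetition: the "model" is just a
# list of functions applied in order. Real transformer blocks would be
# attention + MLP layers; this is a hypothetical illustration only.
Layer = Callable[[float], float]

def run(layers: List[Layer], x: float) -> float:
    """Forward pass: apply each layer in sequence."""
    for layer in layers:
        x = layer(x)
    return x

def repeat_later_layers(layers: List[Layer], start: int, times: int) -> List[Layer]:
    """Repeat layers[start:] `times` extra times, reusing the SAME layer
    objects (same weights) rather than adding new parameters."""
    head, tail = layers[:start], layers[start:]
    return head + tail * (1 + times)

base = [lambda x: x + 1, lambda x: x * 2]             # stand-in "early" and "late" layers
deeper = repeat_later_layers(base, start=1, times=1)  # the late layer now runs twice
```

    Because `repeat_later_layers` only duplicates references, the "deeper" model has more compute per token but an unchanged parameter count, which is the effect the deep dive connects to chain-of-thought reasoning exercising those layers repeatedly.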

    Resources Mentioned
    Meta Delays Rollout of New A.I. Model After Performance Concerns
    NVIDIA NemoClaw
    Research note: Many SWE-bench-Passing PRs Would Not Be Merged into Main
    The Collective Superstitions of People Who Talk to Machines
    The 8 Levels of Agentic Engineering
    Coding After Coders: The End of Computer Programming as We Know It
    Built by Agents, Tested by Agents, Trusted by Whom?
    LLM Neuroanatomy: How I Topped the LLM Leaderboard Without Changing a Single Weight

    Chapters

    • (00:00) - Introduction to AI in Software Development
    • (02:42) - Meta's AI Model Delays and Market Position
    • (09:51) - NVIDIA's New AI Developments
    • (13:58) - Benchmarking AI Models and Code Quality
    • (19:00) - Techniques Corner: AI Prompting and Creativity
    • (22:56) - The Evolution of Coding and Creativity
    • (28:46) - Levels of Agentic Engineering
    • (34:58) - Mainstream Perspectives on AI and Software Development
    • (43:00) - Trusting AI-Generated Code
    • (44:40) - Metrics for Success in Autonomous Teams
    • (46:59) - Legal and Ethical Implications of Autonomous Code
    • (50:21) - Innovations in Language Model Architectures
    • (01:01:02) - User Experience Challenges in Tech Development
    • (01:03:47) - Market Predictions and Financial Insights

    Connect with ADIPod
    • Email us at humans@adipod.ai if you have any feedback, requests, or just want to say hello!
    • Check out our website: www.adipod.ai
    1 hr 8 min
No reviews yet