
Chaos Agents

By: Sara Chipps and Becca Lewy

Summary

Technologists Sara Chipps and Becca Lewy dive into the chaos of artificial intelligence—unpacking the tech, trends, and ideas reshaping how we work, create, and think. Smart, funny, and just a little bit existential.

Copyright 2026 Sara Chipps and Becca Lewy

Categories: Personal Success, Politics & Government, Personal Development
Episodes
  • Agents in Crypto, Agents at Work, and the Weird New Middle Management - With Meghan Heintz
    2026/02/03

    We’re joined by Meghan Heintz, founding engineer at Herd Labs, to break down where crypto and AI agents actually work. We talk prediction markets, smart contracts, wallets, rug pulls, and how AI can finally explain what happened on-chain.

    Then we zoom out to the human side: benchmarking and evals for agents, misinformation, the Mom Test, and what it feels like to manage coding agents instead of junior engineers.

    Links mentioned: Polymarket · Dune Analytics · Herd Labs · The Mom Test

    51 min
  • Can You Build Anything in a Week? GPUs, Code Gen, and the End of Engineers - With Harper Reed
    2026/01/20

    Becca just got back from NeurIPS, the academic AI conference that feels like an adult science fair. We dig into research on training large AI models across cheap GPUs and slow internet connections—and why that could dramatically lower the barrier to building AI.

    Then we’re joined by Harper Reed, CEO of 2389, for a wide-ranging conversation about code generation, coaching-based engineering teams, and why “production code” might have always been a myth. We talk vibe coding (begrudgingly), the shifting role of software engineers, taste vs. technical skill, and what happens when you can build almost anything in a week.

    Smart, funny, and a little unsettling—Chaos Agents at full volume.

    🎓 Academic AI & research culture
    1. NeurIPS (Conference on Neural Information Processing Systems)
    2. NeurIPS 2024 Accepted Papers

    🧠 Distributed training, GPUs & efficiency
    1. NVIDIA H100 Tensor Core GPU (referenced GPU class)
    2. Pluralis Research (distributed training across low-bandwidth networks)

    ⚙️ Core AI concepts mentioned
    1. GPU vs CPU explained (parallel vs sequential compute)
    2. Data Parallelism vs Model Parallelism (training overview)

    🧑‍💻 Code generation & developer tools
    1. Claude Code (Anthropic code-gen tooling)
    2. Cursor (AI-first code editor, discussed implicitly)

    🛠️ Agent workflows & infrastructure
    1. Matrix (open-source, decentralized chat protocol)
    2. Model Context Protocol (MCP) overview

    🧩 Utilities & recommendations
    1. Jesse Vincent’s Superpowers (Claude workflow enhancer)
    2. Fly.io (deployment platform referenced)
    3. Netlify (deployment & hosting)

    🧪 Related Chaos Agents context
    59 min
  • The Magic Cycle, AI Detectors, and the End of Writing as Proof - With Clay Shirky
    2026/01/06

    Sara’s back from visiting her New Jersey Christian high school—where she gets hit with a genuinely spicy question: How do you reconcile AGI with faith? From there, we go straight into the bigger theme of the episode: education is getting stress-tested by AI in real time.

    Becca breaks down Google’s “magic cycle” — the uncomfortable lesson of inventing transformative research (Transformers, BERT) and then watching someone else ship it to the world. Sara shares what she’s learning about research workflows moving beyond “just chat,” including multi-agent setups for planning, searching, reading, and synthesis.

    Then we’re joined by Clay Shirky, Vice Provost for AI & Technology in Education at NYU, to talk about what’s actually happening on campuses: why students integrated AI “sideways” before institutions could respond, why AI detectors are a trap (and who they harm most), and why the real shift isn’t assignments — it’s assessment.

    We dig into what comes next: oral exams, in-class scaffolding, and designing learning around productive struggle—not just output. And we end in a place that’s both funny and unsettling: the rise of AI “personalities,” RLHF as “reinforcement learning for human flattery,” and what it means when a machine is always on your side.

    Because whether we like it or not: a well-written paragraph is no longer proof of human thought.

    🧠 Foundational AI papers & breakthroughs
    1. Attention Is All You Need (Transformers, 2017)
    2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

    🧪 Google’s “Magic Cycle” framing
    1. Accelerating the magic cycle of research breakthroughs and real-world applications (Google Research)
    2. How AI Drives Scientific Research with Real-World Benefit (Google Blog)

    🚨 Shipping pressure: Bard + “code red” era
    1. Reuters: Alphabet shares dive after Bard flubs info, ~$100B market cap hit (https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/)
    2. Google Blog: Bard updates from Google I/O 2023 (https://blog.google/technology/ai/google-bard-updates-io-2023/)
    54 min
No reviews yet.