Cover art for "The Emergent AI"

The Emergent AI

Author: Justin Harnish

Overview

Welcome to The Emergent, the podcast where two seasoned AI executives unravel the complexities of Artificial Intelligence as a transformative force reshaping our world. Each episode bridges the gap between cutting-edge AI advancements, human adaptability, and the philosophical frameworks that drive them. Join us for high-level insights, thought-provoking readings, and stories of collaboration between humans and AI. Whether you're an industry leader, educator, or curious thinker, The Emergent is your guide to understanding and thriving in an AI-powered world.

Copyright 2026 Justin Harnish

Categories: Philosophy · Social Science · Science
Episodes
  • Vibe Coding to Agentic Engineering: When Everyone Can Build, What Matters Is What You Build
    2026/03/11
    The Emergent Podcast — Episode 9
    Vibe Coding to Agentic Engineering: When Everyone Can Build, What Matters Is What You Build

    "AI is now awake. And it's a big contrast to even two, three months ago." — Nick Baguley

    Listen on: Apple Podcasts · Spotify · YouTube · RSS
    Episode Duration: ~1 hr 40 min | Published: 2026 | Season 1, Episode 9

    🎙️ Episode Summary

    One tweet changed a word. The word changed an industry. The industry is changing what it means to build.

    In February 2025, Andrej Karpathy — co-founder of OpenAI and former head of AI at Tesla — published a single post coining the term "vibe coding": describe what you want in plain English, accept all AI-generated code without reading the diffs, and just… vibe. Twelve months later, it became the Collins Dictionary Word of the Year, 92% of U.S. developers use AI coding tools daily, 41% of all code is AI-generated — and Karpathy himself has already declared it passé, rebranding the practice as "agentic engineering."

    In Episode 9, Justin Harnish and Nick Baguley dig into what really happened in that extraordinary year. Both hosts share their personal workflows and real projects — including Justin's intermittent fasting app, his vision of a personal "digital brain" with AI-queryable embeddings, and Nick's AI-native marketplace designed for both human and agent users. They navigate the empirical gut-punch of the METR study (developers are actually 19% slower on mature codebases using AI), the existential labor-market questions (traditional programmer roles down 27.5% since ChatGPT's launch), and the philosophical territory that has been the Emergent Podcast's throughline since Episode 1: when code becomes a commodity, what becomes scarce?

    Their answer: responsible agency — the judgment to decide what should be built, for whom, and with what values. That, they argue, is the skill that neither automation nor benchmarks can yet replicate.

    📚 Resources & Reading List

    Every link mentioned or referenced in this episode.
    Organized by theme for your exploration.

    🔑 The Origin & The Debate (Required Reading)

    1. Andrej Karpathy's Original "Vibe Coding" Tweet (Feb 2, 2025) — The tweet that launched the year. Karpathy describes accepting all AI code without reading diffs, pasting errors back without comment, and letting the codebase grow beyond comprehension. Note the caveat he included that industry largely ignored: "not too bad for throwaway weekend projects."
    2. Karpathy's 2025 LLM Year in Review — bearblog.dev — His retrospective on vibe coding's arc from shower-thought tweet to Collins Dictionary Word of the Year. Key insight: "Code is suddenly free, ephemeral, malleable, discardable after single use." He also identifies Claude Code as the first convincing LLM agent.
    3. Karpathy on "Agentic Engineering" (Feb 2026) — The New Stack — One year after coining vibe coding, Karpathy declares it passé. His new frame — agentic engineering — emphasizes that professionals orchestrate AI agents 99% of the time, with zero compromise on software quality. The rebrand is the narrative bookend of this episode.
    4. Simon Willison — "Not All AI-Assisted Programming Is Vibe Coding" (Mar 2025) — simonwillison.net — The essential distinction: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding — that's using an LLM as a very fast typist." Also contains Willison's generous vision: "Everyone deserves the ability to automate tedious tasks."
    5. METR Study: AI Makes Experienced Devs 19% Slower (Jul 2025) — metr.org — The empirical gut-punch of the episode. 16 experienced open-source developers, 246 real-world tasks. They believed AI made them 20% faster; they were actually 19% slower on their own mature codebases. Full paper: arxiv.org/abs/2507.09089
    6. Vibe Coding — Wikipedia — Surprisingly rigorous. Tracks the full timeline, Lovable's 170 vulnerable apps, CodeRabbit's finding that AI code has 1.7× more major issues, Y Combinator stats (25% of W25 startups are 95% AI-coded), and the "vibe coding hangover" reported by Fast Company.

    📖 Supplemental: The Deeper Cuts

    1. Scott H. Young — "Is Vibe Coding the Future of Skilled Work?" — The variance argument: vibe coding may make software both much worse and much better simultaneously. Also argues that conceptual knowledge becomes more, not less, important when AI writes the code. A crucial counterweight to pure optimism.
    2. IBM — "What Is Vibe Coding?" — Enterprise-oriented overview. Useful on the agile alignment: vibe coding fits fast-prototyping and iterative development. Contains the key qualifier Nick and Justin both echo: "AI generates code, but creativity, goal alignment, and out-of-the-box thinking remain uniquely human."
    3. Google Cloud — "Vibe Coding Explained: Tools and Guides" — Practical tool comparison from Google's perspective — AI Studio, Firebase Studio, Gemini Code Assist. Useful for understanding which tool fits which use case.
    4. Software Engineering Job Market Outlook for 2026 — Final ...
    1 hr 43 min
  • Now You May Kiss the AI: Relationships and AI
    2026/01/26
    Episode 8 — Now You May Kiss the AI: Relationships and AI

    Hosts: Justin Harnish & Nick Baguley

    Episode Theme: Human–AI relationships, co-evolution, and the ethics of emotional engagement with non-human intelligence

    Episode Overview

    In Episode 8 of The Emergent Podcast, Justin Harnish and Nick Baguley explore one of the most intimate and underexamined frontiers of artificial intelligence: our emerging relationships with AI systems.

    This episode moves beyond abstract alignment theory into lived experience—how humans relate to AI when we know it is artificial, when we don’t, and how those interactions are actively shaping both sides of the relationship. From emotional attachment and parasocial bonds, to trust, deception, and the ethics of AI companionship, this conversation asks a core question of the Age of Inflection:

    What does it mean to be in relationship with an intelligence that is not conscious—but is becoming increasingly relational?

    Key Themes & Discussion Threads

    1. Relating to AI vs. Being Related By AI

    Justin and Nick draw a critical distinction between:

    1. Known-AI relationships (chatbots, copilots, advisors), and
    2. Unknown-AI relationships (emails, calls, avatars, and imitation without disclosure).

    As AI systems increasingly pass social and emotional Turing tests, the burden of trust shifts onto humans—often without our consent.

    2. Co-Adaptation: We Are Training Each Other

    A central thesis of the episode is behavioral co-evolution:

    1. Humans adapt language, tone, and expectations to AI.
    2. AI models simultaneously learn relational patterns from us.

    Every interaction becomes a micro-training event, shaping future norms, expectations, and behaviors—both human and machine.

    3. Sycophancy, Deference, and the Rise of the “Principal Advisor”

    The hosts examine why early AI systems became overly agreeable—and why frontier model providers are now reversing course.

    Emerging design patterns include:

    1. AI constitutions
    2. Rule-based behavioral scaffolds
    3. Opinionated, corrective, non-deferential advisors

    This marks a shift from “helpful assistant” toward trusted principal advisor, raising new relational and ethical questions.

    4. Anthropomorphism, Ghosts, and Alien Minds

    Nick introduces Andrej Karpathy’s framing of LLMs as:

    1. Cognitive operating systems
    2. Trained on the past but lacking lived experience
    3. More like “ghosts” than humans or animals

    This challenges intuitive assumptions about empathy, memory, and identity in AI systems.

    5. Embodiment, Emotion, and the Limits of Simulation

    Drawing heavily from neuroscience and philosophy, the episode interrogates whether:

    1. Consciousness requires embodiment
    1 hr 33 min
  • Machine Ethics: Do unto agents...
    2025/11/24
    🎙️ The Emergent Podcast – Episode 7
    Machine Ethics: Do unto agents...

    with Justin Harnish & Nick Baguley

    In Episode 7, Justin and Nick step directly into one of the most complex frontiers in emergent AI: machine ethics — what it means for advanced AI systems to behave ethically, understand values, support human flourishing, and possibly one day feel moral weight.

    This episode builds on themes from the AI Goals Forecast (AI-2027), embodied cognition, consciousness, and the hard technical realities of encoding values into agentic systems.

    🔍 Episode Summary

    Ethics is no longer just a philosophical debate — it’s now a design constraint for powerful AI systems capable of autonomous action. Justin and Nick unpack:

    1. Why ethics matters more for AI than any prior technology
    2. Whether an AI can “understand” right and wrong or merely behave correctly
    3. The technical and moral meaning of corrigibility (the ability for AI to accept correction)
    4. Why rules-based morality may never be enough
    5. Whether consciousness is required for morality
    6. How embodiment might influence empathy
    7. And how goals, values, and emergent behavior intersect in agentic AI

    They trace ethics from Aristotle to AI-2027’s goal-based architectures, to Damasio’s embodied consciousness, to Sam Harris’ view of consciousness and the illusion of self, to the hard problem of whether a machine can experience moral stakes.

    🧠 Major Topics Covered

    1. What Do We Mean by Ethics?

    Justin and Nick begin by grounding ethics in its philosophical roots:

    Ethos → virtue → flourishing.

    Ethics isn’t just rule-following — it’s about character, intention, and outcomes.

    They connect this to the ways AI is already making decisions in vehicles, financial systems, healthcare, and human relationships.

    2. AI Goals & Corrigibility

    AI-2027 outlines a hierarchy of AI goal types — from written specifications to unintended proxies to reward hacking to self-preservation drives.

    Nick explains why corrigibility — the ability for AI to accept shutdown or redirection — is foundational.

    Anthropic’s Constitutional AI makes an appearance as a real-world example.

    3. Goals vs. Values

    Justin distinguishes between:

    1. Goals: task-specific optimization criteria
    2. Values: deeper principles shaping which goals matter

    AI may follow rules without understanding values — similar to a child with chores but no moral context.

    This raises the key question:

    Can a system have values without consciousness?

    4. Is Consciousness Required for Ethics?

    A major thread of the episode:

    Is a non-conscious “zombie” AI capable of morality?

    5. Embodiment & Empathy

    Justin and Nick explore whether AI needs a body — or at least a simulated body — to:

    1. Learn empathy
    1 hr 37 min
No reviews yet