Super Prompt: Generative AI


Author: Tony Wan

About this content

Examining generative AI—not to hype breakthroughs or warn of apocalypse, but to understand how things actually work. Mental models over hot takes. Technology specifics over marketing fog.


Welcome to Super Prompt. Hosted by Tony Wan, ex-Silicon Valley insider.


For The Independents—people who think for themselves, refuse narrative capture, and value depth over certainty.


Independent analysis. Unsponsored. Weekly.


The future belongs to better questions.

© 2025 Super Prompt Productions
Management & Leadership, Leadership, Politics & Government, Economics
Episodes
  • AI Agents at Work: Scaffold Required
    2025/12/03

    We review four clips from the Dwarkesh Patel Podcast with Satya Nadella, Microsoft's CEO. I highly recommend Dwarkesh’s show—technical & nerdy, but excellent.

    Satya talks about scaffolding—the software wrapped around AI models to make them actually work.

    So we speak with someone building that scaffolding: Neil McKechnie runs two AI-first startups as a CTO.

    He discusses how he orchestrates up to twelve different language models—GPT-5, Claude, Gemini, Llama, Mistral, Cohere, Perplexity.

    We discuss what it actually takes to build production systems with LLMs today, and what that reveals about the agent future we're being pitched. (A rough sketch of this kind of scaffolding appears after this entry.)

    Dwarkesh's Podcast:

    https://www.youtube.com/@DwarkeshPatel




    To stay in touch, sign up for our newsletter at https://www.superprompt.fm

    40 min
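
    The "scaffolding" in question is, in practice, ordinary application code wrapped around model APIs: logic that routes a request to a suitable backend, retries transient failures, validates the output, and falls back to another model when one misbehaves. The Python sketch below shows only the shape of that idea; the backend functions, routing rule, and retry policy are invented for illustration and are not taken from the episode or from any vendor SDK.

        # Rough sketch of LLM "scaffolding": route a prompt to one of several
        # backends, retry transient failures, and fall back to another model.
        # The backends here are stand-in functions, not real vendor SDK calls.
        import random
        import time

        def fast_model(prompt: str) -> str:
            # Stand-in for a cheap, fast model endpoint.
            return f"[fast] answer to: {prompt}"

        def strong_model(prompt: str) -> str:
            # Stand-in for a slower, more capable model that occasionally fails.
            if random.random() < 0.3:
                raise RuntimeError("backend unavailable")
            return f"[strong] answer to: {prompt}"

        BACKENDS = {"fast": fast_model, "strong": strong_model}

        def route(prompt: str) -> str:
            # Trivial routing rule: send long prompts to the stronger model.
            return "strong" if len(prompt) > 80 else "fast"

        def call_with_retry(name: str, prompt: str, attempts: int = 3) -> str:
            for attempt in range(attempts):
                try:
                    reply = BACKENDS[name](prompt)
                    if reply.strip():  # minimal output validation
                        return reply
                except RuntimeError:
                    time.sleep(0.1 * (attempt + 1))  # back off before retrying
            raise RuntimeError(f"{name} failed after {attempts} attempts")

        def answer(prompt: str) -> str:
            primary = route(prompt)
            try:
                return call_with_retry(primary, prompt)
            except RuntimeError:
                # Fall back to the other backend rather than surfacing an error.
                fallback = "fast" if primary == "strong" else "strong"
                return call_with_retry(fallback, prompt)

        if __name__ == "__main__":
            print(answer("Summarize why agent scaffolding matters, in two sentences."))

    Real orchestration layers add more than this (tool calls, structured-output checks, cost and latency budgets), but routing, retries, and fallback are the core of what "wrapping software around the model" means here.
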
  • Agentic AI
    2025/11/07


    AI agents from OpenAI, Google, and Anthropic promise to act on your behalf—booking flights, handling tasks, making decisions. What kind of agency do these systems actually have? And whose interests are they serving?

    Enterprise AI agents are already deployed in customer support, code generation, and task automation. Consumer agents—ChatGPT Agent Mode, personal task assistants—face a wider gap between marketing promises and actual capabilities.

    The alignment problem: agents need access to your calendar, email, and personal preferences to help you effectively. But the agent that knows you well enough to serve you is also positioned to steer you. When you delegate decisions to an agent, who decides what success looks like?

    To stay in touch, sign up for our newsletter at https://www.superprompt.fm

    16 min
  • AI Safety: Constitutional AI vs Human Feedback
    2024/06/17

    With great power comes great responsibility. How do leading AI companies implement safety and ethics as language models scale? OpenAI uses Model Spec combined with RLHF (Reinforcement Learning from Human Feedback). Anthropic uses Constitutional AI. The technical approaches to maximizing usefulness while minimizing harm. Solo episode on AI alignment. (A toy sketch of the constitutional critique-and-revise loop follows this entry.)

    REFERENCE

    OpenAI Model Spec

    https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

    Anthropic Constitutional AI

    https://www.anthropic.com/news/claudes-constitution



    To stay in touch, sign up for our newsletter at https://www.superprompt.fm

    17 min
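
    For context on the distinction this episode draws: RLHF trains against human ratings of model outputs, while Constitutional AI has the model critique and revise its own drafts against a written set of principles. The Python sketch below shows only the shape of that critique-and-revise loop; the principles and every function are stand-ins invented for illustration, not Anthropic's implementation or any real API.

        # Toy sketch of a Constitutional-AI-style critique-and-revise loop:
        # instead of a human rating every output (as in RLHF), the model checks
        # its own draft against written principles and rewrites it when a
        # critique is raised. All functions here are stand-ins, not real calls.
        from typing import Optional

        PRINCIPLES = [
            "Do not provide instructions that could cause harm.",
            "Acknowledge uncertainty instead of guessing.",
        ]

        def draft_response(prompt: str) -> str:
            # Stand-in for the model's initial answer.
            return f"Draft answer to: {prompt}"

        def critique(response: str, principle: str) -> Optional[str]:
            # Stand-in critic. A real system would ask the model itself whether
            # the response violates the principle and why. Here, nothing is flagged.
            return None

        def revise(response: str, criticism: str) -> str:
            # Stand-in reviser. A real system would ask the model to rewrite the
            # response so that the criticism no longer applies.
            return f"{response} [revised because: {criticism}]"

        def constitutional_pass(prompt: str) -> str:
            response = draft_response(prompt)
            for principle in PRINCIPLES:
                problem = critique(response, principle)
                if problem is not None:
                    response = revise(response, problem)
            return response

        if __name__ == "__main__":
            print(constitutional_pass("How should I store user passwords?"))
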
No reviews yet