
The Phront Room - Practical AI


By: Nathan Rigoni

Overview

AI for everyone – data‑driven leaders, teachers, engineers, program managers and researchers break down the latest AI breakthroughs and show how they’re applied in real‑world projects. From AI in aerospace and education to image‑processing tricks and hidden‑state theory, we’ve got something for PhD tech lovers and newcomers alike. Join host Nathan Rigoni for clear, actionable insights. Keywords: artificial intelligence, machine learning, AI research, AI in engineering, AI ethics, AI podcast, tech news.
Episodes
  • I Think Therefore I Am
    2026/03/02

    Chain of Thought: From Descartes to Machine Minds. Hosted by Nathan Rigoni.

    In this episode we travel from the candle‑lit study of the 17th‑century philosopher René Descartes, who stripped away every belief to find the one certainty “I think, therefore I am,” to today’s glowing screens where large language models generate their own inner monologue. How does the age‑old philosophical quest for self‑knowledge map onto a model that writes “Let’s think step‑by‑step” and then follows its own reasoning chain? Can a machine’s recursive self‑talk be considered true thought, or is it merely sophisticated pattern matching? Join us as we untangle the threads of doubt, recursion, and chain‑of‑thought prompting to ask whether AI can ever achieve a genuine inner voice.

    What you will learn

    • The origins of chain‑of‑thought prompting and its connection to the ReAct framework.
    • How “system 1” fast intuition and “system 2” slow deliberation map onto LLM reasoning processes.
    • The mechanics of recursive prompting: scratch‑pad tags, tool calls, observations, and how models iterate toward a final answer.
    • Key philosophical questions about self‑awareness, consciousness, and the “I think, therefore I am” argument applied to artificial agents.
    • Practical prompt‑engineering techniques to make LLMs reason more reliably in real‑world tasks.
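    The recursive prompting mechanics described above (scratch‑pad thoughts, tool calls, observations, iteration toward a final answer) can be sketched as a tiny ReAct‑style loop. This is an illustrative toy, not the episode’s code: `fake_model` stands in for a real LLM API call, and the single `calculator` tool and the `Thought:`/`Action:`/`Observation:` tags follow the pattern popularized by the ReAct paper.

    ```python
    # Toy ReAct-style loop: the "model" is scripted, the tool is a calculator.
    def calculator(expression: str) -> str:
        """Toy tool: evaluate a simple arithmetic expression."""
        return str(eval(expression, {"__builtins__": {}}))

    # Scripted replies standing in for an LLM reasoning step by step.
    SCRIPTED = [
        "Thought: I need to compute 17 * 23.\nAction: calculator[17 * 23]",
        "Thought: The observation gives the product.\nFinal Answer: 391",
    ]

    def fake_model(prompt: str, step: int) -> str:
        return SCRIPTED[step]

    def react_loop(question: str, max_steps: int = 4) -> str:
        prompt = f"Question: {question}\n"
        for step in range(max_steps):
            reply = fake_model(prompt, step)
            prompt += reply + "\n"          # the model sees its own prior reasoning
            if "Final Answer:" in reply:
                return reply.split("Final Answer:")[1].strip()
            if "Action: calculator[" in reply:
                expr = reply.split("Action: calculator[")[1].rstrip("]")
                prompt += f"Observation: {calculator(expr)}\n"  # feed result back
        return "no answer"

    print(react_loop("What is 17 * 23?"))  # prints 391
    ```

    The key design point the episode highlights is the feedback loop: each observation is appended to the growing prompt, so the model reasons over its own prior steps rather than answering in one shot.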

    Resources mentioned

    • Wei et al., “Chain‑of‑Thought Prompting Elicits Reasoning in Large Language Models,” 2022 (arXiv).
    • “ReAct: Synergizing Reasoning and Acting in Language Models.”
    • Daniel Kahneman, Thinking, Fast and Slow.
    • Thomas Metzinger, Being No One.
    • OpenAI function‑calling guide and examples of tool use in ReAct‑style agents.

    Why this episode matters
    Understanding how LLMs construct and follow a chain of thought bridges the gap between classic epistemology and modern AI. Grasping these recursive reasoning patterns not only improves model performance on complex tasks, but also forces us to confront deeper questions about consciousness, agency, and what it truly means to “think.” As AI systems become partners in decision‑making, having a clear picture of their inner processes is essential for responsible deployment, ethical design, and informed public discourse.

    Subscribe for more philosophical deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com.

    Keywords: chain of thought, recursion, ReAct framework, large language models, prompt engineering, AI self‑awareness, consciousness, René Descartes, “I think therefore I am”, system 1 system 2, philosophical AI, artificial intelligence reasoning.

    30 min
  • Basics of Retrieval Augmented Generation (RAG)
    2026/03/01
    10 min
  • Basics of Prompting
    2026/02/21

    Prompting and Context: The Key to Great LLM Interaction. Hosted by Nathan Rigoni.

    In this episode we unpack the art and science of prompting large language models. Why does a simple change of context turn a generic answer into a precise, on‑target response? We explore the rise (and controversy) of “prompt engineering,” the power of zero‑shot prompting, and how contextual alignment can replace costly fine‑tuning. By the end, you’ll understand how to craft prompts that guide a model’s imagination rather than let it wander—so, are you ready to master the language of LLMs?

    What you will learn

    • The fundamental definition of prompting and why context is the driving force behind model behavior.
    • How zero‑shot prompting works: getting a model to extrapolate from its training without any additional fine‑tuning.
    • Techniques for building effective system prompts, personas, and assumed contexts.
    • Common pitfalls that lead to hallucinations and how to avoid them with clear contextual framing.
    • When to rely on contextual alignment versus full model fine‑tuning (and why more than 90% of cases don’t need the latter).
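    The persona and system‑prompt techniques listed above can be sketched as a small message‑building helper, in the role/content chat‑message shape used by most LLM chat APIs. The `build_messages` helper and the historian persona are illustrative assumptions, not from the episode.

    ```python
    # Sketch: zero-shot prompting via a persona-based system message.
    # Context (who the model is, how to behave) goes in the system message;
    # the actual question goes in the user message -- no fine-tuning needed.
    def build_messages(persona: str, task: str, question: str) -> list:
        system = (
            f"You are {persona}. {task} "
            "If you are unsure, say so instead of guessing."  # hedge against hallucination
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ]

    messages = build_messages(
        persona="a careful naval historian",
        task="Answer questions about the age of sail, citing dates where known.",
        question="Why were frigates faster than ships of the line?",
    )
    for m in messages:
        print(m["role"], "->", m["content"][:60])
    ```

    Swapping the persona string (historian, pirate, sci‑fi author, as in the episode’s examples) changes the assumed context and therefore the model’s behavior, with no change to the model itself.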

    Resources mentioned

    • “Retrieval‑Augmented Generation” overview (conceptual explanation).
    • Papers on zero‑shot learning and contextual prompting (e.g., “Prompting GPT‑3 to Reason”).
    • Example system prompts for persona‑based interactions (historian, pirate, sci‑fi author).

    Why this episode matters
    Understanding prompting is essential for anyone building or using AI products today. A well‑crafted prompt can unlock a model’s hidden capabilities, reduce costs by avoiding unnecessary fine‑tuning, and dramatically improve reliability—especially in high‑stakes domains like finance, healthcare, or education. Conversely, vague prompts lead to hallucinations and mistrust. This knowledge equips you to harness LLMs responsibly and effectively.

    Subscribe for more AI insights, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com to share topics you’d like covered.

    Keywords: prompting, context, zero‑shot learning, assumed context, system prompt, prompt engineering, LLM hallucination, retrieval‑augmented generation, fine‑tuning vs. contextual alignment, large language models.

    11 min