Speed vs Quality, Hallucinations, and the AI Learning Rabbit Hole

About this content

Sara breaks down perceptrons (1957!) as the tiny “matrix of lights” idea that eventually became neural networks—then we jump straight into modern AI chaos.

Oboe’s Nir Zuckerman walks us through the messy reality of building consumer-grade AI for education: every feature is a tradeoff between loading fast and being good, and “just use a better model” doesn’t magically solve it. We talk guardrails, web search, multi-model pipelines, and why learning tools should feel lightweight—more like curiosity than homework. Also: Becca’s “how does a computer work?” obsession and a book recommendation that might change your life.

🧠 AI Concepts & Foundations
  1. Perceptron (Wikipedia); a minimal code sketch follows this list
  2. Neural Networks Explained
  3. Scaling Laws for Neural Language Models
  4. FLOPS (Floating Point Operations Per Second)
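If you want to poke at the perceptron idea Sara describes, here is a minimal sketch of the classic 1957 learning rule in plain Python. The AND truth table, learning rate, and epoch count are illustrative choices for the demo, not anything from the episode.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule.
# The dataset, learning rate, and epoch count are illustrative choices.

def train_perceptron(samples, lr=0.1, epochs=20):
    """Learn weights and a bias with the classic perceptron update rule."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire (1) if the weighted sum clears the threshold.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred
            # Nudge each weight and the bias toward reducing the error.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy example: learn logical AND from its truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```

Because AND is linearly separable, this single-layer rule converges; the famous limitation (covered in the Perceptrons book below) is that problems like XOR need more than one layer.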

🎓 Learning, Education & AI
  1. Oboe
  2. AI as a Personal Tutor (Overview)
  3. Why Tutors Are So Effective

🏗️ Building AI Products
  1. Speed vs Quality Tradeoffs in LLM Apps (see the pipeline sketch after this list)
  2. LLM Orchestration Patterns
  3. Retrieval-Augmented Generation (RAG)
  4. LLM Hallucinations: Causes & Mitigation
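To make the speed-vs-quality and RAG ideas above concrete, here is a hypothetical sketch of a retrieval-augmented pipeline that routes short questions to a fast model and longer ones to a slower, stronger one. The model names, routing rule, and keyword retrieval are illustrative stand-ins, not Oboe's actual architecture.

```python
# Hypothetical sketch of a retrieval-augmented, multi-model pipeline that
# trades speed against quality. Everything here is an illustrative stand-in.

NOTES = [
    "A perceptron is a single-layer linear classifier from 1957.",
    "RAG grounds a model's answer in retrieved documents to reduce hallucination.",
    "Scaling laws relate model loss to parameters, data, and compute.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(query.lower().split())
    scored = sorted(NOTES, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def call_model(name: str, prompt: str) -> str:
    """Stub for an LLM call; swap in a real client here."""
    return f"[{name}] answer grounded in: {prompt[:60]}..."

def answer(query: str) -> str:
    # Ground the prompt in retrieved context to cut down on hallucinations.
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    # Route short, simple questions to a fast model; longer ones to a stronger,
    # slower one. This is the loading-fast vs being-good tradeoff in one line.
    model = "fast-model" if len(query.split()) < 8 else "strong-model"
    return call_model(model, prompt)

print(answer("What is a perceptron?"))
```

A production version would swap the stubs for real embedding search and model calls; the point is that each quality-raising step (retrieval, bigger models, extra guardrail passes) adds latency, which is exactly the tension discussed in the episode.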

📚 Books Mentioned
  1. Code: The Hidden Language of Computer Hardware and Software
  2. Perceptrons

🧪 History of AI