• Beyond Instructions: How Beads Lets AI Agents Build Like Engineers
    2025/11/26

    In this episode of AI Tinkerers One-Shot, Joe sits down with Steve Yegge—engineer and creator of the Beads framework—to explore how open source tools are transforming the way we build with AI. Steve shares the story behind Beads, a new framework that gives coding agents memory and task management, enabling them to work longer, smarter, and more autonomously. From his days at Amazon and Google to leading engineering at Sourcegraph, Steve reveals how Beads is already reshaping developer workflows and why it’s gaining hundreds of contributors in just weeks.

    What you’ll learn:

    - How Beads gives coding agents “session memory” and lets them manage complex, multi-step projects.

    - Why Steve believes the future of engineering is about guiding and supervising AI—rather than just writing code.

    - The evolution from chaotic markdown files to structured, issue-based workflows.

    - Techniques for multimodal prompting, automated screenshot validation, and “landing the plane” for session cleanup.

    - The challenges and breakthroughs in deploying AI tools at scale within organizations.

    - How Beads and similar frameworks are making it easier for both junior and senior developers to thrive in the age of AI.

    Whether you’re a developer, tinkerer, or just curious about the next wave of AI-assisted coding, this deep dive with Steve Yegge will show you what’s possible now—and what’s coming next.

    💡 Resources:

    Beads – https://github.com/steveyegge/beads

    Steve Yegge – https://www.linkedin.com/in/steveyegge/ & https://x.com/Steve_Yegge

    AI Tinkerers – https://aitinkerers.org

    Subscribe for more conversations with the builders shaping the future of AI and robotics!

    00:00 - Introduction to Steve Yegge and Beads Framework

    02:10 - Steve's Background and Sourcegraph Amp

    08:00 - Building a React Game Client with AI Agents

    15:36 - Multimodal Prompting and Screenshot Validation

    23:16 - Code Review Techniques and Agent Confidence

    32:01 - The Evolution of Beads: From Markdown Chaos to Issue Tracking

    43:11 - Landing the Plane: Automated Session Cleanup

    52:09 - Deploying AI Tools in Organizations

    58:59 - Code Review Bottlenecks and Graphite Solution

    01:02:57 - Closing Thoughts on AI-Assisted Development

    1 hr 3 min
  • The Future of Home Robotics: Axel Peytavin on Building Robots That Feel Alive
    2025/10/17

    What if your home robot didn’t just clean, but felt alive — learning, adapting, and becoming part of your family?

    In this episode of AI Tinkerers One-Shot, Joe talks with Axel Peytavin, Co-founder & CEO of Innate, about his mission to create robots that aren’t just functional, but truly responsive companions. From his early start coding at age 11 to building one of the first GPT-4 Vision-powered robots, Axel shares how his team is creating an open-source robotics kit and one of the first agentic frameworks for robots — giving developers the tools to teach, customize, and build the next generation of embodied AI.

    What you’ll learn:

    - Why Axel believes “robots that feel alive” are the future — beyond flashy demos of backflips and kung fu.

    - How Innate is making robotics accessible with an open-source hardware and SDK platform.

    - The breakthroughs (and roadblocks) in fine motor manipulation, autonomy, and real-time learning.

    - How teleoperation, deep learning, and reinforcement learning are shaping the next era of household robots.

    - Axel’s vision for robots as companions: cleaning, tidying, assisting — and even calling for help in emergencies.

    Whether you’re a tinkerer, developer, or just curious about how soon robots will fold your laundry, this deep dive shows what’s possible now — and what’s coming next.

    💡 Resources:

    - Innate Robotics – https://innate.bot/

    - Axel Peytavin’s Twitter – https://x.com/ax_pey/

    - AI Tinkerers – https://aitinkerers.org

    Subscribe for more conversations with the builders shaping the future of AI and robotics!

    00:00 Axel’s mission — building robots that feel alive

    00:57 The open-source kit that lets any tinkerer train new behaviors

    05:00 Why applied mathematics is the foundation for AI + robotics

    08:17 Early projects: Minecraft plugins with 200K+ downloads

    11:04 Innate’s vision for teachable household robots

    12:01 Why fine-motor manipulation is the real breakthrough, not backflips

    15:19 How deep learning is driving rapid robotics progress

    17:11 Teleoperation as the engine for data collection and training

    23:21 Why tidying up, laundry, and dishes are the killer apps for home robots

    32:24 Live teleoperation demo of Maurice in action

    36:08 Breaking down the system architecture — Wi-Fi, WebSockets, Python SDK

    41:40 Maurice shows delicate fine-motor skills with object pickup

    43:53 How Innate built one of the first agentic frameworks for robots

    49:50 The rise of an open-source robotics community around Maurice

    57:03 Viral GPT-4 Vision robot demo — and what it revealed about the future

    1 hr 18 min
  • Building GPT-2 in a Spreadsheet — Everything You Wanted to Know About LLMs (But Were Afraid to Ask)
    2025/10/17

    Learn how to demystify large language models by building GPT-2 from scratch — in a spreadsheet. In this episode, MIT engineer Ishan Anand breaks down the inner workings of transformers in a way that’s visual, interactive, and beginner-friendly, yet deeply technical for experienced builders.

    What you’ll learn:

    • How GPT-2 became the architectural foundation for modern LLMs like ChatGPT, Claude, Gemini, and LLaMA.

    • The three major innovations since GPT-2 — mixture of experts, RoPE (rotary position embeddings), and advances in training — and how they changed AI performance.

    • A clear explanation of tokenization, attention, and transformer blocks that you can see and manipulate in real time.

    • How to implement GPT-2’s core in ~600 lines of code and why that understanding makes you a better AI builder.

    • The role of temperature, top-k, and top-p in controlling model behavior — and how RLHF reshaped the LLM landscape.

    • Why hands-on experimentation beats theory when learning cutting-edge AI systems.
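    As a rough illustration of the sampling knobs mentioned above (a minimal sketch, not Ishan’s spreadsheet implementation): temperature rescales the logits, top-k keeps only the k most likely tokens, and top-p keeps the smallest set of tokens whose cumulative probability reaches p.

```python
import numpy as np

def sample_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Sample a token id from raw logits using temperature, top-k, and top-p."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())    # softmax, numerically stable
    probs /= probs.sum()

    order = np.argsort(probs)[::-1]          # token ids, most likely first
    if top_k > 0:
        order = order[:top_k]                # keep only the k most likely tokens
    if top_p < 1.0:
        cum = np.cumsum(probs[order])
        cutoff = np.searchsorted(cum, top_p) + 1  # smallest set covering top_p mass
        order = order[:cutoff]

    kept = probs[order] / probs[order].sum() # renormalize the surviving mass
    return int(rng.choice(order, p=kept))
```

    Lower temperature sharpens the distribution; top-k and top-p both truncate its tail, which is why they are often combined in practice.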

    Ishan Anand is an engineer, MIT alum, and prolific AI tinkerer who built a fully functional GPT-2 inside a spreadsheet — making it one of the most accessible ways to learn how LLMs work. His work bridges deep technical insight with practical learning tools for the AI community.

    Key topics covered:

    • Step-by-step breakdown of GPT-2 architecture.

    • Transformer math and attention mechanics explained visually.

    • How modern LLMs evolved from GPT-2’s original design.

    • Practical insights for training and fine-tuning models.

    • Why understanding the “old” models makes you better at using the new ones.

    This episode of AI Tinkerers One-Shot goes deep under the hood with Ishan to show how LLMs really work — and how you can start building your own.

    💡 Resources:

    • Ishan Anand LinkedIn – https://www.linkedin.com/in/ishananand/

    • AI Tinkerers – https://aitinkerers.org

    • One-Shot Podcast – https://one-shot.aitinkerers.org/

    👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

    1 hr 16 min
  • From SOP to API in Seconds: Steve Krenzel on Automating Business Logic with AI
    2025/10/17

    In this episode of AI Tinkerers Global Stage, we go deep with Steve Krenzel, founder of LogicLoop and formerly of the CTO’s office at Brex. Steve shows us how his company turns standard operating procedures (SOPs) into fully functioning APIs—complete with schema generation, test cases, structured outputs, and backtesting—within seconds.

    We break down:

    1. Why Steve avoids agentic frameworks

    2. How LogicLoop automates 100K+ tasks/month for real customers

    3. The power of structured output for reasoning and reliability

    4. How prompt caching and append-only templates unlock scale

    5. His open-source coding agent that builds software from scratch

    6. How they achieved sub-2% error rates, beating human teams

    7. His famous Prompt Engineering Guide that went viral in 2023
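    The structured-output idea in point 3 can be sketched as schema-enforced parsing: the model is asked to emit a fixed JSON shape, which is validated before anything downstream trusts it. The field names below are illustrative, not LogicLoop’s actual format.

```python
import json

# Expected shape of the model's reply (illustrative fields).
SCHEMA_FIELDS = {"decision": str, "reasoning": str, "confidence": float}

def parse_structured(raw: str) -> dict:
    """Parse a model response and enforce the expected fields and types."""
    obj = json.loads(raw)
    for field, typ in SCHEMA_FIELDS.items():
        if field not in obj:
            raise ValueError(f"missing field: {field}")
        if not isinstance(obj[field], typ):
            raise ValueError(f"{field} should be {typ.__name__}")
    return obj
```

    Forcing the model through a fixed schema is also what makes backtesting possible: every output is machine-comparable against historical decisions.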

    If you’re building with LLMs, designing autonomous workflows, or just want to see what the future of developer productivity looks like—this is a must-watch.

    1 hr 6 min
  • From Viral AI Demos to YC: Robert Lukoszko
    2025/10/17

    Discover how Robert Lukoszko, CEO of Stormy AI, is building the future of AI-powered marketing by automating influencer outreach end-to-end. This interview goes deep into his journey from viral AI demos to Y Combinator, revealing critical insights for AI builders and founders.

    You’ll learn:

    • The surprising challenges and limitations of building AI applications that deeply integrate with operating systems.

    • Why local AI models, despite their appeal, often struggle to compete with cloud-based solutions for real-world business cases.

    • Robert’s unique approach to AI-assisted development, leveraging tools like Claude 3.7 for rapid prototyping and efficient coding.

    • How Stormy AI uses advanced AI to find niche influencers, analyze engagement, and automate outreach, transforming traditional marketing.

    • The strategic importance of distribution and market fit over pure technological innovation for venture-scale AI companies.

    Robert Lukoszko, previously co-founder of Fixkey AI (acquired) and an alumnus of Y Combinator (S24 with Stormy AI, W22 with ngrow.ai), shares his extensive experience in applying AI to new modalities and building high-growth startups.

    This episode of AI Tinkerers One-Shot offers a practical look at the technical and entrepreneurial realities of building in the generative AI space.

    💡 Resources:

    • Stormy AI - https://stormy.ai

    • Robert Lukoszko’s LinkedIn - linkedin.com/in/robert-lukoszko

    • AI Tinkerers - https://aitinkerers.org

    • One-Shot Podcast - https://one-shot.aitinkerers.org/

    Social Media:

    @AITinkerers

    @stormy_hq

    @Karmedge

    👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

    00:00 – Introduction & Background

    02:38 – Visual AI, Demos & Startup Idea

    06:27 – Local vs. Cloud Models

    10:07 – Desktop AI App & Context Importance

    14:11 – Building the App & OS Integration

    23:13 – Ambient AI & Contextual Vision

    32:17 – Stormy AI Pivot & Demo

    38:35 – AI Mindset & Content Creation

    43:57 – AI Model Comparison & Cost

    56 min
  • Talk to Your Dog: AI Unlocks Pet Minds
    2025/10/17

    Discover how AI is bridging the communication gap between humans and dogs, unlocking deeper insights into canine emotions, intentions, and even health. In this One-Shot interview, Praful Mathur, founder of Sarama, shares his groundbreaking work on a full-stack AI system designed to interpret dog vocalizations and body language. Praful, an experienced builder in AI, reveals how his innovative approach could transform our understanding of our furry companions.

    What you’ll learn:

    • The surprising history and potential of human-animal communication through AI.

    • How to build custom AI models for complex, real-world data like dog vocalizations.

    • Praful’s unique strategy for collecting and annotating large datasets from pet owners.

    • The practical applications of AI in detecting subtle health issues in dogs.

    • How AI tools can accelerate product development, from industrial design to strategic planning.

    Key topics covered:

    • The sophistication of dog understanding vs. current AI models.

    • Leveraging dogs as ‘biological peripherals’ for detection.

    • Training original AI models on novel datasets (SVMs, KNNs, transformers).

    • Hardware and software architecture for real-time animal data collection.

    • Using AI for rapid industrial design and company building tasks.

    Join Joe from AI Tinkerers One-Shot as he takes a deep dive with Praful Mathur, an innovator pushing the boundaries of AI to create meaningful connections between humans and animals. This conversation explores the technical challenges and profound implications of building AI that truly understands our pets.

    💡 Resources:

    • Sarama - https://www.sarama.ai/

    • Sarama on IG - https://instagram.com/withsarama

    • Praful Mathur’s LinkedIn - linkedin.com/in/praful-mathur

    • AI Tinkerers - https://aitinkerers.org

    • One-Shot Podcast - https://one-shot.aitinkerers.org/

    Social Media: @AITinkerers @PrafulMathur

    👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

    00:00 Introduction

    00:15 Joe’s Reflection on the Interview

    01:32 Introducing Praful Mathur & Sarama

    01:55 Understanding Dog Communication

    03:39 Beyond Words: Dog Communication

    04:37 Dogs as AI Peripherals

    05:27 History of Animal Communication Tech

    06:47 Sarama: What They’ve Built Today

    07:47 Sarama Hardware & Data Collection

    09:31 Sarama App & Cloud Processing

    11:30 Understanding Dog Behavior & Emotion

    12:57 Training Original AI Models for Dogs

    16:32 Multimodal Data & Sensors

    18:49 Confidence & Data Needs for Dog AI

    20:19 ML Stack & Training Approaches

    22:06 Live Demo: Dog Bark Analysis

    28:20 Dog’s Vocabulary & Alerts

    30:13 AI for Early Health Detection

    32:13 AI in Meta Process & Design

    35:04 Open Source Strategy & Data Collection

    37:18 Communicating AI Insights to Humans

    38:36 Agentic Coding Workflow

    44:19 Model Comparison: Gemini vs. Claude

    46:39 How Praful Found AI Tinkerers

    47:48 Conclusion

    48 min
  • Build Better AI Agents with RL & Fine-Tuning (Kyle from OpenPipe)
    2025/10/17

    What you’ll learn:

    • How reinforcement learning can reduce AI agent error rates by up to 60% and drastically lower inference costs.

    • The critical difference between supervised fine-tuning and RL for agentic workflows, and why RL is essential for true agent reliability.

    • A practical, code-level walkthrough of building and training an email search agent on a 14-billion-parameter open-source model that outperforms OpenAI’s GPT-3.5.

    • Strategies for generating high-quality synthetic data and designing nuanced reward functions with ‘partial credit’ to effectively train your agents.

    • Key use cases where RL fine-tuning delivers the most significant benefits, including real-time voice agents and high-volume applications.
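    A reward function with “partial credit,” as described above, can be sketched like this. The fields, weights, and judge are hypothetical illustrations, not OpenPipe’s actual rubric or API.

```python
def score_rollout(answer, sources, expected_answer, expected_message_id, judge_correct):
    """Rubric-style reward with partial credit for one email-search rollout.

    `judge_correct` is a callable (e.g. an LLM judge) returning True/False.
    All field names and weights here are illustrative.
    """
    reward = 0.0
    if answer is not None:
        reward += 0.1                      # partial credit: produced *an* answer
    if expected_message_id in (sources or []):
        reward += 0.3                      # partial credit: retrieved the right email
    if answer is not None and judge_correct(answer, expected_answer):
        reward += 0.6                      # full marks only for a correct answer
    return reward
```

    The point of partial credit is gradient signal: a rollout that finds the right email but answers wrongly scores higher than one that fails outright, so the policy can improve step by step rather than only on perfect trajectories.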

    Kyle Corbitt is the founder of OpenPipe, a platform dedicated to helping enterprises build and deploy customized AI models using advanced fine-tuning and reinforcement learning. He’s a seasoned builder who has been working at the frontier of fine-tuning since before public APIs existed.

    Key topics covered:

    • The limitations of off-the-shelf LLMs for agent reliability and how RL solves them.

    • The importance of latency and cost optimization in real-world AI deployments.

    • Detailed explanation of the agentic workflow and tool calling in an email search bot.

    • The Enron email dataset as a realistic environment for agent training.

    • OpenPipe’s open-source Agent Reinforcement Trainer (ART) library for building RL agents.

    • The iterative process of data generation, rubric-based scoring, and model updates.

    This episode of AI Tinkerers One-Shot goes under the hood with Kyle to share practical learnings for the community.

    💡 Resources:

    • OpenPipe Website - https://openpipe.ai

    • Kyle Corbitt LinkedIn - https://www.linkedin.com/in/kcorbitt/

    • AI Tinkerers - https://aitinkerers.org

    • One-Shot Podcast - https://one-shot.aitinkerers.org/

    Social Media: @AITinkerers @OpenPipeAI @corbtt

    👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

    00:00 Introduction

    01:09 Welcome Kyle Corbitt, Founder of OpenPipe

    01:55 What OpenPipe Does

    02:31 OpenPipe’s Journey and YC Experience

    04:13 Email Search Bot Project Overview

    05:19 Why Fine-Tuning for Email Search

    06:22 Email Search Bot: Queries and Results

    09:23 On-Premise Deployment and Data Sensitivity

    10:45 Agent Trace Example and Tooling

    13:55 Using the Enron Dataset

    15:13 Reinforcement Learning Fundamentals

    17:01 Synthetic Data Generation with Gemini 2.5 Pro

    18:51 Reliable Q&A Pairs and Data Scale

    21:59 Fine-Tuning Impact on Model Performance

    22:25 RL Adoption in Industry and Community

    24:37 Rollout Function and Agent Implementation

    27:52 Rubric and Reward Calculation for RL

    30:39 Training Loop and Model Updates

    33:52 RL Fine-Tuning vs. OpenAI’s Fine-Tuning

    40:38 Time Commitment for RL Projects

    41:55 Use Cases for RL Fine-Tuning

    45:37 OpenPipe’s Offerings: Open Source, White Glove Service

    47:07 Kyle’s Side Tinkering and Future of AI

    49:59 Discovering AI Tinkerers

    51 min
  • Dynamic LLM Inference: Tomasz Kolinko's Effort Engine
    2025/10/17

    Discover a groundbreaking approach to optimizing Large Language Models with Tomasz Kolinko, a true OG tinkerer and entrepreneur. In this One-Shot interview, Tomasz unveils his 'Effort Engine,' a novel algorithm that dynamically selects which computations are performed during LLM inference, allowing for significant speed improvements while maintaining surprising output quality. Learn how this method goes beyond traditional quantization by dynamically managing computations and even enabling partial model loading to save VRAM.

    Tomasz shares his unique benchmarking techniques, including the use of Kullback-Leibler divergence and heat maps, offering a new lens to understand how models behave under reduced 'effort.' This conversation provides practical insights into the underlying mechanics of AI models and offers a fully open-source project for practitioners to experiment with.
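    For reference, the Kullback-Leibler divergence used in this kind of benchmarking measures how far the reduced-effort model’s next-token distribution drifts from the full model’s. The sketch below is illustrative and not taken from the Effort Engine codebase.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) in nats between two next-token probability distributions."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    p = p / p.sum()                          # normalize, in case inputs are raw counts
    q = q / q.sum()
    # eps guards against log(0) for tokens one distribution assigns zero mass
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

    Comparing the full-precision distribution P against the reduced-effort distribution Q position by position gives the kind of heat map Tomasz describes: near-zero KL means skipping those computations barely changed the model’s predictions there.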

    💡 Resources:

    • Tomasz Kolinko's GitHub - https://kolinko.github.io/effort/about.html

    • The Basics - https://kolinko.github.io/effort/equations.html

    • AI Tinkerers - https://aitinkerers.org

    • One-Shot Podcast - https://one-shot.aitinkerers.org/

    Social Media Tags: @AITinkerers @kolinko

    👍 Like this video if you found it valuable, and subscribe to AI Tinkerers One-Shot for more conversations with innovators building the future of AI!

    00:00 Introduction

    01:07 Welcome Tomasz Kolinko

    02:11 Introducing Effort Engine

    03:10 Dynamic Inference Explained

    05:56 How the Algorithm Works

    08:07 Speed vs. Quality Trade-offs

    11:37 Dynamic Weight Loading & VRAM

    15:24 Effort Engine Demo

    26:01 Model Breakdown Observations

    29:49 Architecture & Benchmarks

    32:17 Kullback-Leibler Divergence

    39:22 Heat Map Visualization

    41:07 Community & Future Work

    48 min