Embedded AI Podcast

Author: Embedded AI Podcast

Overview

A podcast about using AI in embedded systems -- either as part of your product, or during development.
Episodes
  • E14 Kwabena Agyeman on OpenMV
    2026/05/01

    We sit down with Kwabena Agyeman, co-founder of OpenMV, to explore how microcontrollers have evolved from simple 8-bit chips to AI-capable systems that rival desktop computers. Kwabena walks us through OpenMV's journey from the CMUcam days to their latest products—the OpenMV Cam AE3 and N6—which pack neural network accelerators, image signal processors, and H.264 encoders into single-chip packages.

    What makes these systems remarkable isn't just raw performance (250+ gigaops for AI inference), but what they enable: battery-powered computer vision deployments with no infrastructure requirements. Kwabena demonstrates running a complete web server with live video streaming—all in MicroPython on a microcontroller. We discuss the practical implications: doorbell cameras that don't phone home, parking lot monitors that run on solar panels, and industrial vision systems that don't require conduit runs.
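To make the demo concrete: a minimal HTTP server really does fit in a few lines of MicroPython, since the `socket` module is available on-device. The sketch below is ours, not OpenMV's actual demo code, and omits the camera and streaming parts; it just shows the shape of serving a page from a microcontroller.

```python
import socket

def http_response(body):
    """Build a minimal HTTP/1.0 response; runs on CPython and MicroPython."""
    header = ("HTTP/1.0 200 OK\r\n"
              "Content-Type: text/html\r\n"
              "Content-Length: %d\r\n\r\n" % len(body))
    return header.encode() + body.encode()

def serve_once(port=8080):
    """Accept a single connection and reply with a static page (illustrative)."""
    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", port))
    s.listen(1)
    conn, _addr = s.accept()
    conn.recv(1024)  # read and discard the request
    conn.send(http_response("<h1>Hello from a microcontroller</h1>"))
    conn.close()
    s.close()
```

A real deployment would loop over connections and interleave camera capture, but the point stands: no Linux, no web framework, just sockets.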

    The conversation touches on hard technical choices (why debayer images even for AI?), the underappreciated value of MicroPython for complex applications, and the infrastructure costs that kill many promising AI deployments. Kwabena also previews what's coming: transformer support on microcontrollers and WiFi HaLow for long-range, high-bandwidth connectivity. For anyone working on edge AI or embedded vision, this episode offers both practical insights and a glimpse of what's possible when hardware acceleration meets thoughtful software design.
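For listeners unfamiliar with debayering: a camera sensor doesn't output RGB pixels but a mosaic of single-color samples (the Bayer pattern), which must be reconstructed before most models can consume it. The toy sketch below uses the crudest possible reconstruction (collapsing each 2x2 RGGB block into one pixel); a real ISP interpolates instead, but this shows why the step exists at all.

```python
def debayer_rggb(raw):
    """Collapse each 2x2 RGGB block of a raw Bayer mosaic into one RGB pixel.

    `raw` is a list of rows of sensor values laid out as:
        R G R G ...
        G B G B ...
    This decimating approach halves resolution; real image signal
    processors interpolate (e.g. bilinearly) to keep full resolution.
    """
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2  # average the two greens
            b = raw[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out
```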

    Key Topics:

    • [00:03] Introduction to OpenMV and the evolution from "impossible" computer vision on microcontrollers to AI-capable systems
    • [00:05] Technical specs of the AE3 and N6: 250 gigaops performance, 5-64MB RAM, image signal processors, and H.264 encoding
    • [00:12] Why image processing steps like debayering matter even for AI applications
    • [00:18] The infrastructure cost problem: why power, connectivity, and deployment logistics kill many AI projects
    • [00:28] Running MicroPython on microcontrollers: web servers, RTSP streaming, and complex applications without Linux
    • [00:35] Live demo: complete web interface with video streaming running on a microcontroller
    • [00:42] Real-world use cases: YOLO object detection, face tracking, drowsiness detection, and parking lot monitoring
    • [00:52] The future: transformer support on microcontrollers and WiFi HaLow for long-range connectivity

    Notable Quotes:

    "Ten years ago when we started, if you Googled for computer vision on microcontrollers, you got a single Stack Overflow reply about how that was impossible. Since then, a lot has changed." — Kwabena Ageyman

    "The product dies when you have to tell people: I want you to put 10,000 of these in the field. They look at the infrastructure cost and say, what does that look like end to end? Is that actually going to be a net benefit, or is it just sexy and looks cool?" — Kwabena Ageyman

    "Having AI on the edge actually unlocks privacy. It is a decision to collect all the data and store it forever. If you have Edge AI locally on these devices, the device manufacturer can say: we're actually not going to go into a format where we have infinite data collection of everything." — Kwabena Ageyman

    Resources Mentioned:

    • OpenMV - OpenMV's website with products, documentation, and community resources
    • Roboflow - Cloud platform for training computer vision models, partnered with OpenMV
    • Edge Impulse - Edge AI development platform, OpenMV partner for model training
    • WiFi HaLow - Long-range, low-power WiFi technology (up to 10 miles) mentioned for future connectivity
    • Embedded Online Conference - Conference where Kwabena, Luca, and Ryan will be speaking
    42 min
  • E13 Ryan visits Luca (and they talk about spec-driven development)
    2026/04/17

    For E13 we recorded live in Luca's garden in Munich, with Ryan dropping by ahead of Embedded World week. Ryan and Luca talk about spec-driven development in the AI era: where the discipline came from, what changes when an LLM is doing the typing, and the failure modes that show up over and over again in trainings. The short version: vibe coding will get you something that demos beautifully, but the moment a stranger asks "what does this button do?", it tends to expose how little was actually thought through.

    The conversation circles around a few recurring themes — the iterative loop you cannot skip even when the AI lets you, the temptation to one-shot whole projects, and the awkward fact that the AI itself seems to actively prefer working in waterfall mode. We also get into why requirements engineering and product ownership matter more (not less) with AI in the picture, why TDD doubles as a way of describing the goal to your assistant, and why the engineer staying in the loop — with that loop running tighter and faster — is what actually makes this work in practice. Plus an honest digression about all the ditches Luca has fallen into building Claude Code skills around his daily workflow.
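The TDD-as-description idea can be made concrete: the assertions are the requirement, written before (or handed to) whoever implements the function, human or AI. The `debounce` example below is ours, not from the episode, but it is the kind of small, precisely specified unit of work the hosts advocate.

```python
# Spec-as-test: in a TDD workflow these assertions come first and
# describe the goal unambiguously; the implementation follows them.
# `debounce` and its behavior are illustrative, not from the episode.

def debounce(samples, threshold=3):
    """Report a button as pressed only after `threshold` consecutive 1s."""
    run = 0
    for s in samples:
        run = run + 1 if s else 0
        if run >= threshold:
            return True
    return False

assert debounce([1, 1, 1])                   # stable press
assert not debounce([1, 0, 1, 0, 1])         # bouncing contact, never stable
assert debounce([0, 0, 1, 1, 1, 0])          # stable press after noise
```

Handing an assistant this spec instead of the prose sentence above it is exactly the "describing the goal" move the episode talks about.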

    Key Topics:

    • [02:24] Three artifacts of spec-driven development — and what each one means in the AI era
    • [05:45] Trainings, vibe-coded games, and the "what does this button do?" moment
    • [11:27] Long-lived branches, three months of code in an hour, and why both fail for the same reason
    • [22:25] Beningo's multiplier metaphor: what if the engineer's value is between -1 and 1?
    • [24:25] Engineering as conversations; curly brackets as a side effect
    • [27:17] Why requirements engineering and product ownership become more important with AI
    • [30:59] The AI wants waterfall — and you need to fight it
    • [33:54] One-shot prompts for whole projects: lying to yourself in five lines
    • [37:27] Staying in control: one unit of work, AI in the loop, integrate, done
    • [41:32] Filling ditches one at a time: Luca's Claude Code skills setup
    • [43:06] TDD as the act of describing the goal — to yourself and to the AI

    Notable Quotes:

    "If you've got a five-line prompt that generates 10,000 lines of code for you, then there's just going to be a lot of blank spots in there, a lot of ambiguity in there. That can't be good." — Luca Ingianni

    "I've had teammates create a long-lived branch and tell me 'I'll see you in a few months.' And I'm like — no. They don't understand how this is going to interact with the rest of the system. And you're basically doing that same thing — writing three months worth of code in an hour. Cool, now what?" — Ryan Torvik

    "Engineering is what happens when engineers talk to one another, and the differential equations and the C++ code are just side effects of those conversations." — Luca Ingianni

    Resources Mentioned:

    • embeddedai.academy — Luca's AI trainings for embedded teams
    • Agile Embedded Podcast — sister show, more on agile in the embedded world
    • Embedded World — annual embedded systems trade fair in Nuremberg, where Ryan was heading next
    • Claude Code — the AI coding tool Luca built his "skills" workflow around
    • Jacob Beningo's "multiplier" framing for AI in development teams (referenced from E12)
    45 min
  • E12 Learning AI-powered development with Jacob Beningo
    2026/04/03

    We sit down with Jacob Beningo, a real-time embedded systems consultant with 20 years of experience, to talk about what we've all learned teaching engineers to use AI in their development workflows. It turns out the hard part isn't getting AI to write code—it's all the systems engineering that comes before and after. We discuss common mistakes people make when starting out, like treating AI as a magic code generator instead of a pair programming partner, and why you absolutely cannot skip requirements, architecture, and critical thinking just because the AI can type faster than you.

    Jacob shares stories from his training sessions, including an AI that refused to follow test-driven development because "that would take too long." We explore why AI actually forces you to become a better engineer by taking away the dopamine hit of typing code yourself, and why IDE plugins might be leading people astray by keeping them at the wrong level of abstraction. The conversation gets real about costs—both in tokens and electricity bills—and why the "set it and forget it" YouTube hype doesn't match reality. If you're skeptical about AI in embedded systems, good—keep that skepticism. You're going to need it.

    Key Topics:

    • [03:15] The real challenge: systems engineering, not code generation
    • [08:45] Why requirements engineering skills matter more than ever with AI
    • [14:20] The push-button module exercise: spending a full day on design before any code
    • [16:30] When AI refuses to follow TDD: "That would take too long"
    • [22:40] The temperature sensor exercise: when tests pass but the code isn't production-ready
    • [28:15] Code is cheap, but experiments aren't free: finding the balance
    • [35:50] The hidden costs of AI: token budgets and rising electricity bills
    • [42:10] Why IDE plugins might be the wrong interface for AI-assisted development
    • [48:30] Using AI as a pair programming partner, not a code completion tool
    • [53:20] Keep your skepticism: why critical thinking is more important than ever

    Notable Quotes:

    "The AI finished writing the code and all the tests in like four seconds. I'm like, how was that so fast? Well, I just wrote all the code. You didn't follow the TDD process? No, that would take too long." — Jacob Beningo

    "If you throw in such a vague requirement, the thing can't read their mind and neither can they read its mind. So it's really just a matter of luck what you're going to get." — Luca Ingianni

    "The AI is the best puppy you'll ever have. It'll go pick up the stick for you. And you're like, no, not that stick. What do you mean not that stick? You didn't tell me which stick. I grabbed you the stick." — Ryan Torvik

    Resources Mentioned:

    • Jacob Beningo on LinkedIn - Daily posts about embedded systems development and modernization
    • Beningo.com - Jacob's consulting and training services for embedded systems
    • Embedded Software Academy - Training courses including AI for embedded systems development
    • Embedded Online Conference - Annual May conference for embedded systems education and community
    • Agile Embedded Podcast Slack - Community discussion channel mentioned by the hosts
    55 min