Episodes

  • Enhancing Field-Oriented Control of Electric Drives with Tiny Neural Networks
    2025/11/04

    Ever wondered how the electric vehicles of tomorrow will squeeze every last drop of efficiency from their batteries? The answer lies at the fascinating intersection of artificial intelligence and motor control.

    The electrification revolution in automotive technology demands increasingly sophisticated control systems for permanent magnet synchronous motors - the beating heart of electric vehicle propulsion. These systems operate at mind-boggling speeds, with control loops closing every 50 microseconds (that's 20,000 times per second!), and future systems pushing toward 10 microseconds. Traditional PID controllers, while effective under steady conditions, struggle with rapid transitions, creating energy-wasting overshoots that drain precious battery life.

    Our groundbreaking research presents a neural network approach that drastically reduces these inefficiencies. By generating time-varying compensation factors, our AI solution cuts maximum overshoots by up to 70% in challenging test scenarios. The methodology combines MathWorks' development tools with ST's microcontroller technology in a deployable package requiring just 1,700 parameters - orders of magnitude smaller than typical deep learning models.
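
    For a sense of scale, a fully connected network's parameter count is just weights plus biases summed over layers. The sketch below illustrates how small a ~1,700-parameter budget really is; the layer sizes are hypothetical, chosen only for illustration, and are not taken from the paper.

```python
def mlp_param_count(layer_sizes):
    """Parameters of a fully connected net: weights + biases per layer."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical layout: motor-state inputs -> two hidden layers ->
# d/q-axis compensation factors. Sizes chosen only to show how
# little a ~1,700-parameter budget buys.
layers = [12, 48, 24, 2]
total = mlp_param_count(layers)
print(f"{layers} -> {total} parameters")  # -> 1850 with this layout
```

    Even at 32 bits per weight, a network this size occupies under 8 KB, which is what makes deployment on an automotive-grade microcontroller plausible at all.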

    While we've made significant progress, challenges remain. Current deployment achieves 70-microsecond inference times on automotive-grade microcontrollers, still shy of our ultimate 10-microsecond target. Hardware acceleration represents the next frontier, along with exploring higher-level models and improved training methodologies. This research opens exciting possibilities for squeezing maximum efficiency from electric vehicles, turning previously wasted energy into extended range and performance. Curious about the technical details? Our complete paper is available on arXiv - scan the QR code to dive deeper into the future of smart motor control.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    17 min
  • Transforming Human-Computer Interaction with OpenVINO
    2025/10/28

    The gap between science fiction and reality is closing rapidly. Remember when talking to computers was just a fantasy in movies? Raymond Lo's presentation on building chatbots with OpenVINO reveals how Intel is transforming ordinary PCs into extraordinary AI companions.

    Imagine generating a photorealistic teddy bear image in just eight seconds on your laptop's integrated GPU. Or having a natural conversation with a locally-running chatbot that doesn't need cloud connectivity. These scenarios aren't futuristic dreams – they're happening right now thanks to breakthroughs in optimizing AI models for consumer hardware.

    The key breakthrough isn't just raw computational power but intelligent optimization. When Raymond's team first attempted to run large language models locally, they didn't face computational bottlenecks – they hit memory walls. Models simply wouldn't fit in available RAM. Through sophisticated compression techniques like quantization, they've reduced memory requirements by 75% while maintaining remarkable accuracy. The Neural Network Compression Framework (NNCF) now allows developers to experiment with different compression techniques to find the perfect balance between size and performance.
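
    The 75% figure follows directly from the storage format: quantizing float32 weights (4 bytes each) to int8 (1 byte) cuts memory four-fold. NNCF's actual API is not shown here; this is a minimal NumPy sketch of symmetric per-tensor quantization, purely to make the arithmetic concrete.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: float32 weights -> int8 + scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)
q, scale = quantize_int8(w)

saved = 1 - q.nbytes / w.nbytes                  # 1 byte vs 4 -> 0.75
err = np.abs(dequantize(q, scale) - w).max()     # bounded by scale / 2
print(f"memory saved: {saved:.0%}, max abs error: {err:.4f}")
```

    Production frameworks refine this basic idea with per-channel scales, calibration data, and mixed precision, which is where the "perfect balance between size and performance" search happens.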

    What makes this particularly exciting is the deep integration with Windows and other platforms. Microsoft's AI Foundry now incorporates OpenVINO technology, meaning when you purchase a new PC, it comes ready to deliver optimized AI experiences out of the box. This represents a fundamental shift in how we think about computing – from tools we command with keyboards and mice to companions we converse with naturally.

    For developers, OpenVINO offers a treasure trove of resources – hundreds of notebooks with examples ranging from computer vision to generative AI. This dramatically accelerates development cycles, turning what used to take months into weeks. As Raymond revealed, even complex demos can be created in just two weeks using these tools.

    Ready to transform your PC into an AI powerhouse? Explore OpenVINO today and join the revolution in human-computer interaction. Your next conversation partner might be sitting on your desk already.

    43 min
  • Applying GenAI to Mice Monitoring
    2025/10/14

    The AI revolution isn't just for tech giants with unlimited computing resources. Small and medium enterprises represent a crucial frontier for edge generative AI adoption, but they face unique challenges when implementing these technologies. This fascinating exploration takes us into an unexpected application: smart laboratory mouse cages enhanced with generative AI.

    Laboratory mice represent valuable assets in pharmaceutical research, with their welfare being a top priority. While fixed-function AI already monitors basic conditions like water and food availability through camera systems, the next evolution requires predicting animal behavior and intentions. By analyzing just 16 frames of VGA-resolution video, this edge-based system can predict a mouse's next actions, potentially protecting animals from harm when human intervention isn't immediately possible due to clean-room protocols.

    The technical journey demonstrates how generative AI can be scaled appropriately for edge devices. Starting with a 240-million-parameter model (far smaller than headline-grabbing LLMs), the team optimized down to 170 million parameters while actually improving accuracy. Running on a Raspberry Pi 5 without hardware acceleration, the system achieves inference times under 300 milliseconds – and could potentially reach real-time performance (30 ms) with specialized hardware. The pipeline combines three generative neural networks: a video-to-text model, an OPT transformer, and a text-to-speech component for natural interaction.
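
    The input budget is easy to sanity-check: assuming raw 8-bit RGB (the talk's exact preprocessing is not specified here), 16 VGA frames come to roughly 14.7 MB per inference window, and the reported 300 ms inference sits a factor of ten from the 30 ms real-time target.

```python
# Back-of-the-envelope input budget, assuming uncompressed 8-bit RGB VGA.
frames, width, height, channels = 16, 640, 480, 3

window_bytes = frames * width * height * channels
print(f"raw input window: {window_bytes / 1e6:.1f} MB")  # 14.7 MB

# Reported inference time vs. the real-time target mentioned above:
reported_ms, target_ms = 300, 30
print(f"speedup still needed: {reported_ms / target_ms:.0f}x")
```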

    This case study provides valuable insights for anyone looking to implement edge generative AI in resource-constrained environments. While currently limited to monitoring single mice, the approach demonstrates that meaningful AI applications don't require supercomputers or billion-parameter models – opening doors for businesses of all sizes to harness generative AI's potential.

    17 min
  • Simple, Cost-Effective Vision AI Solutions at the Edge
    2025/10/07

    Sony's revolutionary IMX500 stands at the forefront of a quiet revolution in edge computing and smart city technology. This isn't just another image sensor—it's the first to integrate AI processing directly on the chip, transforming how visual data becomes actionable intelligence while preserving privacy and minimizing infrastructure requirements.

    The power of this innovation lies in its elegant simplicity. Rather than sending complete images to cloud servers or external GPUs for processing, the IMX500 performs AI inference locally and transmits only the resulting metadata. This approach slashes bandwidth requirements to mere kilobytes, dramatically reduces power consumption, and—perhaps most critically—protects individual privacy by ensuring that identifiable images never leave the device. For urban environments where surveillance concerns often clash with safety imperatives, this represents a breakthrough compromise.
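
    To see why the savings are so dramatic, compare one uncompressed VGA frame against the kind of detection metadata an on-sensor model might transmit instead. The metadata schema below is hypothetical, purely to illustrate the orders of magnitude involved.

```python
import json

# One raw VGA frame vs. a small detection summary (hypothetical schema).
frame_bytes = 640 * 480 * 3                       # ~0.9 MB uncompressed

metadata = {
    "ts": 1712345678,
    "detections": [
        {"class": "vehicle", "score": 0.93, "bbox": [112, 40, 310, 198]},
        {"class": "pedestrian", "score": 0.88, "bbox": [420, 90, 470, 220]},
    ],
}
meta_bytes = len(json.dumps(metadata).encode())

print(f"frame: {frame_bytes} B, metadata: {meta_bytes} B, "
      f"ratio: {frame_bytes // meta_bytes}x")
```

    A three-orders-of-magnitude reduction per frame is what makes low-power mesh backhaul over streetlights feasible, and since pixels never leave the sensor, the privacy property falls out of the architecture rather than a policy.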

    Real-world deployments already demonstrate the technology's transformative potential. In Lakewood, Colorado, where a one-mile stretch of road had become notorious for traffic fatalities, Sony's solution achieved 100% accuracy in identifying dangerous situations—outperforming three competing technologies while costing less. Through a partnership with Itron, these sensors can be seamlessly deployed using existing streetlight infrastructure, creating mesh networks of intelligent sensors without requiring expensive new installation work or dedicated power sources. This practical approach to deployment makes citywide implementation financially viable even for budget-constrained municipalities.

    The implications extend far beyond traffic monitoring. From retail analytics to manufacturing quality control, the same core technology can be applied wherever visual intelligence provides value. By bringing AI to the edge in a form factor that addresses privacy, power, and practical deployment challenges, Sony has created a foundation for the next generation of smart infrastructure. Explore how this technology could transform your environment—whether an urban center, commercial space, or industrial facility—by leveraging the power of visual intelligence without the traditional limitations.

    19 min
  • Low-Code/No-Code Platform for Developing AI Algorithms
    2025/09/30

    Revolutionizing edge computing just got easier. This eye-opening exploration of STMicroelectronics' ST-IoT Craft platform reveals how everyday developers can now harness the power of artificial intelligence without writing a single line of code.

    The modern IoT landscape presents a paradox: billions of devices generate zettabytes of valuable data, yet transforming that raw information into intelligent systems remains frustratingly complex. ST's innovative low-code/no-code platform elegantly solves this problem by distributing intelligence across three key components: smart sensors with embedded AI algorithms, intelligent gateways that filter data transmission, and cloud services that handle model training and adaptation.

    At the heart of this revolution is truly remarkable in-sensor AI technology. Imagine sensors that don't just collect data but actually think – detecting whether a laptop is on a desk or in a bag, whether an industrial asset is stationary or being handled, or whether a person is walking or running. These decisions happen directly on the sensor itself, dramatically reducing power consumption and network traffic while enabling real-time responses. The platform offers 31 different features, including mean, variance, energy in bands, peak-to-peak values, and zero crossings, that can be automatically selected and applied to your data.
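
    Several of the features named above are one-liners over a sensor window. The sketch below is illustrative only, not ST's implementation; it computes a few of them on a synthetic 5 Hz vibration signal.

```python
import numpy as np

def window_features(x, fs=100.0):
    """A few of the statistical features named above, computed on one
    sensor window sampled at fs Hz (illustrative, not ST's code)."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2          # power per frequency bin
    bin_hz = fs / len(x)                            # frequency resolution
    return {
        "mean": float(np.mean(x)),
        "variance": float(np.var(x)),
        "peak_to_peak": float(np.ptp(x)),
        "zero_crossings": int(np.sum(np.diff(np.sign(x)) != 0)),
        # "energy in bands"-style feature: total power below 10 Hz.
        "band_energy_0_10hz": float(spectrum[: int(10 / bin_hz)].sum()),
    }

t = np.arange(0, 1, 1 / 100.0)                      # 1 s window at 100 Hz
feats = window_features(np.sin(2 * np.pi * 5 * t))  # 5 Hz test tone
print(feats)
```

    A decision tree over features like these is tiny enough to run inside the sensor's own logic, which is exactly why the heavy lifting never has to reach the gateway or cloud.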

    What makes ST-IoT Craft truly accessible is its browser-based interface with six pre-built examples spanning industrial and consumer applications. Users can visualize sensor data in real-time, train models with a single button click, and deploy finished solutions directly to hardware – all without diving into complex code. The platform even handles the intricate details of filter selection, feature extraction, window length optimization, and decision tree generation automatically.

    Ready to transform your IoT projects with embedded intelligence? Visit st.com, search for ST-IoT Craft, and discover how you can teach your sensors to think – no coding required.

    20 min
  • Stochastic Training for Side-Channel Resilient AI
    2025/09/23

    Protecting valuable AI models from theft is becoming a critical concern as more computation moves to edge devices. This fascinating exploration reveals how sophisticated attackers can extract proprietary neural networks directly from hardware through side-channel attacks - not as theoretical possibilities, but as practical demonstrations on devices from major manufacturers including Nvidia, ARM, NXP, and Google's Coral TPUs.

    The speakers present a novel approach to safeguarding existing hardware without requiring new chip designs or access to proprietary compilers. By leveraging the inherent randomness in neural network training, they demonstrate how training multiple versions of the same model and unpredictably switching between them during inference can significantly reduce vulnerability to these attacks.

    Most impressively, they overcome the limitations of edge TPUs by cleverly repurposing ReLU activation functions to emulate conditional logic on hardware that lacks native support for control flow. This allows implementation of security measures on devices that would otherwise be impossible to modify. Their technique achieves approximately 50% reduction in side-channel leakage with minimal impact on model accuracy.
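
    The speakers' exact construction is not reproduced here, but the general ReLU-as-conditional idea can be sketched with a standard big-M selector: for non-negative values bounded by a known constant, a binary selector picks between two branches using only ReLU and arithmetic – operations an edge TPU does support.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_mux(s, a, b, big_m=1e3):
    """Branch-free select: returns a where s == 1, b where s == 0.
    Assumes 0 <= a, b < big_m. One way ReLU can stand in for control
    flow on hardware with no native conditionals (illustrative sketch,
    not the speakers' exact construction)."""
    return relu(a - big_m * (1 - s)) + relu(b - big_m * s)

# Randomly switch between two trained variants' outputs per inference,
# the kind of unpredictable selection the defense relies on:
rng = np.random.default_rng()
out_a, out_b = np.array([3.0, 5.0]), np.array([7.0, 2.0])
s = float(rng.integers(0, 2))            # coin flip: which variant "runs"
print(relu_mux(s, out_a, out_b))
```

    Because the selection is data-independent and changes per inference, the power and electromagnetic traces an attacker averages over no longer correspond to a single set of weights.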

    The presentation walks through the technical implementation details, showing how layer-wise parameter selection can provide quadratic security improvements compared to whole-model switching approaches. For anyone working with AI deployment on edge devices, this represents a critical advancement in protecting intellectual property and preventing system compromise through model extraction.

    Try implementing this stochastic training approach on your edge AI systems today to enhance security against physical attacks. Your valuable AI models deserve protection as they move closer to end users and potentially hostile environments.

    19 min
  • Beyond TOPS: A Holistic Framework for Edge AI Metrics
    2025/09/16

    Beyond raw computational power lies the true measure of AI system effectiveness. Austin Lyons, founder of ChipStrat and analyst at Creative Strategies, challenges us to rethink how we evaluate Edge AI technologies in this thought-provoking talk on metrics that truly matter.

    For too long, the industry has obsessed over trillions of operations per second (TOPS) as the gold-standard measurement. Lyons expertly deconstructs this limited view, introducing us to a more nuanced framework that considers what users actually experience. As generative AI moves to edge devices, shouldn't we care more about tokens per second—how quickly systems respond to our prompts—than abstract computational capabilities?

    But speed alone doesn't tell the whole story. What happens when your lightning-fast AI assistant drains your battery in an hour? Lyons presents "tokens per second per watt" as a crucial metric for practical, everyday AI use. He also introduces the concept of "vibes"—those harder-to-quantify qualities like perceived intelligence and personality that make or break user adoption, drawing a compelling parallel to why people choose Apple products despite comparable technical specs from competitors.
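
    The metric itself is simple division, which is what makes trade-offs easy to state. The device numbers below are hypothetical, purely to show how a slower but frugal accelerator can win on Lyons' measure.

```python
def tokens_per_second_per_watt(tokens_per_second, watts):
    """Lyons' efficiency metric: throughput normalized by power draw."""
    return tokens_per_second / watts

# Hypothetical devices: a fast-but-hungry NPU vs. a slower, frugal one.
fast = tokens_per_second_per_watt(40.0, 20.0)    # 2.0 tok/s/W
frugal = tokens_per_second_per_watt(15.0, 3.0)   # 5.0 tok/s/W
print(f"fast: {fast} tok/s/W, frugal: {frugal} tok/s/W")
```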

    The most valuable insight comes from Lyons' call for cross-functional collaboration in AI system design. When hardware engineers, software developers, designers, and product managers work in isolation, optimizing for their preferred metrics, the end result often disappoints users. By approaching AI development holistically, teams can make informed trade-offs that deliver better overall experiences—sometimes with less powerful but more efficient models.

    Ready to transform how you think about AI performance? Subscribe to Austin's newsletter at chipstrat.com where he regularly shares insights on the evolving intersection of semiconductors, AI, and product strategy.

    12 min
  • How AI Zip Is Shrinking Models for a Device-First Future
    2025/09/09

    What if we could put powerful AI anywhere—from underwater cameras monitoring fish to the phone in your pocket? That's the vision driving AI Zip, a company building ultra-efficient AI models that can run on virtually any device.

    During this fascinating conversation, we explore how AI Zip is pioneering a different path in artificial intelligence by focusing on extreme compression and efficiency. While most companies pursue ever-larger models requiring massive cloud infrastructure, AI Zip is shrinking intelligence to fit where it's needed most—at the edge where 99% of data originates.

    The numbers are staggering: edge devices collectively possess about 100 times more computing power than all cloud resources combined, yet 95% of AI workloads run in the cloud. This disconnect represents an enormous untapped opportunity that AI Zip is addressing through innovations in model compression and deployment.

    We dive into real-world applications, including an award-winning smart fish farming solution developed with SoftBank that uses underwater computer vision to optimize feeding and dramatically reduce waste. This practical example shows how specialized AI can deliver enormous value when deployed directly at the data source.

    Perhaps most thought-provoking is the efficiency gap between artificial and natural intelligence. While our most advanced AI systems require thousands of watts of power, the human brain operates on just 20 watts—about the same as a smartphone. Similarly, a jumping spider can navigate complex 3D environments with millions of neurons, while autonomous vehicles need billions of parameters. Closing this three-orders-of-magnitude efficiency gap represents an exciting frontier for AI research.

    The future of AI won't just be about bigger models—it will be hierarchical, with specialized intelligence at every level. Subscribe now to hear more conversations with pioneers who are reimagining what's possible at the intersection of AI and edge computing.

    30 min