
EDGE AI POD


Author: EDGE AI FOUNDATION

About this content

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed covering all things from the world's largest EDGE AI community.

The feed features shows like EDGE AI Talks and EDGE AI Blueprints, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics.

Join us to stay informed and inspired!

© 2025 EDGE AI FOUNDATION
Episodes
  • Enhancing Field Oriented Control of Electric Drives with tiny Neural Network
    2025/11/04

    Ever wondered how the electric vehicles of tomorrow will squeeze every last drop of efficiency from their batteries? The answer lies at the fascinating intersection of artificial intelligence and motor control.

    The electrification revolution in automotive technology demands increasingly sophisticated control systems for permanent magnet synchronous motors - the beating heart of electric vehicle propulsion. These systems operate at mind-boggling speeds, with control loops closing every 50 microseconds (that's 20,000 times per second!), and future systems pushing toward 10 microseconds. Traditional PID controllers, while effective under steady conditions, struggle with rapid transitions, creating energy-wasting overshoots that drain precious battery life.

    Our groundbreaking research presents a neural network approach that drastically reduces these inefficiencies. By generating time-varying compensation factors, our AI solution cuts maximum overshoots by up to 70% in challenging test scenarios. The methodology combines MathWorks' development tools with ST's microcontroller technology in a deployable package requiring just 1,700 parameters - orders of magnitude smaller than typical deep learning models.
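    The idea of a tiny network scaling a classical controller's command can be sketched in a few lines. Everything below is illustrative, not from the paper: the `compensation` function is a hypothetical stand-in for the trained ~1,700-parameter model, and the gains and the first-order plant are made-up values chosen only to show the mechanism.

    ```python
    def pid_step(err, integ, prev_err, kp, ki, kd, dt):
        """One discrete PID update; returns (command, updated integral)."""
        integ += err * dt
        deriv = (err - prev_err) / dt
        return kp * err + ki * integ + kd * deriv, integ

    def compensation(derr):
        """Hypothetical stand-in for the tiny neural network: maps the
        error's rate of change to a time-varying gain in (0.5, 1.0],
        damping the command during fast transients to curb overshoot."""
        return 0.5 + 0.5 / (1.0 + abs(derr))

    dt = 50e-6                 # 50-microsecond control period (20,000 updates/s)
    setpoint, y = 1.0, 0.0     # illustrative step command and plant state
    integ, prev_err = 0.0, 0.0
    for _ in range(200):
        err = setpoint - y
        u, integ = pid_step(err, integ, prev_err,
                            kp=2.0, ki=50.0, kd=1e-4, dt=dt)
        u *= compensation((err - prev_err) / dt)   # NN-scaled command
        y += u * dt * 100.0    # crude first-order plant stand-in
        prev_err = err
    ```

    In the talk's setup the compensation factors are learned from motor data rather than hand-crafted, but the integration point is the same: the network modulates the classical controller's output rather than replacing it.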

    While we've made significant progress, challenges remain. Current deployment achieves 70-microsecond inference times on automotive-grade microcontrollers, still shy of our ultimate 10-microsecond target. Hardware acceleration represents the next frontier, along with exploring higher-level models and improved training methodologies. This research opens exciting possibilities for squeezing maximum efficiency from electric vehicles, turning previously wasted energy into extended range and performance. Curious about the technical details? Our complete paper is available on arXiv - scan the QR code to dive deeper into the future of smart motor control.

    Send us a text

    Support the show

    Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org

    17 min
  • Transforming Human-Computer Interaction with OpenVINO
    2025/10/28

    The gap between science fiction and reality is closing rapidly. Remember when talking to computers was just a fantasy in movies? Raymond Lo's presentation on building chatbots with OpenVINO reveals how Intel is transforming ordinary PCs into extraordinary AI companions.

    Imagine generating a photorealistic teddy bear image in just eight seconds on your laptop's integrated GPU. Or having a natural conversation with a locally-running chatbot that doesn't need cloud connectivity. These scenarios aren't futuristic dreams – they're happening right now thanks to breakthroughs in optimizing AI models for consumer hardware.

    The key breakthrough isn't just raw computational power but intelligent optimization. When Raymond's team first attempted to run large language models locally, they didn't face computational bottlenecks – they hit memory walls. Models simply wouldn't fit in available RAM. Through sophisticated compression techniques like quantization, they've reduced memory requirements by 75% while maintaining remarkable accuracy. The Neural Network Compression Framework (NNCF) now allows developers to experiment with different compression techniques to find the perfect balance between size and performance.
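    The 75% figure falls straight out of the arithmetic of int8 quantization: storing each FP32 weight (4 bytes) as one int8 byte plus a scale factor. A minimal NumPy sketch, assuming symmetric per-tensor quantization (NNCF supports more sophisticated schemes):

    ```python
    import numpy as np

    # Hypothetical weight matrix standing in for one model layer (FP32).
    rng = np.random.default_rng(0)
    w = rng.standard_normal((512, 512)).astype(np.float32)

    # Symmetric per-tensor int8 quantization: map weights onto [-127, 127].
    scale = np.abs(w).max() / 127.0
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

    # Dequantize for inference; rounding error is at most half a step.
    w_deq = w_int8.astype(np.float32) * scale
    max_err = np.abs(w - w_deq).max()

    savings = 1 - w_int8.nbytes / w.nbytes   # 4 bytes -> 1 byte per weight
    print(f"memory saved: {savings:.0%}")    # prints "memory saved: 75%"
    ```

    Going from 32-bit to 8-bit storage always yields exactly 75% savings on the weights themselves; the engineering work NNCF automates is keeping accuracy acceptable while doing so (per-channel scales, mixed precision, and so on).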

    What makes this particularly exciting is the deep integration with Windows and other platforms. Microsoft's AI Foundry now incorporates OpenVINO technology, meaning when you purchase a new PC, it comes ready to deliver optimized AI experiences out of the box. This represents a fundamental shift in how we think about computing – from tools we command with keyboards and mice to companions we converse with naturally.

    For developers, OpenVINO offers a treasure trove of resources – hundreds of notebooks with examples ranging from computer vision to generative AI. This dramatically accelerates development cycles, turning what used to take months into weeks. As Raymond revealed, even complex demos can be created in just two weeks using these tools.

    Ready to transform your PC into an AI powerhouse? Explore OpenVINO today and join the revolution in human-computer interaction. Your next conversation partner might be sitting on your desk already.


    43 min
  • Applying GenAI to Mice Monitoring
    2025/10/14

    The AI revolution isn't just for tech giants with unlimited computing resources. Small and medium enterprises represent a crucial frontier for edge generative AI adoption, but they face unique challenges when implementing these technologies. This fascinating exploration takes us into an unexpected application: smart laboratory mouse cages enhanced with generative AI.

    Laboratory mice represent valuable assets in pharmaceutical research, with their welfare being a top priority. While fixed-function AI already monitors basic conditions like water and food availability through camera systems, the next evolution requires predicting animal behavior and intentions. By analyzing just 16 frames of VGA-resolution video, this edge-based system can predict a mouse's next actions, potentially protecting animals from harm when human intervention isn't immediately possible due to clean-room protocols.

    The technical journey demonstrates how generative AI can be scaled appropriately for edge devices. Starting with a 240-million parameter model (far smaller than headline-grabbing LLMs), the team optimized down to 170 million parameters while actually improving accuracy. Running on a Raspberry Pi 5 without hardware acceleration, the system achieves inference times under 300 milliseconds – and could potentially reach real-time performance (30 ms) with specialized hardware. The pipeline combines three generative neural networks: a video-to-text model, an OPT transformer, and a text-to-speech component for natural interaction.
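    A three-stage pipeline like this is naturally expressed as a chain with a per-stage parameter and latency budget. The split below is purely illustrative – the talk gives only the 170M total and the sub-300 ms end-to-end figure, so the stage names' individual numbers are assumptions for the sketch:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Stage:
        """One generative model in the cage-monitoring pipeline.
        Per-stage figures are illustrative, not measured values."""
        name: str
        params_millions: int
        latency_ms: float

    # Hypothetical split of the 170M-parameter pipeline across three stages.
    pipeline = [
        Stage("video-to-text encoder", 100, 180.0),  # 16 VGA frames in
        Stage("OPT transformer", 60, 90.0),          # behaviour prediction
        Stage("text-to-speech", 10, 25.0),           # spoken alert out
    ]

    total_params = sum(s.params_millions for s in pipeline)   # 170M total
    total_latency = sum(s.latency_ms for s in pipeline)       # end-to-end ms
    ```

    Budgeting this way makes the optimization target concrete: any stage that grows pushes the chain past the real-time ceiling, which is why the team's parameter reduction mattered more than raw accelerator throughput.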

    This case study provides valuable insights for anyone looking to implement edge generative AI in resource-constrained environments. While currently limited to monitoring single mice, the approach demonstrates that meaningful AI applications don't require supercomputers or billion-parameter models – opening doors for businesses of all sizes to harness generative AI's potential.


    17 min
No reviews yet