Have we finally figured out how to make efficient AI?

About this content

A fantastic research paper published in this month's Nature Computational Science suggests a solution may be at hand for the staggering inefficiency of generative AI.

The transformer architecture behind large language models (LLMs) requires that each new token (generally part of a word) be predicted based on all the tokens that came before it.
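
To see why that gets expensive, here is a toy NumPy sketch (illustrative only, not code from the paper or the episode): at every generation step, the new token's query must attend over every token produced so far, so the work per token grows with the length of the context.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every query attends to every key."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
d = 16                       # head dimension
context = []                 # grows by one cached (key, value) pair per token

for step in range(8):        # generate 8 tokens autoregressively
    q = rng.normal(size=(1, d))                       # query for the next token
    context.append((rng.normal(size=d), rng.normal(size=d)))
    K = np.stack([k for k, _ in context])
    V = np.stack([v for _, v in context])
    out = attention(q, K, V)                          # attends to ALL prior tokens
    print(f"step {step}: attended over {len(context)} cached tokens")
```

Every one of those cached keys and values has to be fetched from memory into the processor at every step, which is exactly the data-shuffling bottleneck described next.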

Power demands for this process are huge. Shuffling data back and forth between memory and processors is a costly pipeline, and when you need it to run quickly, those energy demands stack up fast.

And in an AI arms race, where everyone wants bigger and better models and ever more powerful compute is required to stretch their limits, the dependence on energy to power, and cool, those processing units only grows.

But what if there were a different, and better, way to make AI work? That's the driving force behind the work of Nathan Leroux and his team, who propose a totally different paradigm: analog in-memory computing.
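
For a rough intuition, here is a minimal sketch of the general crossbar principle behind analog in-memory computing (the broad idea, not the authors' specific hardware design): a weight matrix is stored as physical conductances, the input is applied as voltages, and Ohm's and Kirchhoff's laws perform the entire matrix-vector product where the data already lives, with no shuffling between memory and processor.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(64, 64))   # weights, stored as crossbar conductances
x = rng.normal(size=64)         # input, applied as voltages on the rows

# Digital reference: weights move from memory to a processor and back.
y_digital = W @ x

# Analog in-memory version: Ohm's law (I = G * V) performs each multiply
# in place, and Kirchhoff's current law sums every column, so the whole
# matrix-vector product happens in one physical read-out. Analog devices
# are imperfect, modeled here as small Gaussian noise on the conductances.
G = W + rng.normal(scale=0.02, size=W.shape)
y_analog = G @ x

err = np.linalg.norm(y_analog - y_digital) / np.linalg.norm(y_digital)
print(f"relative error from analog noise: {err:.3%}")
```

The trade-off the sketch hints at is the core tension of the approach: you eliminate the memory-to-processor data movement, but you accept some analog noise in exchange.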

And that's exactly what we're discussing today.

Zip yourself into that flame-retardant suit: things are about to get hot in here...

Ping me at dave@wordandmouth.com to get on the show or talk about AI in your world.
