
Beyond the Buzz: How AI Really Learns and How to Get Your Content Noticed
About this content
The episode "AI: How do models get updated?" explores how artificial intelligence (AI) models and search engines obtain and process information, dispelling the notion that AIs scan the web in real time for every answer. The author presents detailed responses from Gemini, ChatGPT, and Claude explaining their respective training processes. The AIs reveal that they are trained offline, and only periodically, on vast pre-processed datasets (such as Common Crawl or C4), unlike traditional search crawlers (such as Googlebot), which continuously index the web. For generative search, a process like Retrieval-Augmented Generation (RAG) is used: relevant information is retrieved from up-to-date search indexes and then synthesized by the language model into a cohesive response, with sources cited. In short, there is a clear distinction between data collection for AI training and the algorithms used for traditional and generative search, although all rely on the web's vast "library."
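
To make the retrieve-then-generate pattern described above more concrete, here is a minimal Python sketch of a RAG-style flow. Everything in it is illustrative and hypothetical: the `Document` type, `retrieve`, and `generate_answer` are invented names, the keyword-overlap ranking stands in for a real, continuously updated search index, and the string-stitching step stands in for an actual language model call; it is not any particular vendor's implementation.

```python
# Minimal sketch of a Retrieval-Augmented Generation (RAG) flow.
# All names here are hypothetical illustrations, not a real vendor API.

from dataclasses import dataclass


@dataclass
class Document:
    url: str   # source to cite in the final answer
    text: str  # passage stored in the search index


def retrieve(query: str, search_index: list[Document], k: int = 3) -> list[Document]:
    """Toy retrieval step: rank indexed documents by naive keyword overlap.

    A real system would query a continuously updated search index (the kind
    fed by Googlebot-style crawlers), not an in-memory list.
    """
    terms = set(query.lower().split())
    scored = sorted(
        search_index,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def generate_answer(query: str, docs: list[Document]) -> str:
    """Toy synthesis step: where a language model would condition on the
    retrieved passages, we simply stitch them together and cite sources.
    """
    context = " ".join(d.text for d in docs)
    sources = ", ".join(d.url for d in docs)
    return f"Q: {query}\nA (based on retrieved context): {context}\nSources: {sources}"


if __name__ == "__main__":
    index = [
        Document("https://example.com/a", "AI models are trained offline on large pre-processed datasets."),
        Document("https://example.com/b", "Search crawlers index the web continuously."),
        Document("https://example.com/c", "RAG retrieves fresh documents before generating an answer."),
    ]
    query = "How do AI models get updated?"
    print(generate_answer(query, retrieve(query, index)))
```

The design point the sketch tries to capture is the one the episode makes: the model itself is not re-trained to answer a fresh query; instead, retrieval supplies up-to-date context from an index, and generation only synthesizes what was retrieved.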