The AI Chip Revolution: Why Hyperscalers Are Ditching NVIDIA for Custom Silicon
Overview
The artificial intelligence sector is undergoing a significant shift in infrastructure strategy as major technology companies move away from sole reliance on NVIDIA. Over the weekend, The Information reported that Google is in talks with Marvell to co-develop two custom AI chips: a memory processing unit designed to work alongside Google's Tensor Processing Units, and a new TPU built specifically for running AI models. This signals a broader pattern emerging among hyperscalers.
Meta and Broadcom announced a major partnership extension on April 14, 2026, representing what analysts describe as a fundamental change in AI technology development and financing. The deal involves over one gigawatt of initial computing capacity, with Meta projecting capital expenditures between 115 and 135 billion dollars on AI infrastructure for 2026 alone. Meta's custom MTIA accelerator chips are now optimized for inference tasks like content ranking and chatbot responses, indicating a strategic pivot toward deploying AI to billions of users as efficiently as possible.
Broadcom is emerging as the leading custom chip partner for hyperscalers. Beyond the Meta agreement extended through 2029, Broadcom also expanded its partnership with Alphabet for developing future Tensor Processing Units. The company additionally secured a deal to supply Anthropic with 3.5 gigawatts of chips starting in 2027, with existing 2026 orders valued at 21 billion dollars. Broadcom targets 100 billion dollars in custom AI chip deliveries by fiscal 2027.
NVIDIA continues to build momentum despite the competition. The company announced partnerships with Adobe and WPP for creative AI agents and showcased physical AI integration in manufacturing at Hannover Messe 2026. NVIDIA's CFO has previously called physical AI a multi-trillion-dollar opportunity, and CEO Jensen Huang recently discussed the possibility of NVIDIA becoming a 3-trillion-dollar-revenue company, citing a major inflection in inference demand driven by agentic AI adoption at scale.
The emerging pattern shows hyperscalers retaining NVIDIA for leading-edge training capacity while turning to Broadcom for custom chips optimized for specific inference workloads at volume. This multi-layered ecosystem approach reflects the maturation of AI infrastructure development, prioritizing cost efficiency and operational optimization over frontier model training alone. Market participants are witnessing a significant inflection in inference demand as enterprises adopt agentic AI systems at scale.
For great deals today, check out https://amzn.to/44ci4hQ
This content was created in partnership with, and with the help of, Artificial Intelligence (AI).