Episodes

  • Is Intel Finally Back with a $300B Market Cap? Can OpenClaw Dream?
    2026/04/10

    In this episode, Austin and Vik discuss whether Intel is finally back, with CPU partnerships with Google and heterogeneous inference with SambaNova, as its market cap soars above $300B. Vik tries to get his OpenClaw instance to dream every night.

    Chapters

    00:00 Anthropic's New Direction: Chip Development
    02:30 Navigating Subscription Changes and Token Costs
    05:25 Exploring Alternative AI Models
    08:10 The Economics of AI: Rent vs. Buy
    10:56 Intel's Resurgence and Market Dynamics
    15:23 Intel's Strategic Partnerships and Market Positioning
    19:37 The Role of IPUs in Modern Computing
    25:08 Coexistence of x86 and ARM Architectures
    29:55 Innovations in Chip Architecture and Future Prospects

    34 min
  • Reiner Pope (MatX): Designing AI Chips From First Principles for LLMs
    2026/04/09

    Reiner Pope is the co-founder and CEO of MatX, the startup building chips designed from first principles for LLMs. Before MatX, Reiner was on the Google Brain team training LLMs, and his co-founder Mike Gunter was on the TPU team. They left Google one week before ChatGPT was released.

    A counterintuitive throughput insight from the conversation:

    “Low latency means small batch sizes. That is just Little’s law. Memory occupancy in HBM is proportional to batch size. So you can actually fit longer contexts than you could if the latency were larger. Low latency is not just a usability win, it improves throughput.”
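
    A quick numeric sanity check of that argument, as a minimal sketch: the HBM budget, per-token KV-cache size, and target throughput below are illustrative assumptions, not MatX figures.

    # Little's law: requests in flight (batch) = throughput * latency.
    # KV-cache memory scales with batch * context length, so at a fixed
    # throughput, cutting latency shrinks the batch and frees HBM for context.
    HBM_BUDGET = 96 * 10**9      # assumed bytes of HBM available for KV cache
    KV_PER_TOKEN = 160 * 10**3   # assumed bytes of KV cache per token
    THROUGHPUT = 50              # assumed requests per second to sustain

    for latency_s in (2.0, 0.5):
        batch = max(1, int(THROUGHPUT * latency_s))          # L = lambda * W
        max_context = HBM_BUDGET // (batch * KV_PER_TOKEN)   # tokens per request
        print(f"latency {latency_s}s -> batch {batch} -> ~{max_context:,} tokens of context fit")

    Under these assumptions, a 4x latency cut shrinks the batch 4x and lets 4x longer contexts fit in the same HBM, which is the throughput win Reiner describes.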

    We get into:

    • The hybrid SRAM + HBM bet, and why pipeline parallelism finally works
    • Overcoming the CUDA moat
    • Why frontier labs are willing to bet on an AI ASIC startup
    • Memory-bandwidth-efficient attention, numerics, and what MatX publishes (and what it does not)
    • Why 95% of model-side news is noise for chip design
    • Why sparse MoE drives MatX to “the most interconnect of any announced product”
    • How MatX uses AI for its own chip design
    • The biggest challenges ahead

    Chapters:

    00:00 “We left Google one week before ChatGPT”
    00:24 Intro: who is MatX
    01:17 Origin story: leaving Google for LLM chips
    02:21 GPT-3 and the “too expensive” problem
    04:25 Why buy hardware that is not a GPU
    05:52 Overcoming the CUDA moat
    08:46 Early investors
    09:35 The name MatX
    09:59 The chip: matrix multiply + hybrid SRAM/HBM
    12:11 Why pipeline parallelism finally works
    14:22 Reading papers and Google going dark
    15:20 Research agenda: attention and numerics
    17:06 Five specs and meeting customers where they are
    19:24 Why frontier labs are the natural first customer
    20:32 Workloads: training, prefill, decode
    22:18 Little’s law and the throughput case for low latency
    24:29 Interconnect and MoE topology
    26:35 Inside the team: 100 people, full stack
    28:32 Agentic AI: 95% noise for hardware
    30:35 KV cache sizing in an agentic world
    32:11 How MatX uses AI for chip design (Verilog + BlueSpec)
    34:23 Go to market: proving credibility under NDA
    35:12 Porting effort for frontier labs
    36:34 Biggest skepticism: manufacturing at gigawatt scale
    37:32 Hiring plug


    Austin Lyons @ Chipstrat: https://www.chipstrat.com

    Vik Sekar @ Vik's Newsletter: https://www.viksnewsletter.com/

    39 min
  • $300M for 70K Viewers | Intel x Elon, OpenAI x TBPN, Citrini's Strait of Hormuz Stunt
    2026/04/07

    Intel Foundry just partnered with Elon Musk’s Terafab. We get into what Terafab even is, why vertically integrated fabs make sense but the economics don’t (yet!), and what Intel is doing here (hint: no idea).

    Then: OpenAI acquires TBPN for an estimated $100-300M. Nobody is quite sure why, but the more interesting question is what a niche audience is worth when five companies control a trillion dollars in AI capex.

    And finally, Citrini Research sent an analyst to the Strait of Hormuz with a Pelican case full of spy gear, $15K cash, and Cuban cigars. The most unhinged research trip in Substack history.

    Austin Lyons — Chipstrat (https://chipstrat.com)
    Vik Sekar — Vik's Newsletter (https://www.viksnewsletter.com)

    Subscribe for weekly episodes on semiconductors, AI, infrastructure, and the business of chips.

    36 min
  • NVIDIA's Marvell Strategy, Is Memory Different This Time?, Intel's Ireland Fab
    2026/04/03

    In this episode, Austin and Vik analyze NVIDIA's $2 billion investment in Marvell NVLink Fusion, exploring its implications for AI infrastructure, interconnect protocols, and the broader chip ecosystem. They also discuss the current memory market surge, DRAM pricing, and Intel's strategic fab buyback.

    On Substack
    Vik: https://www.viksnewsletter.com/
    Austin: https://www.chipstrat.com/

    Chapters

    00:00 NVIDIA's $2 Billion Investment in Marvell
    20:11 The Memory Market Crisis
    20:16 The Future of Memory Pricing and Consumer Impact
    22:55 The Cycle of Supply and Demand in Memory
    27:23 AI's Impact on Memory Demand
    31:46 Long-Term Agreements and Market Stability
    35:07 Intel's Strategic Fab Buyback
    40:44 Monopoly Analogy: Intel's Market Strategy

    42 min
  • ARM AGI CPU has entered the chat, TurboQuant thrashes memory stocks
    2026/03/27

    In this episode, Austin and Vik analyze recent developments in GloFo patent lawsuits, the impact of TurboQuant on AI inference, and ARM's strategic move into silicon for agentic AI workloads.

    Read Vik's substack: https://www.viksnewsletter.com
    Read Austin's substack: https://www.chipstrat.com

    Chapters

    00:00 Patent Wars in Semiconductor Industry
    07:14 Understanding TurboQuant and Its Implications
    24:42 Innovations in Memory Management
    28:00 The Rise of ARM AGI CPUs
    32:56 Agentic AI and CPU Compatibility
    39:54 Performance Metrics in Agentic AI
    44:52 ARM's Market Timing and Challenges

    52 min
  • MicroLEDs Ain’t Dead, Micron Snags Vera Rubin
    2026/03/20

    Austin and Vik break down a packed week in semiconductors, covering GTC, OFC, and Micron earnings. The conversation kicks off with Jensen Huang's bold claim that engineers should spend $250K/year on AI tokens, and whether companies will buy tokens or token generators (i.e., on-prem hardware like the Dell Pro Max with GB300). They dig into the CapEx vs OpEx tradeoffs, data security concerns, and how sharing GPU resources might end up looking a lot like the old EDA license model.
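
    For a rough feel for that rent-vs-buy math, here is a back-of-the-envelope sketch. Only the $250K/year token figure comes from the episode; the box price, lifetime, and operating cost are assumptions for illustration.

    # Amortized on-prem cost per engineer vs. buying tokens outright.
    TOKEN_SPEND = 250_000   # Jensen's figure: $/engineer/year spent on tokens
    BOX_PRICE = 50_000      # assumed price of an on-prem box (GB300-class)
    LIFETIME_YEARS = 3      # assumed depreciation window
    OPEX_PER_YEAR = 5_000   # assumed power + maintenance per box

    for engineers_per_box in (1, 4, 8):
        yearly_box_cost = BOX_PRICE / LIFETIME_YEARS + OPEX_PER_YEAR
        per_engineer = yearly_box_cost / engineers_per_box
        print(f"{engineers_per_box} engineer(s)/box: ~${per_engineer:,.0f}/yr vs ${TOKEN_SPEND:,}/yr in tokens")

    Under these assumptions the hardware wins easily, and sharing a box across engineers (the EDA-license analogy above) only widens the gap.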

    Next up: Micron crushed earnings and appears to be designed into Vera Rubin for HBM4 — despite months of rumors saying otherwise. Austin and Vik unpack the nuance around HBM pin speeds, memory node base dies, and what Micron's massive new fab investments in Taiwan, Singapore, Idaho, and New York mean for the memory cycle.

    The back half of the episode dives into optical interconnects for AI scale-up. A new industry consortium (OCI-MSA) has formed with Meta, Broadcom, NVIDIA, and OpenAI to standardize optical components. Vik explains why traditional indium phosphide lasers might be overkill for short-reach scale-up, and makes the case for micro LEDs — a "slow but wide" approach that could fill the gap between copper and conventional optics. They also touch on Credo's expanding product portfolio (and the infamous purple-to-orange cable saga), plus Lumentum's new VCSEL work for scale-up.
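
    The “slow but wide” idea reduces to aggregate bandwidth = lanes × per-lane rate. A minimal sketch with made-up lane counts and rates (not from any announced product) shows how many slow microLED lanes can match a few fast laser lanes.

    # Aggregate link bandwidth: a few fast lanes vs. many slow lanes.
    def aggregate_gbps(lanes, gbps_per_lane):
        return lanes * gbps_per_lane

    fast_narrow = aggregate_gbps(lanes=8, gbps_per_lane=200)      # assumed InP-laser style link
    slow_wide = aggregate_gbps(lanes=512, gbps_per_lane=3.125)    # assumed microLED array
    print(fast_narrow, slow_wide)  # both 1600 Gb/s: same total, very different lane speeds

    Trading per-lane speed for lane count is what could let microLEDs slot in between copper and conventional optics for short-reach scale-up.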

    Vik - https://www.viksnewsletter.com/
    Austin - https://www.chipstrat.com/

    Chapters
    0:00 Intro & GTC/OFC Conference Overload
    2:09 Jensen's $250K Token Budget Per Engineer
    5:08 On-Prem Inference vs. Cloud Token Spending (Dell Pro Max, CapEx vs OpEx)
    6:44 Sharing GPU Resources Like EDA Licenses
    8:16 Data Security & On-Prem Privacy Concerns
    9:53 Matthew Berman's Fine-Tuned OpenClaw Agent
    10:35 Vik Sets Up OpenClaw on a Home Server
    11:53 Always Be Clauden (ABC) – Managing Agents from Your Phone
    13:34 Micron Earnings & HBM4 in Vera Rubin
    16:39 HBM Pin Speeds & the Micron Design-In Debate
    20:17 Micron's New Fab Investments & Memory Cycle Fears
    23:49 Why AI Drives a Step Change in Memory Demand
    26:30 Optical Compute Interconnect MSA (OCI-MSA)
    29:48 Scale-Up Optics: Do We Need New Technology?
    30:58 Micro LEDs – The "Slow but Wide" Approach
    35:45 Micro LEDs vs. Copper vs. Traditional Optics
    36:55 Credo's Product Spectrum & the Purple Cable Story
    39:31 VCSELs & Lumentum's 1060nm Scale-Up Play

    43 min
  • Quick Takes: Nvidia Keynote at GTC
    2026/03/17

    Vik and Austin unpack the Nvidia GTC keynote with fresh, top-of-mind takes, breaking down the key announcements and sorting what matters from what doesn't. They discuss Groq's LPX, optics + copper for scale-up, new CPU requirements, CPO for networking, what agents mean for software, and much, much more.

    Check out Austin's substack: https://www.chipstrat.com
    Check out Vik's substack: https://www.viksnewsletter.com

    Chapters

    00:00 Introduction and Keynote Context
    03:18 Keynote Highlights and Gaming Innovations
    06:18 Generative AI: The Three Eras
    09:28 Inference: The New Revenue Generator
    12:21 NVIDIA's Tiered Approach to AI Models
    15:30 The Groq Chip and Its Role
    18:35 Vera Rubin System: A Full Data Center
    21:18 CPU Demand and Performance
    24:31 Networking Innovations and Future Directions
    32:32 Innovations in PCB Technology
    34:06 Scaling GPU Systems
    36:57 Understanding the STX Rack and AI Storage
    38:23 The Rosa CPU and Its Significance
    40:07 Digital Twin Platforms and AI Factories
    43:53 NVIDIA's New Software Innovations
    47:09 The Future of Token Budgets in AI
    54:15 Balancing CapEx and OpEx in AI Deployments

    59 min
  • Meta's Inference Accelerator & Applied Optoelectronics (AAOI)
    2026/03/13

    Austin recaps moderating an agentic AI panel at Synopsys Converge, then gives an in-depth technical breakdown of Meta's MTIA custom silicon. Why they're building it, how chiplets let them ship a new chip every 6 months, and how the roadmap is shifting toward gen AI inference. Vik digs into Applied Optoelectronics (AAOI), the vertically integrated Texas laser shop whose stock went from $1.48 to $100+, and whether history is about to rhyme.

    Austin Lyons: https://www.chipstrat.com
    Vik Sekar: https://www.viksnewsletter.com/

    Topics covered:
    • Agentic AI in chip design — how it changes roles for junior and senior engineers
    • Optical circuit switching and what it means for Arista's business model
    • Meta's ad-serving pipeline: Andromeda, Lattice, and the GEM foundation model
    • Why custom silicon (MTIA) makes sense at Meta's scale
    • MTIA chiplet strategy — 4 generations in 2 years
    • AAOI's vertical integration, Amazon's $4B warrant deal, and the 2017 parallel

    Chapters:
    0:00 Intro
    1:26 Synopsys Converge — Agentic AI Panel
    9:44 Vik's Article: Optical Circuit Switching & Arista
    14:43 Meta MTIA — A New Chip Every 6 Months
    21:32 Why Custom Silicon Makes Sense for Meta
    27:22 MTIA Chiplet Strategy & Roadmap
    33:56 Gen AI Fits Meta's Business Model
    36:31 How Meta Ships Chips So Fast
    40:30 Applied Optoelectronics (AAOI) Deep Dive
    45:02 Amazon's $4B Warrant Deal
    48:54 Can AAOI's Lasers Compete with Lumentum?
    53:16 AAOI's Aggressive Capacity Buildout
    55:35 History Rhymes: AAOI's 2017 Boom & Bust
    1:00:55 Wrap-Up

    #semiconductors #chips #tech #meta #MTIA #AAOI #optics #inference #AI

    1 hr 2 min