Episodes

  • AI Infrastructure - Owned vs Cloud - The AI Journey of Trivium
    2026/02/05

    Many organizations wait for the right hardware, the right budget, or the right moment to begin investing in artificial intelligence.

    Trivium Packaging did not.

    They launched their first AI chatbot on a single server, without a GPU. Each response took approximately ten minutes. The system was slow, inelegant, and limited—but it functioned.

    Ten months later, Trivium operates a full Kubernetes cluster and has established an AI Center of Excellence. The procurement team now uses the system daily to translate contracts, while other departments are actively requesting access. The organization is already reaching the limits of its current infrastructure—a challenge that reflects success rather than failure.

    Key insight from Sebastiaan van Duijn: budgets are not approved by presentations. They are approved by working demonstrations—even imperfect ones.

    In this 32-minute discussion recorded at Cisco Studio Amsterdam, Sebastiaan van Duijn (Trivium Packaging) and Boris Vermaas (Cisco) explain how Trivium built its AI capabilities from the ground up.

    Topics discussed include:

    • The rationale for choosing an on-premises approach over cloud from the outset, and how this simplified security discussions.
    • The development of an internal “AI App Store” that ensures users only access applications appropriate to their roles.
    • The procurement team’s first reaction to the chatbot (“Is it a he or a she?”).
    • Why achieving 60–70% accuracy quickly is often more valuable than waiting indefinitely for a perfect solution.
    32 min
  • Episode 1: Simplicity
    2026/02/04

    Initiating AI workloads in the cloud is straightforward. GPUs can be provisioned quickly, experiments launched immediately, and early results demonstrated to leadership—without capital expenditure or procurement delays.

    The challenge emerges at scale.

    As systems move into production, costs escalate. Finance questions why cloud spend doubled last quarter. Security teams seek clarity on where sensitive training data resides. Machine learning engineers face compute bottlenecks despite significant allocated capacity.

    When failures occur, accountability becomes fragmented. With multiple vendors involved, resolution is slow and responsibility diffuse.

    What once took hours to deploy can take weeks to stabilize.

    In this 37-minute discussion recorded at Cisco Studio Amsterdam, Raymond Drielinger (MDCS.AI) and Jara Osterfeld (Cisco) examine what happens when AI workloads outgrow the cloud sandbox and enter enterprise reality.

    Key topics include:

    • Why GPUs remain underutilized in shared cloud environments while costs continue to accrue.
    • How “noisy neighbor” effects degrade model performance—and why identical workloads often run faster on-premises.
    • The difference between assembling hundreds of disconnected components and deploying an integrated, high-performance system engineered for immediate results.
    • How a single point of accountability replaces multi-vendor finger-pointing.

    A practical perspective on what it truly takes to scale AI beyond experimentation.

    38 min