Episodes

  • The IT Dictionary: Post-Mortems, Cargo Cults, and Dropped Databases
    2025/10/02
    Episode Sponsor: Attribute - https://dev0ps.fyi/attribute

    We're joined by 20-year industry veteran and DevOps advocate Adam Korga, celebrating the release of his book IT Dictionary. In this episode we quickly get to the inspiration behind post-mortems as we review some cornerstone cases, both in software and in general technology.

    Adam shares how he started in the industry, long before the term DevOps was coined, focused on making systems safer and avoiding mistakes like accidentally dropping a production database. We review the infamous incidents of accidental database deletion, by LLMs and humans alike.

    And of course we touch on the quintessential post-mortems in civil engineering and flight, as well as the World War II survivorship-bias lesson drawn from analyzing bullet holes on returning planes.

    Notable Facts
    • Adam's book: IT Dictionary
    • Knight Capital: the 45 minute nightmare
    • Work Chronicles Comic: Will my architecture work for 1 Million users?
    Picks:
    • Warren - Cuitisan CANDL storage containers
    • Adam - FUBAR
    30 min
  • Vector Databases Explained: From E-commerce Search to Molecule Research
    2025/09/24
    Jenna Pederson, Staff Developer Relations at Pinecone, joins us to close the loop on vector databases. She demystifies how they power semantic search, their role in RAG, and some unexpected applications.

    Jenna takes us beyond the buzzword bingo, explaining how vector databases are the secret sauce behind semantic search, and shares just how "red shirt" gets converted into a query that returns semantically similar items. It's all about turning your data into high-dimensional numerical meaning, which, as Jenna clarifies, is powered by some seriously clever math to find those "closest neighbors."
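The "closest neighbors" idea from the discussion can be shown in a few lines. This is only a toy sketch: the three-dimensional vectors and catalog items below are made up for illustration, and real embedding models produce hundreds or thousands of dimensions, while a vector database replaces the brute-force sort with an approximate index.

```python
import math

def cosine_similarity(a, b):
    # Angle-based similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings for a tiny product catalog.
catalog = {
    "red shirt":   [0.9, 0.8, 0.1],
    "crimson tee": [0.85, 0.75, 0.15],
    "blue jeans":  [0.1, 0.2, 0.9],
}

def nearest(query_vec, k=2):
    # Brute-force nearest-neighbor search; a vector DB does this at scale.
    scored = sorted(catalog.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# A query embedded near "red shirt" also surfaces the semantically
# similar "crimson tee", even though the strings share no words.
nearest([0.9, 0.8, 0.1])
```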

    The conversation inevitably veers into Retrieval-Augmented Generation (RAG). Jenna reveals how vector databases are the unsung heroes giving LLMs real brains (and up-to-date info) when they’re prone to hallucinating or just don’t know your company’s secrets. They complete the connection from proprietary and generalist foundation models to business-relevant answers.
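The RAG pattern discussed here boils down to "retrieve, then prompt." The sketch below is a deliberately simplified illustration: the two documents are invented, and retrieval is keyword overlap rather than the vector search a real system would use; the point is only the shape of the pipeline, where retrieved context is prepended so the model answers from your data instead of guessing.

```python
# Hypothetical private documents an LLM would not know about.
documents = [
    "Our refund policy allows returns within 30 days.",
    "The on-call rotation changes every Monday.",
]

def retrieve(question, k=1):
    # Toy retrieval: rank documents by word overlap with the question.
    def overlap(doc):
        return len(set(question.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:k]

def build_prompt(question):
    # Ground the model: retrieved context goes in front of the question.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is the refund policy")
```

The resulting `prompt` string is what would be sent to the LLM.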

    Notable Facts
    • Episode: MCP: The Model Context Protocol and Agent Interactions
    • Crossing the Chasm
    Picks:
    • Warren - HanCenDa USB C Magnetic adapter
    • Jenna - Keychron Alice Layout Mechanical keyboard
    55 min
  • The Unspoken Challenges of Deploying to Customer Clouds
    2025/09/17
    In this episode we are joined by Andrew Moreland, co-founder of Chalk. Andrew explains how their company’s core business model is to deploy their software directly into their customers’ cloud environments. This decision was driven by the need to handle highly sensitive data, like PII and financial records, that customers don't want to hand over to a third-party startup.

    The conversation delves into the surprising and complex challenges of this approach, which include managing granular IAM permissions and dealing with hidden global policies that can block their application. Andrew and Warren also discuss the real-world network congestion issues that affect cross-cloud traffic, a problem they've encountered multiple times. Andrew shares Chalk's mature philosophy on software releases, where they prioritize backwards compatibility to prevent customer churn, which is a key learning from a competitor.

    Finally, the episode explores the advanced technical solutions Chalk has built, such as their unique approach to "bitemporal modeling" to prevent training bias in machine learning datasets, as well as the decision to move from Python to C++ and Rust for performance, using a symbolic interpreter to execute customer code written in Python without a Python runtime. The episode concludes with picks, including a surprisingly popular hobby and a unique take on high-quality chocolate.
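The bitemporal idea mentioned above can be sketched briefly. This is not Chalk's implementation, just an illustration of the general technique with invented data: each fact carries both when it was true in the world (valid time) and when the system learned it (recorded time), so a training set built "as of" a past date excludes corrections that arrived later, preventing leakage of future knowledge into the model.

```python
from datetime import date

# Invented example records: the Jan 1 balance was later corrected.
facts = [
    {"entity": "acct-1", "balance": 100,
     "valid": date(2024, 1, 1), "recorded": date(2024, 1, 1)},
    {"entity": "acct-1", "balance": 90,   # correction, learned Feb 15
     "valid": date(2024, 1, 1), "recorded": date(2024, 2, 15)},
]

def as_known_on(knowledge_date):
    """Latest-recorded fact per (entity, valid-time) pair, ignoring
    anything the system had not yet recorded on knowledge_date."""
    visible = [f for f in facts if f["recorded"] <= knowledge_date]
    latest = {}
    for f in sorted(visible, key=lambda f: f["recorded"]):
        latest[(f["entity"], f["valid"])] = f
    return list(latest.values())

# A model trained with a Jan 31 cutoff sees the original balance (100),
# matching what was actually known at prediction time back then.
jan_view = as_known_on(date(2024, 1, 31))
```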

    Notable Facts
    • Fact - The $1M hidden Kubernetes spend
    • Giraffe and Medical Ruler training data bias
    • SOLID principles don't produce better code?
    • Veritasium - The Hole at the Bottom of Math
    • Episode: Auth Showdown on backwards compatible changes
    Picks:
    • Warren - Switzerland Grocery Store Chocolate
    • Andrew - Trek E-Bikes
    53 min
  • How to build in Observability at Petabyte Scale
    2025/09/07
    We welcome guest Ang Li and dive into the immense challenge of observability at scale, where some customers are generating petabytes of data per day. Ang explains that instead of building a database from scratch—a decision he says went "against all the instincts" of a founding engineer—Observe chose to build its platform on top of Snowflake, leveraging its separation of compute and storage on EC2 and S3.

    The discussion delves into the technical stack and architectural decisions, including the use of Kafka to absorb large bursts of incoming customer data and smooth it out for Snowflake's batch-based engine. Ang notes this choice was also strategic for avoiding tight coupling with a single cloud provider like AWS Kinesis, which would hinder future multi-cloud deployments on GCP or Azure. The discussion also covers their unique pricing model, which avoids surprising customers with high bills by providing a lower cost for data ingestion and then using a usage-based model for queries. This is contrasted with Warren's experience with his company's user-based pricing, which can lead to negative customer experiences when limits are exceeded.
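The buffering role Kafka plays here can be sketched without the Kafka API at all: bursty producer writes land in a queue, and a downstream consumer drains fixed-size batches at its own pace, so a batch-oriented engine like Snowflake sees a bounded, steady load. The class and sizes below are illustrative only.

```python
from collections import deque

class BurstBuffer:
    """Toy stand-in for a durable log: absorbs bursts, emits bounded batches."""

    def __init__(self, batch_size):
        self.queue = deque()
        self.batch_size = batch_size

    def produce(self, records):
        # Bursty ingest: everything is accepted immediately.
        self.queue.extend(records)

    def next_batch(self):
        # Steady drain: never hand the engine more than batch_size at once.
        batch = []
        while self.queue and len(batch) < self.batch_size:
            batch.append(self.queue.popleft())
        return batch

buf = BurstBuffer(batch_size=100)
buf.produce(range(250))            # a 250-record burst arrives at once
sizes = []
while (b := buf.next_batch()):
    sizes.append(len(b))
# The burst is smoothed into batches of 100, 100, and 50.
```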

    The episode also explores Observe’s "love-hate relationship" with Snowflake, as Observe's usage accounts for over 2% of Snowflake's compute, which has helped them discover a lot of bugs but also caused sleepless nights for Snowflake's on-call engineers. Ang discusses hedging their bets for the future by leveraging open data formats like Iceberg, which can be stored directly in customer S3 buckets to enable true data ownership and portability. The episode concludes with a deep dive into the security challenges of providing multi-account access to customer data using IAM trust policies, and a look at the personal picks from the hosts.
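The multi-account access pattern discussed above generally hinges on a role trust policy in the customer's account. The sketch below builds such a policy as a plain dict; the account ID and external ID are placeholders, and this is a generic AWS cross-account pattern rather than Observe's actual configuration. The `sts:ExternalId` condition is the standard guard against the confused-deputy problem when a vendor assumes roles in many customer accounts.

```python
import json

# Hypothetical vendor account ID and customer-chosen external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
        # Without this condition, any customer of the vendor could
        # potentially trick it into assuming another customer's role.
        "Condition": {"StringEquals": {"sts:ExternalId": "customer-chosen-id"}},
    }],
}

policy_json = json.dumps(trust_policy, indent=2)
```

The customer attaches this trust policy to a role granting read access to the relevant S3 buckets; the vendor then calls `sts:AssumeRole` with the agreed external ID.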

    Notable Links
    • Fact - Passkeys: Phishing on Google's own domain and It isn't even new
    • Episode: All About OTEL
    • Episode: Self Healing Systems
    Picks:
    • Warren - The Shadow (1994 film)
    • Ang - Xreal Pro AR Glasses
    46 min
  • The Open-Source Product Leader Challenge: Navigating Community, Code, and Collaboration Chaos
    2025/08/24
    In a special solo flight, Warren welcomes Meagan Cojocar, General Manager at Pulumi and a self-proclaimed graduate of “PM school” at AWS. They dive into what it’s like to own an entire product line and why giving up that startup hustle for the big leagues sometimes means you miss the direct signal from your users. The conversation goes deep on the paradox of open source, where direct feedback is gold but dealing with license-shifting competitors can make you wary. From the notorious HashiCorp kerfuffle to the rise of OpenTofu, they explore how Pulumi maintains its commitment to the community amidst a wave of customer distrust.

    Meagan highlights the invaluable feedback loop provided by the community, allowing for direct interaction between users and the engineering team. This contrasts with the "telephone game" that can happen in proprietary product development. The conversation also addresses the industry's recent shift away from open-source licenses and the immediate back-pedaling that followed, discussing the subsequent customer distrust and how Pulumi maintains its commitment to the open-source model.

    And finally, the duo tackles the elephant in the cloud: LLMs, extending on the earlier MCP episode. They debate the great code-quality-versus-speed trade-off, the risk of a "botched" infrastructure deployment, and whether these models amount to anything more than a glorified statistical guessing game. It's a candid look at the future of DevOps, where the real chaos isn't the code but the tools that write it. The conversation concludes with a philosophical debate on the fundamental capabilities of LLMs, questioning whether they can truly solve "hard problems" or are merely powerful statistical next-word predictors.

    Notable Links
    • Veritasium - the Math that predicts everything
    • Fact - Don't outsource your customer support: Clorox sues Cognizant
    • CloudFlare uses an LLM to generate an OAuth2 Library
    Picks:
    • Warren - Rands Leadership Community
    • Meagan - The Manager's Path by Camille Fournier
    59 min
  • FinOps: Holding engineering teams accountable for spend
    2025/07/31
    In this episode of Adventures in DevOps, we dive into the world of FinOps, a concept that aims to apply the DevOps mindset to financial accountability. Yasmin Rajabi, Chief Strategy Officer at CloudBolt, joins us to demystify it, as we acknowledge the critical challenge of bringing financial accountability to engineering teams who often aren't paying attention to the business.

    The discussion further explores the practicalities of FinOps in the context of cloud spending and Kubernetes. Yasmin highlights that a significant amount of waste in organizations comes from simply not turning off unused systems and not right-sizing resources. She explains how tools like the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA) can help, but also points out the complexities of optimizing across horizontal and vertical scaling behaviors. The conversation touches on "shame back reporting" as a way to give engineering teams visibility into costs, though it emphasizes that providing tooling and insights is more effective than simply telling developers to change configurations.
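For context on the HPA mentioned above, Kubernetes documents its core scaling rule as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that formula, with made-up pod counts and CPU percentages:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric):
    # The documented HPA rule: scale proportionally to how far the
    # observed metric sits from its target, rounding up.
    return math.ceil(current_replicas * (current_metric / target_metric))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
# 10 pods averaging 30% against the same target -> scale in to 5.
desired_replicas(4, 90, 60)
desired_replicas(10, 30, 60)
```

Right-sizing with the VPA works on the other axis, adjusting each pod's requests rather than the pod count, which is why combining the two naively can produce the conflicting behaviors Yasmin describes.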

    The episode also delves into the evolving mindset around cloud costs, especially with the rise of AI and machine learning workloads. While historically engineering salaries eclipsed cloud spending, the increasing hardware requirements for ML and data workloads are making cost optimization a more pressing concern. Spending-conscious teams are increasingly asking about GPU optimization, even if AI/ML teams are still largely focused on limitless spending to drive unjustified "innovation". They conclude by discussing the challenges of on-premise versus cloud deployments and the importance of addressing "day two problems" regardless of the infrastructure choice.

    Picks
    • Warren - Lions and Dolphins cannot make babies
    • Aimee - The Equip Protein Powder and Protein Bar
    • Yasmin - Bone Broth drink by 1990 Snacks
    55 min
  • The Auth Showdown: Single tenant versus Multitenant Architectures
    2025/07/17
    Get ready for a lively debate on this episode of Adventures in DevOps. We're joined by Brian Pontarelli, founder of FusionAuth and CleanSpeak. Warren and Brian face off by diving into the controversial topic of multitenant versus single-tenant architecture. Expert co-host Aimee Knight joins to moderate the discussion. Ever wondered how someone becomes an "auth expert"? Warren spills the beans on his journey, explaining it's less about a direct path and more about figuring out what it means for yourself. Brian chimes in with his own "random chance" story, revealing how he fell into auth after his forum-based product didn't pan out.

    Aimee confesses her "alarm bells" start ringing whenever multitenant architecture is mentioned, jokingly demanding "details" and admitting her preference for more separation when it comes to reliability. Brian makes a compelling case for his company's chosen path, explaining how their high-performance, downloadable single-tenant profanity filter, CleanSpeak, handles billions of chat messages a month with extreme low latency. This architectural choice became a competitive advantage, attracting companies that couldn't use cloud-based multitenant competitors due to their need to run solutions in their own data centers.

    We critique cloud providers' tendency to push users towards their most profitable services, citing AWS Cognito as an example of a cost-effective solution for small-scale use that becomes cost-prohibitive with scaling and feature enablement. The challenges of integrating with Cognito, including its reliance on numerous other AWS services and the need for custom Lambda functions for configuration, are also a point of contention. The conversation extends to the frustrations of managing upgrades and breaking changes in both multitenant and single-tenant systems and the inherent difficulties of ensuring compatibility across different software versions and integrations. The episode concludes with a humorous take on the current state and perceived limitations of AI in software development, particularly concerning security.

    Picks
    • Warren - Scarpa Hiking shoes - Planet Mojito Suede
    • Aimee - Peloton Tread
    • Brian - Searchcraft and Fight or Flight
    53 min
  • Should We Be Using Kubernetes: Did the Best Product Win?
    2025/06/24
    Episode Sponsor: PagerDuty - Check out the features in their official feature release: https://fnf.dev/4dYQ7gL

    This episode dives into a fundamental question facing the DevOps world: Did Kubernetes truly win the infrastructure race because it was the best technology, or were there other, perhaps less obvious, factors at play? Omer Hamerman joins Will and Warren to take a hard look at it. Despite the rise of serverless solutions promising to abstract away infrastructure management, Omer shares that Kubernetes has seen a surge in adoption, with potentially 70-75% of corporations now using or migrating to it. We explore the theory that human nature's preference for incremental "step changes" (Kaizen) over disruptive "giant leaps" (Kaikaku) might explain why a solution perceived by some as "worse" or more complex has gained such widespread traction.

    The discussion unpacks the undeniable strengths of Kubernetes, including its "thriving community", its remarkable extensibility through APIs, and how it inadvertently created "job security" for engineers who "nerd out" on its intricacies. We also challenge the narrative by examining why serverless options like AWS Fargate could often be a more efficient and less burdensome choice for many organizations, especially those not requiring deep control or specialized hardware like GPUs. The conversation highlights that the perceived "need" for Kubernetes often emerges from something other than technical superiority.

    Finally, we consider the disruptive influence of AI and "vibe coding" on this landscape, how could we not? As LLMs are adopted to "accelerate development", they tend to favor serverless deployment models, implicitly suggesting that for rapid product creation, Kubernetes might not be the optimal fit. This shift raises crucial questions about the trade-offs between development speed and code quality, the evolving role of software engineers towards code review, and the long-term maintainability of AI-generated code. We close by pondering the broader societal and environmental implications of these technological shifts, including AI's massive energy consumption and the ongoing debate about centralizing versus decentralizing infrastructure for efficiency.

    Links:
    • Comparison: Linux versus E. coli
    Picks
    • Warren - Surveys are great, and also fill in the Podcast Survey
    • Will - Katana.network
    • Omer - Mobland and JJ (Jujutsu)
    1 hr 7 min