Episodes

  • Canva and the Thundering Herd
    2025/05/14

    Greetings fellow incident nerds, and welcome to Season 2 of The VOID podcast. The main new thing for this new season is we’re now available in video—so if you’re listening to this and prefer watching me make odd faces and nod a lot, you can find us here on YouTube.

    The other new thing is we now have sponsors! These folks help make this podcast possible, but they don’t have any say over who joins us or what we talk about, so fear not.

    This episode’s sponsor is Uptime Labs. Uptime Labs is a pioneering platform specializing in immersive incident response training. Their solution helps technical teams build confidence and expertise through realistic simulations that mirror real-world outages and security incidents. While most investment in the incident space these days goes to technology and process, Uptime Labs focuses on sharpening the human element of incident response.

    In this episode, we talk to Simon Newton, Head of Platforms at Canva, about their first public incident report. It’s not their first incident by any means, but it’s the first time they chose as a company to invest in sharing the details of an incident with the rest of us, which of course we’re big fans of here at the VOID.

    We discuss:

    • What led to Canva finally deciding to publish a public incident report
    • What the size and nature of their incident response looks like (this incident involved around 20 different people!)
    • Their progression from a handful of engineers handling incidents to having a dedicated Incident Command (IC) role
    • Avoiding blame when a known performance fix was ready but hadn't yet been deployed, which contributed to the incident worsening as it progressed
    • The various ways the people involved in the incident collaborated and improvised to resolve it


    37 min
  • Episode 8: A Tale of A Near Miss
    2025/02/28

    On this episode of the VOID podcast, I’m joined by Nick Travaglini, a Technical Customer Success Manager at Honeycomb. Nick wrote up a near miss that his team tackled towards the end of 2023, and I’ve wanted to discuss a near-miss incident report for a very long time. What’s a near miss, you might ask? How is that an incident, and is it even one? What IS an incident? Keep listening, because we’re going to get into those questions, along with discussing whether or not it’s a good idea to say nasty things about other companies in your incident reports.

    Related Resources

    • Preempting Problems in a Sociotechnical System (the incident report)
    • Work as Imagined vs Work as Done
    • Resilience in Software Foundation
    • On the Mode of Existence of Technical Objects
    • Hitting the Brakes
    • 2024 VOID Report

    36 min
  • Episode 7: When Uptime Met Downtime
    2025/01/30

    We took a bit of a hiatus from recording last year, but we're back with an episode that I think everyone is really going to enjoy. Late last year, John Allspaw told me about this new company called Uptime Labs. They simulate software incidents, giving people a safe and constructive environment in which to experience incidents, practice what response is like, and bring what they learn back to their own organizations.

    For the record, this is not a sponsored podcast. I legitimately just love what they do. And I had the sincere privilege to meet Uptime's cofounder and CEO, Hamed Silatani, at SRECon EMEA in November, where he gave a fantastic talk about some of the things they've learned about incident response from running hundreds of simulations for their customers.

    They recently had their first serious outage of their own platform. And so Hamed is joined by Joe McEvitt, cofounder and director of engineering at Uptime to discuss with me the one time that Uptime met downtime.

    52 min
  • Episode 6: Laura Nolan and Control Pain
    2023/04/25

    In the second episode of the VOID podcast, Courtney Wang, an SRE at Reddit, said that he was inspired to start writing more in-depth narrative incident reports after reading the write-up of the Slack January 4th, 2021 outage. That incident report, along with many other excellent ones, was penned by Laura Nolan and I've been trying to get her on this podcast since I started it.

    So, this is a very exciting episode for me. And for you all, it's going to be a bit different because instead of just discussing a single incident that Laura has written about, we get to lean on and learn from her accumulated knowledge doing this for quite a few organizations. And she's come with opinions.

    A fun fact about this episode, I was going to title it "Laura Nolan and Control Plane Incidents," but the automated transcription service that I use, which is typically pretty spot on (thanks, Descript!), kept changing "plane" to "pain" and well, you're about to find out just how ironic that actually is...

    We discussed:

    • A set of incidents she's been involved with that featured some form of control plane or automation as a contributing factor to the incident
    • What we can learn from fields of study like Resilience Engineering, such as the notion of Joint Cognitive Systems
    • Other notable incidents that have similar factors
    • Ways that we can better factor in human-computer collaboration in tooling to help make our lives easier when it comes to handling incidents

    References:
    Slack's Outage on Jan 4th 2021
    A Terrible, Horrible, No-Good, Very Bad Day at Slack
    Google's "satpocalypse"
    Meta (Facebook) outage
    Reddit Pi-day outage
    Ironies of Automation (Lisanne Bainbridge)


    28 min
  • Episode 5: Incident.io and The First Big Incident
    2023/02/14

    What happens when you use your own incident management software to manage your own incidents but said incident takes out your own incident management product? Tune in to find out...

    We chat with engineer Lawrence Jones about:

    • How their product is designed, and how that both contributed to, but also helped them quickly resolve, the incident
    • The role that organizational scaling (hiring lots of folks quickly) can play in making incident response challenging
    • What happens when reality doesn't line up with your assumptions about how your system(s) works
    • The importance of taking a step back and making sure the team is taking care of each other when you can get a break in the urgency of an incident
    32 min
  • Episode 4: Emily Ruppe and The Inaugural LFI Conference
    2023/01/12

    In this episode we take a delightful detour from our usual VOID programming to have Emily Ruppe, a Solutions Engineer at Jeli.io and member of the Learning From Incidents (LFI) community, on the program to discuss the upcoming LFI Conference happening in Denver in February. Find out more about the goals and some of the featured speakers for the event, and we hope to see you there!

    Discussed in this episode:
    Jeli.io
    Learning From Incidents
    The LFI Conference (Feb 15-16, 2023 in Denver, CO)

    12 min
  • Episode 3: Spotify and A Year of Incidents
    2022/10/20

    If you or anyone you know has listened to Spotify, you're likely familiar with their year-end Wrapped tradition. You get a viral, shareable little summary of your favorite songs, albums, and artists from the year. In this episode, I chat with Clint Byrum, an engineer whose team helps keep Spotify for Artists running, which in turn keeps, well, Spotify running.

    Each year, the team looks back at the incidents they've had in their own form of Wrapped. They tested hypotheses with incident data that they've collected, found some interesting results and patterns, and helped push their team and larger organization to better understand what they can learn from incidents and how they can make their systems better support artists on their platform.

    We discussed:

    • Metrics, both good and bad
    • Moving away from MTTR after they found it to be unreliable
    • How incident analysis is akin to archeology
    • Getting managers/executives interested in incident reviews
    • The value of studying near misses along with actual incidents
    32 min
  • Episode 2: Reddit and the Gamestop Shenanigans
    2021/12/01


    At the end of January, 2021, a group of Reddit users organized what's called a "short squeeze." They intended to wreak havoc on hedge funds that were shorting the stock of a struggling brick and mortar game retailer called GameStop. They were coordinating to buy more stock in the company and drive its price further up.

    In large part, they were successful—at least for a little while. One hedge fund lost somewhere around $2 billion and one Reddit user purportedly made off with around $13 million. Things managed to get even weirder from there, when online trading company Robinhood restricted trading for GameStop shares, and the stock lost three-fourths of its value in just over an hour. But that's less relevant to this episode.

    What matters is that while all this was happening, traffic to one very specific page on Reddit, a subreddit called r/wallstreetbets, went to the moon. Long after the dust had settled, and the team had a chance to recover and reflect, some of the engineers wrote up an anthology of reports based on the numerous incidents they had that week. We talk to Courtney Wang, Garrett Hoffman, and Fran Garcia about those incidents, and their write-ups, in this episode.

    A few of the things we discussed include:

    • The precarious dynamic where business successes (traffic surges based on cultural whims) are hard to predict, and can hit their systems in wild and surprising ways.
    • How incidents like these have multiple contributing factors, not all of which are purely technical
    • How much they learned about their company's processes, assumptions, organizational boundaries, and other "non-technical" factors
    • How people are the source of resilience in these complex sociotechnical systems
    • Creating psychologically safe environments for people who respond to incidents
    • Their motivation for investing so much time and energy into analyzing, writing, and publishing these incident reviews
    • What studying near misses illuminated for them about how their systems work


    Resources mentioned in this episode include:

    • Reddit's r/wallstreetbets incident anthology, which links to all the reports we discuss.
    • "Work as imagined and work as done" by Steven Shorrock (video)



    44 min