Who owns this outage? Building intelligent escalation chains for modern SRE
If your organization runs code on a production server 24/7, you need a process for handling failures, whether in the code itself or in the infrastructure it runs on. No code is bug-free, so failures will happen. That means your SREs and developers are going to spend some time on call, ready to respond when the application breaks down.
On this sponsored episode of the podcast, we talk to Eric Maxwell, a solutions architect at xMatters. He took a winding road to get where he is: after a computer engineering education, he worked as a field support engineer, product manager, and SRE before landing in his current role, where he serves as something of an SRE for SREs, helping them solve incident management problems with the help of xMatters.
When he moved into the SRE role, Maxwell wanted to get back to doing technical work. It was a lateral move within his company, which was migrating an on-prem solution to the cloud. It's a journey plenty of companies are making now: breaking an application into microservices, running processes in containers, and orchestrating the whole thing with Kubernetes. But the migration came with friction: non-production environments would go down and waste SRE time, making it harder to address problems in the production pipeline.
At the heart of their issues was the incident response process, which had several bottlenecks that kept them from delivering value to customers quickly. Incidents would send emails to the relevant engineers, sometimes 20 on a single email, which made it easy for any one engineer to ignore the problem, assuming someone else had it covered. They had a bad silo problem: escalating to the right person across groups became an issue of its own. And of course, most of this was manual. Their mean time to resolve (MTTR) was lagging.
Maxwell moved over to xMatters because they had managed to solve these problems through clever automation. Their product automates the scheduling and notification process so that the right person knows about an incident as soon as possible. At the core of this process is a different MTTR: mean time to respond. Once an engineer starts working to resolve a problem, it's all down to runbooks and skill; the lag between the initial incident and that start was the real slowdown.
It's not just the response from the first SRE on call. The escalations further down the line, to data engineers, for example, can eat away time too. xMatters has worked hard to make escalation configuration easy: it captures not only who's responsible for specific services and metrics, but also who's next in the escalation chain. When an incident hits, notifications go out through a series of configured channels; maybe it tries a chat program first, then email, then SMS.
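To make the idea concrete, here is a minimal sketch of how an escalation chain like that might be modeled. This is not xMatters' actual implementation; the `Responder`, `notify`, and `escalate` names are hypothetical, and the delivery logic is simulated rather than wired to real chat, email, or SMS integrations.

```python
from dataclasses import dataclass


@dataclass
class Responder:
    name: str
    channels: list  # ordered channel preference, e.g. ["chat", "email", "sms"]


def notify(responder, channel, incident):
    """Stand-in for a real delivery integration; records the attempt.

    In this simulation, only SMS reliably reaches anyone, so chat and
    email attempts go unacknowledged.
    """
    incident["attempts"].append((responder.name, channel))
    return channel == "sms"


def escalate(chain, incident):
    """Walk the escalation chain, trying each responder's channels in order.

    Returns the name of the responder who acknowledged the incident,
    or None if the whole chain is exhausted.
    """
    for responder in chain:
        for channel in responder.channels:
            if notify(responder, channel, incident):
                return responder.name
    return None


incident = {"id": "INC-1", "attempts": []}
chain = [
    Responder("primary-sre", ["chat", "email", "sms"]),
    Responder("data-engineer", ["chat", "sms"]),
]
print(escalate(chain, incident))  # primary-sre acknowledges via SMS
print(incident["attempts"])
```

The point of automating this walk is exactly the "mean time to respond" lag Maxwell describes: the system keeps trying channels and people until someone acknowledges, instead of an email to 20 engineers sitting unread.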
The on-call process is often a source of dread, but automating the escalation process can take some of the sting out of it. Check out the episode to learn more.