Achieving Reliable Observability Part 1 – Making Cloud-Native Observability More Robust

I was having a conversation with a CxO-level customer as part of an AIOps/observability workshop, and from what I could tell, most executives are still confused about how to properly operationalize cloud-native production environments, especially the monitoring/observability portion. Here is how the conversation went.

“Andy, based on your recent research, we are thinking about bringing in [vendor] as our observability solution. What do you think?”

“Well, I don’t want to endorse any specific vendor, as they are all good at what they do. But let’s talk about what you want to do, and what they can do for you, so you can figure out whether or not they are the right fit for you.” The conversation continued for a while, but the last piece is worthy of being called out specifically.

“So, we will be running our production microservices in AWS in the ____ region. And we are planning to use this particular observability provider to monitor our Kubernetes clusters.”

“Couple of items to discuss. First, you realize that this particular provider you are speaking of also runs in the same region of the same cloud provider as yours, right?”

“We didn’t know that. Is that going to be a problem?”

“Not particularly. However, you may get into a ‘circular dependency’ situation.”

“What is that?”

“Well, as an enterprise architect, I always call for separation of duties as a best practice. For example, having your developers test their own code is a bad idea; having your developers figure out how to deploy is a bad idea. The same applies when your production services run in the same region as your monitoring software: how would you know about a production outage if the cloud region takes a hit and your observability solution goes down at the same time your production services do?”

“Should we dump them and go get this other solution instead?”

“No, I am not saying that. Figure out what you are trying to achieve and have a plan for it. Selection of an observability tool should fit your overall strategy.”

For those who don’t see the issue in the conversation above, here is why this scenario could be a problem.

Coming from an enterprise architecture background, we were taught, as a best practice, to operationalize production systems in a way that avoids circular dependencies. This includes not having two services depend on each other and not colocating monitoring, governance and compliance systems with the production systems themselves. If you were to monitor your production system, you would do it from a separate, isolated sub-system (server, data center rack, sub-net, etc.) to make sure that if your production system goes down, the monitoring system doesn’t go down, too. The same goes for public cloud regions: although it’s unlikely, individual regions and services do experience outages. If your production infrastructure runs on the same services in the same region as your SaaS monitoring provider, not only will you not be aware that your production systems are down, but you also won’t have the data to analyze what went wrong. The whole idea behind a good observability system is to quickly know when things go bad, what went wrong, where the problem is and why it happened, so you can fix it quickly. You can check out this blog, where I explain this in detail.

The best practice would be to do one or more of the following:

  1. Consider an observability solution that runs in a different region than your production workloads. Better yet, consider one that runs on a different cloud provider altogether: although it is exceptionally rare, there have been cloud service outages that crossed regions, while the chances of two different cloud providers going down at the same time are slim. If you keep everything together and the cloud region goes down (a region-wide cloud outage is quite possible, and seems to be happening more frequently of late), your observability systems will be down as well. You wouldn’t even know your production servers were down in order to switch to your backup systems, unless you have a “hot” production backup. Not only will your customers find out about your outage before you do, but you won’t even be able to initiate your playbook, because you won’t be aware that your production servers are down.
  2. Consider having your observability solution in a different location or region that is still close enough to your production services to keep latency very low. Most cloud providers operate regions in close proximity to one another, so it is easy to find a nearby one.
  3. Another option is to get a solution that gives you deployment flexibility. For example, there are a couple of observability solutions that allow you to deploy them in any cloud and observe your production systems from anywhere, whether in the cloud or on-premises.
  4. You can also consider sending the monitoring data from your instrumentation to two observability locations, although that will cost you slightly more; alternatively, ask what the vendor’s business continuity/disaster recovery plans are. While some think the costs of dual-shipping might be much higher, I disagree, for a couple of reasons. First, monitoring is mainly time-series metric data, so the volume and the cost to transport it are not as high as for logs or traces. Second, unless your observability provider is VPC-peered with you, chances are your data is already routed over the internet even though you are both hosted on the same cloud provider, so there is not much additional cost. Having observability data about your production system at all times is critical during outages. (A rough sketch of dual-shipping metrics follows this list.)
  5. A very commonly overlooked consideration is monitoring your full-stack observability system itself. While it is preferable to have a monitoring instance in every region where your production services run, that may not be feasible because of cost or manageability. In such cases, monitor the monitoring system. You could do synthetic monitoring by checking the monitoring API endpoints (or by sending test data and verifying it was ingested) to make sure that your monitoring system is properly watching your production system. Better yet, find a monitoring vendor that will do this on your behalf. (A sketch of such a check also follows this list.)
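
To illustrate option 4, here is a minimal sketch of dual-shipping metrics using the OpenTelemetry Python SDK, with one exporter pointed at each observability backend. The endpoint URLs, service name and metric name are placeholders I have chosen for illustration, and the same fan-out could just as easily be configured in an OpenTelemetry Collector instead of in application code.

```python
# Sketch only: assumes the opentelemetry-sdk and OTLP/HTTP exporter packages
# are installed; all endpoints and names below are placeholders.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.http.metric_exporter import OTLPMetricExporter

# One reader/exporter pair per observability backend, e.g., your primary SaaS
# vendor plus a backup running in another region or on another cloud.
readers = [
    PeriodicExportingMetricReader(
        OTLPMetricExporter(endpoint="https://primary-obs.example.com/v1/metrics")
    ),
    PeriodicExportingMetricReader(
        OTLPMetricExporter(endpoint="https://backup-obs.example.com/v1/metrics")
    ),
]

# Every metric recorded through this provider is exported to both backends.
metrics.set_meter_provider(MeterProvider(metric_readers=readers))
meter = metrics.get_meter("checkout-service")

requests_total = meter.create_counter("requests_total")
requests_total.add(1, {"route": "/checkout"})
```

Because both readers hang off the same meter provider, the application code does not change at all; only the export configuration does, which is why the incremental cost is mostly data transfer rather than engineering effort.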
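
And to illustrate option 5, here is a minimal sketch of a synthetic check that watches the watcher. The health, ingest, query and alert-webhook URLs are hypothetical placeholders; substitute your vendor’s actual API. The important design point is that this check, and the alert channel it pages through, run somewhere that does not share fate with your monitoring vendor or your production region.

```python
"""Minimal synthetic check of the observability platform itself.

Assumptions (not from the article): the monitoring vendor exposes a health
endpoint plus ingest/query APIs; all URLs below are placeholders.
Run this from a location outside the region that hosts the vendor.
"""
import time
import uuid
import requests

HEALTH_URL = "https://monitoring.example.com/api/v1/health"   # hypothetical
INGEST_URL = "https://monitoring.example.com/api/v1/ingest"   # hypothetical
QUERY_URL = "https://monitoring.example.com/api/v1/query"     # hypothetical
ALERT_WEBHOOK = "https://alerts.example.com/oncall"           # hypothetical secondary channel


def alert(message: str) -> None:
    # Page the on-call team through a channel that does NOT depend on the
    # monitoring vendor being up.
    requests.post(ALERT_WEBHOOK, json={"text": message}, timeout=5)


def check_monitoring_system() -> None:
    # 1. Is the vendor's API even reachable?
    try:
        resp = requests.get(HEALTH_URL, timeout=10)
        resp.raise_for_status()
    except requests.RequestException as exc:
        alert(f"Observability platform health check failed: {exc}")
        return

    # 2. Round-trip test: push a uniquely tagged test data point, wait for
    #    ingestion, then query it back to confirm the pipeline works end to end.
    marker = str(uuid.uuid4())
    requests.post(INGEST_URL, json={"metric": "synthetic.heartbeat", "tag": marker}, timeout=10)
    time.sleep(30)
    found = requests.get(QUERY_URL, params={"tag": marker}, timeout=10)
    if marker not in found.text:
        alert("Observability platform accepted data but did not return it on query.")


if __name__ == "__main__":
    check_monitoring_system()
```

Run it on a schedule (cron, or a small function hosted in a different cloud) so that a silent failure of the observability platform still pages someone.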

When/if you get the dreaded 2 a.m. call, what is your plan of action? Just think it through thoroughly before it happens and have a playbook ready, so you won’t have to panic in a crisis.

Andy Thurai

Andy Thurai is an accomplished IT executive, strategist, advisor, and evangelist with 25-plus years of experience in executive, technical, and architectural leadership positions at companies such as IBM, Intel, BMC, Nortel, and Oracle. He also advises many start-ups. He has been a keynote speaker at major conferences and served as host for many webcasts, podcasts, webinars, and video chats. Andy has written more than 100 articles on emerging technology topics for publications such as Forbes, The New Stack, AI World, VentureBeat, DevOps.com, APM Digest, and Wired magazine. Andy’s topics of interest and expertise include AIOps, ITOps, observability, artificial intelligence, machine learning, cloud, edge, and other enterprise software. His strength is selling technology to the CxO audience with a value proposition rather than a technology pitch. You can find more details and samples of Andy’s work on his website at www.thefieldcto.com.
