Under pressure to innovate and deliver rapid updates to their applications, developers often treat monitoring as an afterthought. Yet in today's world, where a slowdown can translate to customer attrition and an outage can lead to millions of dollars in losses and negative press, ensuring satisfactory response time has become one of the most critical needs in the early stages of development and a common goal for DevOps success.
The Shift-Left of Monitoring: To study emerging patterns in how and when DevOps teams start monitoring, we interviewed several DevOps squads. Almost unanimously, the responses were:
- They care most about knowing: Is the application up and loading quickly? Are all critical user interactions with the application responding within acceptable standards?
- They would like to get this information proactively throughout the development life cycle so they can determine whether any code check-in impacted performance.
- They would like to monitor with minimal setup and configuration, and don't have the time to learn complex tools.
To satisfy these needs, synthetic monitoring becomes the starting point for monitoring.
With synthetic monitoring, DevOps teams can simulate user behavior and proactively detect performance problems, from around the world and around the clock, before those problems reach real users. They can maximize uptime and ensure good response times by pinging application URLs and APIs, and by running simulations to check that the most common paths users traverse are responding as expected.
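A minimal synthetic check of this kind can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular product's probe: fetch a URL, time the request, and classify the result against an expected status code and a response-time threshold (both thresholds here are assumed values).

```python
import time
import urllib.request

def check_url(url, timeout=10, expected_status=200, slow_threshold_ms=2000):
    """Run one synthetic availability check: fetch the URL, time the
    request, and classify the result (up/down, slow/fast)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            status = resp.status
    except Exception as exc:
        # Connection refused, DNS failure, timeout, HTTP error, etc.
        return {"up": False, "error": str(exc)}
    return {
        "up": status == expected_status,
        "status": status,
        "response_time_ms": round(elapsed_ms, 1),
        "slow": elapsed_ms > slow_threshold_ms,
    }
```

A real synthetic monitor would run such checks on a schedule from many geographic locations; this sketch shows only the core of a single probe.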
Some of the key capabilities of synthetic monitoring that are important to DevOps squads:
- Reports availability, response time and user satisfaction score (such as Apdex) for all web pages, URLs and critical user interactions from around the world, 24x7
- Runs synthetic tests as often as once per minute, or even more frequently, to test:
  - Webpage loads
  - REST API calls
- Runs simulated user interactions through scripts once every few minutes
- Requires minimal setup and maintenance, preferably as a SaaS solution with globally hosted probes; private probes are required for monitoring internal applications.
- Integrates into the DevOps life cycle as part of the delivery pipeline, so synthetic tests can run in dev/test, staging and then production.
- Provides granular controls on what tests to run, how often, from where and what to validate in the HTTP response code.
- Offers easy-to-use, preferably open-source scripting tools with a minimal learning curve. Often, automated regression test scripts used in development can be reused to run synthetic simulations in production.
- Alerts users of issues not only through their preferred notification channel, such as email or SMS, but also through their primary collaboration channel.
- Provides relevant alerts and avoids alert noise by appropriate alert filtering and retries on failure.
- Identifies whether an issue was caused by an application deployment or change by presenting an auto-correlated view of alerts, metrics and deployment activities.
- Isolates quickly whether the problem is in the application code or in one of its dependencies, such as a network latency issue or a third-party service slowdown.
- Diagnoses the exact step of failure by providing waterfall analysis of all page assets, pinpointing slow requests, broken links, large images, slow external API calls and more.
- Accelerates diagnosis with an automated browser screenshot of the failure.
- Compares two versions of applications to enable A/B testing.
- Provides daily, weekly and monthly scores and reports to ensure that target SLAs for uptime and user satisfaction are being met.
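The Apdex score mentioned above has a published formula: samples at or below a target threshold T count as satisfied, samples up to 4T count as tolerating (at half weight), and slower samples count as frustrated. A small sketch (the 500 ms target is an assumed example value):

```python
def apdex(response_times_ms, t_ms=500):
    """Compute the Apdex score for a list of response-time samples.
    Satisfied: <= T; Tolerating: <= 4T (half weight); Frustrated: > 4T."""
    if not response_times_ms:
        return None  # no samples, no score
    satisfied = sum(1 for r in response_times_ms if r <= t_ms)
    tolerating = sum(1 for r in response_times_ms if t_ms < r <= 4 * t_ms)
    return (satisfied + tolerating / 2) / len(response_times_ms)
```

For example, with T = 500 ms, the samples [100, 400, 600, 3000] give two satisfied, one tolerating and one frustrated request, for a score of (2 + 0.5) / 4 = 0.625.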
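The retry-on-failure idea behind alert filtering can be sketched as: re-run a failing check a couple of times, and only raise an alert if every attempt fails. This is a hypothetical helper for illustration, not a specific tool's API.

```python
import time

def should_alert(check, retries=2, delay_s=0):
    """Re-run a failing check up to `retries` extra times before alerting,
    so a single transient failure does not page anyone."""
    for attempt in range(retries + 1):
        if check():
            return False          # check passed: no alert
        if attempt < retries:
            time.sleep(delay_s)   # brief pause before retrying
    return True                   # failed every attempt: alert
```

In practice the delay between retries and the retry count would be part of the granular controls the monitoring tool exposes.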
Stay tuned for the next blog on DevOps and log analytics in this five-part blog series and don’t forget to register for an upcoming interactive webcast, “Learn Why We Must Shift APM Left in the DevOps Lifecycle.”
About the Author / Payal Chakravarty
Payal Chakravarty is offering leader for the Application Performance Management portfolio at IBM. She has nine years of experience in enterprise technology across product management, strategy, DevOps, engineering management and software development. She has defined and delivered software-as-a-service (SaaS) and on-premises software for Application Performance Management, Hybrid Cloud Monitoring, Data Center Management, and Operational Analytics. Payal has an MBA from Duke University and an MS in Computer Science from North Carolina State University. You can find her on Twitter, where she tweets about all things DevOps and cloud. Connect with Payal on LinkedIn / Twitter