One of the biggest challenges in DevOps today is a lack of visibility—especially shared visibility—into the application delivery process and its outcomes. A common data fabric gives all stakeholders shared visibility into DevOps processes, so they can collaborate and communicate more easily, make decisions based on objective data, and deliver better code faster.
The DevOps Visibility Challenge
A typical DevOps process relies on a loosely coupled tool chain: separate tools for discrete functions such as project management, source code control, provisioning and configuration, test execution and workflow automation. The tool chain approach adds a great deal of value, but it becomes complex quickly and makes it difficult for stakeholders to get accurate insight into application delivery.
The challenge is especially acute in large, distributed enterprises. It is even more pronounced when thousands of developers and dozens of data centers span multiple geographies, and duplicate tools, specialized teams, varying processes, distributed locations and multiple business units compound the problem.
As a result of this poor visibility into application delivery, stakeholders from business and IT:
- Cannot manage the velocity of feature delivery. They cannot detect and fix bottlenecks in the process, allocate and prioritize resources, plan for downstream activities, or triage slowdowns and stoppages in the software development life cycle.
- Cannot make objective decisions on quality. They risk releasing error-ridden software, slowing down response times, introducing security vulnerabilities, oversaturating infrastructure, damaging reputation and adding unnecessary costs.
- Cannot properly collaborate on detection, diagnosis and remediation of production problems. They also cannot share an understanding of the impact of code changes on business key performance indicators. Customer satisfaction, customer signups, cart fulfillment rates and revenue all could be affected.
Data Mining Enables Visibility
To overcome these obstacles, successful DevOps teams are using objective metrics mined from machine data. That helps them gain insight into the end-to-end delivery cycle, from idea to release.
Collecting, correlating and analyzing machine data from multiple sources gives dev, ops and other stakeholders continuous insight into the velocity, quality and business impact of application delivery. It gives individual contributors the data they need to be responsible for their own work, accountable to each other and aligned with business objectives, and it lets managers make data-driven decisions about workload prioritization, resourcing, scheduling, performance and value delivery.
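As a concrete illustration, the minimal sketch below correlates two hypothetical machine-data feeds—commit events exported from source control and deployment events from a CI/CD pipeline—to measure commit-to-deploy lead time. The file names, field names and JSON layout are assumptions for illustration, not the schema of any particular tool.

```python
import json
from datetime import datetime
from statistics import median

# Hypothetical exports: one JSON object per line, each with an
# ISO-8601 timestamp and the commit SHA the event refers to.
def load_events(path):
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

commits = load_events("commits.jsonl")      # e.g. {"sha": "...", "timestamp": "..."}
deploys = load_events("deployments.jsonl")  # e.g. {"sha": "...", "timestamp": "..."}

committed_at = {c["sha"]: datetime.fromisoformat(c["timestamp"]) for c in commits}

# Correlate the two feeds on the commit SHA and measure lead time.
lead_times_hours = []
for d in deploys:
    sha = d["sha"]
    if sha in committed_at:
        delta = datetime.fromisoformat(d["timestamp"]) - committed_at[sha]
        lead_times_hours.append(delta.total_seconds() / 3600)

if lead_times_hours:
    print(f"changes deployed: {len(lead_times_hours)}")
    print(f"median lead time: {median(lead_times_hours):.1f} h")
```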
How to Get to Data-Driven DevOps
Some of the key actions you can take to start on a journey of data-driven DevOps include:
Measure your speed of application delivery: The machine data generated by your DevOps tool chain provides a wealth of information about all phases of the development cycle. It shows what is in development, close to release, in backlog and slowing you down. It shows which stages in the process are failing and which teams are most efficient. Consider mining and sharing velocity-related measures such as commit rates, sprint times and “idea to cash” time, to name a few.
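For example, one simple velocity measure—weekly commit rate—can be pulled straight from a repository’s history. This is a minimal sketch that shells out to git log; the 90-day window is an arbitrary assumption, and a real tool chain would pull the same data from the source control tool’s API across many repositories.

```python
import subprocess
from collections import Counter
from datetime import datetime

# Count commits per ISO week from the local repository's history.
# Assumes the script runs inside a git working copy.
log = subprocess.run(
    ["git", "log", "--since=90 days ago", "--pretty=format:%cI"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

commits_per_week = Counter()
for stamp in log:
    year, week, _ = datetime.fromisoformat(stamp).isocalendar()
    commits_per_week[f"{year}-W{week:02d}"] += 1

for week in sorted(commits_per_week):
    print(f"{week}: {commits_per_week[week]} commits")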
Measure the quality of your application code: Even if you are already using testing tools to understand release quality, there is more to quality than test coverage. Again, choosing the right KPIs is critical, as is sharing those measurements throughout your team(s). Some other key metrics may include the number and importance of defects, defect variance from release to release (or from team to team) and build/integration/deployment failures.
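As an illustration, the snippet below takes hypothetical per-release defect counts and per-build outcomes (the values are made-up sample data, not output from any specific tool) and derives two of the metrics mentioned above: defect variance across releases and the build failure rate.

```python
from statistics import mean, pstdev

# Hypothetical sample data; in practice these would be mined from
# your issue tracker and CI system.
defects_per_release = {"1.4.0": 12, "1.5.0": 9, "1.6.0": 21, "1.7.0": 8}
build_results = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "pass"]

counts = list(defects_per_release.values())
print(f"mean defects per release: {mean(counts):.1f}")
print(f"defect variation across releases (std dev): {pstdev(counts):.1f}")

failure_rate = build_results.count("fail") / len(build_results)
print(f"build failure rate: {failure_rate:.0%}")
```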
Measure the business impact of application code: All too often, organizations measure activity but fail to align application delivery with business goals. As one customer recently told me, “Shipping crappy releases faster won’t help.” Business stakeholders especially are looking for business KPIs such as the following (a brief sketch of deriving a couple of them from raw events appears after the list):
- user signups or cancellations
- customer satisfaction
- cart fulfillment or abandonment
- time on site
- transaction failure rate
- sales volume
- application revenue
- app downloads
- offer open rates
- customer acquisition costs
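The sketch below shows how a couple of these KPIs—cart abandonment and transaction failure rate—might be derived from raw application events. The event stream, event names and fields are invented for illustration; a real implementation would map them to your own application’s logs or analytics events.

```python
# Hypothetical application events, e.g. as mined from web/app logs.
events = [
    {"session": "a1", "type": "cart_created"},
    {"session": "a1", "type": "checkout_completed"},
    {"session": "b2", "type": "cart_created"},
    {"session": "c3", "type": "cart_created"},
    {"session": "c3", "type": "checkout_failed"},
]

carts = {e["session"] for e in events if e["type"] == "cart_created"}
completed = {e["session"] for e in events if e["type"] == "checkout_completed"}
failed = {e["session"] for e in events if e["type"] == "checkout_failed"}

abandonment = 1 - len(completed) / len(carts)
failure_rate = len(failed) / (len(completed) + len(failed))

print(f"cart abandonment rate: {abandonment:.0%}")
print(f"transaction failure rate: {failure_rate:.0%}")
```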
Measure the human impact of DevOps: DevOps is mainly about people, so you must find ways to measure culture change, too. You can mine various applications for objective metrics such as the number of sick days, “work from home” days and variance in team and individual productivity. However, machine data will not provide all the relevant KPIs; you will also need subjective measures, such as those from “stay interviews,” feedback forms or surveys.
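As one small example, the snippet below computes the variation in individual throughput across a team and averages a set of survey scores; the numbers are fabricated sample data, and as noted above the survey side of the picture has to come from people rather than machine data.

```python
from statistics import mean, pstdev

# Hypothetical per-person items completed last sprint, mined from the
# project-management tool, plus 1-5 scores from a team survey.
items_completed = {"dev_a": 9, "dev_b": 11, "dev_c": 4, "dev_d": 10}
survey_scores = [4, 5, 3, 4, 4]

throughput = list(items_completed.values())
cv = pstdev(throughput) / mean(throughput)  # coefficient of variation
print(f"team throughput variation: {cv:.0%}")
print(f"average survey score: {mean(survey_scores):.1f} / 5")
```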
Data-Driven DevOps with a Common Data Fabric
A data-driven DevOps approach drives continuous improvement and higher levels of agility more effectively. It also fosters greater collaboration, improved security and compliance, and better alignment with business KPIs, all while enabling rapid iteration and innovation.
A common data fabric provides the objective measurement and shared visibility critical for a data-driven DevOps approach. With comprehensive and continuous visibility into key performance measures, provided through a shared data fabric, DevOps teams can:
- Isolate “waste,” detect and correct slowdowns, and deliver applications faster
- Correlate test and QA outcomes to find more problems sooner and improve code quality
- React faster to detect and address problems that do get through to production
- Use real-time insight to measure business impact and iterate faster on good change