One of my favorite quotes about artificial intelligence (AI) isn’t from a data scientist or tech industry analyst. It’s from a doctor.
When asked if AI would eventually replace radiologists, Dr. Curtis Langlotz of Stanford University pithily replied, “No, but radiologists who use AI will replace those who do not use it.”
This insight is highly applicable to IT. Yes, there are many ways we can use AI methods to fully automate a broad range of DevOps tasks. Companies, including CA Technologies, are actively developing solutions that enable customers to do exactly that.
Algorithmic machine learning, however, doesn’t just empower systems to perform tasks and solve problems autonomously. It also makes them great active partners with human beings. In fact, much of what machines learn they also wind up teaching.
The synergy between AI and human radiologists, for example, stems in part from the fact that digital systems can differentiate about 200 levels of gray in a diagnostic image—compared to only about 16 to 20 discernible by the human eye. Train an AI system with enough images, and that extra perceptual precision can flag subtle abnormalities a human reader might miss.
But for the most effective diagnostic process, you don’t just depend on that detection alone. You use that detection to empower a human diagnostician who can apply a broad understanding of pathologies and deep experience with the complexities of individual patients to deliver the highest quality care.
In DevOps, we can do the same. We can use AI to capture insights that teach us how to continuously optimize our workflows and processes. We can also apply what AI teaches us to push our work higher up the value chain.
More specifically, the synergy between AI and human intellect can:
Make development smarter. The speed, quality, and efficiency of development pipelines can be affected by all kinds of subtle factors. A less-than-optimally designed API, for example, can be a small but chronic stumbling block to everyone who has to use it. Scrum outcomes can be undermined by anything from a particular type of technical challenge to a nascent personality conflict.
By capturing a rich set of DevOps metrics and applying machine learning to those metrics, development leaders can discover process bottlenecks and skilling shortfalls. They can better coach individuals and promote team collaboration. The result: a better working environment that facilitates digital agility for the enterprise and higher satisfaction/retention for valuable employees.
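To make the metrics idea concrete, here is a minimal sketch of mining pipeline data for bottlenecks. The stage names, durations, and the median-based threshold are all illustrative assumptions, not a description of any particular product:

```python
from statistics import median

# Hypothetical per-build stage durations (minutes) captured from a CI
# pipeline; stage names and values are invented for illustration.
stage_durations = {
    "checkout": [2, 2, 3, 2, 2, 3, 2],
    "build":    [8, 9, 8, 10, 9, 8, 9],
    "test":     [15, 14, 40, 16, 15, 42, 15],
    "deploy":   [5, 5, 6, 5, 5, 6, 5],
}

def flag_bottlenecks(durations, factor=2.0):
    """Flag stages whose slowest run exceeds `factor` times the median run."""
    return {stage: max(runs)
            for stage, runs in durations.items()
            if max(runs) > factor * median(runs)}

print(flag_bottlenecks(stage_durations))  # the "test" stage stands out
```

A real system would learn thresholds from history rather than hard-code them, but even this crude pass shows how chronic outliers (a flaky test suite, a slow API) surface from the data.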
Make ops smarter. Enterprises are running increasingly volatile and complex workloads on increasingly hybridized infrastructure. At the same time, the tolerance of internal and external users for latency and outages continues to approach zero. There are also real costs associated with performance problems.
The elastic capacity of public and private cloud does much to help with workload volatility. But adding cloud capacity also has its costs—and end-to-end application performance often depends on back-end systems that are not cloud-based. So not every performance issue can be solved by simply throwing more capacity at it. Nor should it be, if rearchitecting can fix a bottleneck less expensively.
Here again, AI can teach us a lot. We can uncover opaque interdependencies in processing load and data throughput. We can spot conditions under which it makes business sense to throttle cloud spending that isn't delivering commensurate value. We can even better understand the real-world conditions—whether patterns in customer behaviors or our own marketing programs—that are driving our demand spikes and troughs. All of this helps us deliver consistently responsive digital experiences at a cost that makes good business sense.
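As a toy illustration of uncovering those interdependencies, a simple correlation pass over per-service load series can surface components whose demand moves in lockstep. The service names, traffic numbers, and the 0.9 threshold below are invented for the sketch:

```python
from math import sqrt

# Hypothetical hourly request rates for three services (illustrative values).
load = {
    "checkout_api":  [120, 180, 240, 300, 260, 200, 140],
    "inventory_db":  [118, 176, 238, 295, 258, 198, 142],
    "batch_reports": [300, 280, 90, 110, 100, 290, 310],
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Report service pairs whose load moves together: a hint of hidden coupling.
names = list(load)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = pearson(load[a], load[b])
        if abs(r) > 0.9:
            print(f"{a} <-> {b}: r = {r:.2f}")
```

Production systems would use far richer models, but the principle is the same: correlated signals point to dependencies that no one documented.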
Make security smarter. AI is already being broadly implemented in security solutions such as endpoint protection and threat response to automate the detection and neutralization of anomalous activities in the enterprise environment. But effective multi-layer security isn’t just about finding and stopping exploits. It’s also about building applications that are themselves inherently less vulnerable to hacking. This is the essence of DevSecOps.
AI has huge potential value here. We are writing a rapidly growing volume of increasingly sophisticated code. It is very easy for subtle vulnerabilities to hide in that code. As our development practices become more complex—often including multiple contractors—it becomes more difficult to understand exactly where and why these vulnerabilities were introduced into our code. Machine learning can teach us the answers to these questions, so we can more proactively secure our data and our businesses.
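One way to sketch the "where and why" question is to join scanner findings against blame data, so each vulnerability is attributed to the commit that introduced it; those attributions can then become training features for a model that predicts where new vulnerabilities are likely to appear. Everything below (file names, commit IDs, authors, findings) is hypothetical:

```python
from collections import defaultdict

# Hypothetical scanner findings: (file, line) -> issue description.
flagged_lines = {
    ("payments/api.py", 42): "SQL built by string concatenation",
    ("auth/session.py", 17): "hard-coded secret",
    ("payments/api.py", 88): "unvalidated redirect",
}

# Hypothetical git-blame-style map: which commit introduced each line.
blame = {
    ("payments/api.py", 42): {"commit": "a1b2c3", "author": "contractor-team-x"},
    ("payments/api.py", 88): {"commit": "a1b2c3", "author": "contractor-team-x"},
    ("auth/session.py", 17): {"commit": "d4e5f6", "author": "internal-dev"},
}

# Group findings by the commit that introduced them; recurring commits,
# authors, or components become signals for a learning model.
by_commit = defaultdict(list)
for location, issue in flagged_lines.items():
    by_commit[blame[location]["commit"]].append(issue)

for commit, issues in sorted(by_commit.items()):
    print(commit, "->", issues)
```

Even this trivial join answers "where": the same commit introduced two of the three findings, which is exactly the kind of pattern a model can learn from.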
It’s especially interesting to consider what may happen as we start to apply AI to DevSecOps across our organizations, as well as within them. More diverse inputs enable machine learning to discover more factors that impact code pipeline performance. By aggregating our knowledge about how we build, deliver, and secure our code, we are all likely to benefit from better practices and stronger guardrails.