Hyperscale and automation seem to go hand-in-hand, at least when it comes to the data center. Synergy Research reported that, at the end of 2017, there were 390 web-scale data centers worldwide, up from 300 a year earlier. IDC noted that hyperscale data centers “have a minimum of 5,000 servers and are at least 10,000 sq ft in size but generally much larger.”
Today, hyperscale data centers are driving a sense of urgency for automation, but many smaller corporate data centers are just as complex, even if they are less physically expansive. In either case, automation takes data from across the data center, applies analytics and machine learning to that data, and helps identify both inefficiencies and trouble spots, as well as their origins.
Some data center functions are well on their way to becoming automated, making new deployments and changes to existing deployments simpler, faster and more efficient than they are today. Among those functions, orchestration, change management, software-defined architectures and DevOps stand out.
Meanwhile, another data center function is lagging: problem resolution. The challenge stems from the siloed nature of the data center, where the multiple, disparate tools and methods used to sleuth out inefficiencies and pinpoint trouble spots lack a unified view of the data center infrastructure. In addition, silo-centric infrastructure monitoring tools lack application context. Without automation, today’s data centers are stuck in the past: as a data center grows and must interact with cloud-based services, no amount of human intuition in using those tools and methods can overcome the barriers of siloed systems.
However, data center infrastructure planners are making strides in identifying the requirements for accelerating the move to the automated data center.
Three Keys to the Automated Data Center
For years, the automated data center was just a dream. And as recently as late 2018, Forrester spoke of islands of automation.
“For the enterprises I’ve talked to, it’s not like any of them are completely manual at this stage, but they’re all at varying degrees of trying to integrate all these pieces,” said Chris Gardner, Forrester senior analyst. He added that fragmentation was due to a lack of holistic solutions that could connect all aspects of the data center.
But now, three forces are shaping up to help make the automated data center a reality. Let’s look further.
Real-Time Infrastructure Visibility

Real-time infrastructure visibility has become a critical element of automating the data center. Why? Because an issue can occur anywhere on the virtualized, hybrid infrastructure and ripple outward to create problems elsewhere, making issue resolution a perennial challenge.
For that reason, your best defense is to build deep, real-time visibility into the compute/virtualization stack and the storage/storage network stack. This will help you quickly ensure the performance of mission-critical applications accessing the data from those stacks.
For ITOps, it is about proactively pinpointing the source of the issue and resolving it before application users are impacted. That requires stitching data across the full stack of technologies supporting the business: application services and infrastructure services, both on-premises and in the cloud.
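A minimal sketch of what stitching data across silos can look like: joining application-level symptoms to infrastructure events by shared host and time window. The event records, field names, and window value below are all hypothetical illustrations, not a specific product’s schema.

```python
from datetime import datetime, timedelta

# Hypothetical event records from two silos: an application
# monitoring tool and an infrastructure monitor.
app_events = [
    {"time": datetime(2024, 1, 1, 10, 0, 5), "service": "checkout",
     "host": "db-01", "msg": "query latency spike"},
]
infra_events = [
    {"time": datetime(2024, 1, 1, 10, 0, 2), "host": "db-01",
     "msg": "storage array cache disabled"},
]

def correlate(app_events, infra_events, window=timedelta(seconds=30)):
    """Pair each application-level symptom with infrastructure events
    on the same host within a short time window."""
    pairs = []
    for app in app_events:
        for infra in infra_events:
            if (infra["host"] == app["host"]
                    and abs(app["time"] - infra["time"]) <= window):
                pairs.append((app["msg"], infra["msg"]))
    return pairs

print(correlate(app_events, infra_events))
# [('query latency spike', 'storage array cache disabled')]
```

The join key here (host plus time proximity) is the simplest possible one; real full-stack correlation would also use service topology and request traces.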
Real-Time Analytics

Why is real-time analytics crucial in the data center? In short, because real-time responsiveness underpins anomaly detection, root cause analysis, remediation, prevention and planning, all of which depend on processing and correlating massive amounts of information.
For ITOps, it means distributed real-time architectures must combine cloud-based and on-premises analytics that leverage AI and machine learning. It’s the analytics and correlation that help you make sense of all that real-time data so it becomes actionable.
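One simple form of the anomaly detection mentioned above is a rolling statistical baseline over a metric stream. This is a minimal sketch, not any vendor’s algorithm; the window size, threshold, and the latency figures are illustrative assumptions.

```python
from collections import deque
import statistics

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag the indices of samples that deviate more than `threshold`
    standard deviations from the rolling baseline of the previous
    `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) >= window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                anomalies.append(i)
        history.append(value)  # anomaly joins the baseline afterward
    return anomalies

# Steady latency around 20 ms, with a spike injected at index 30
latencies = [20.0 + (i % 3) * 0.5 for i in range(40)]
latencies[30] = 95.0
print(detect_anomalies(latencies))  # [30]
```

A production system would run this kind of check continuously per metric and feed the flagged points into correlation and root cause analysis rather than acting on any single signal.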
Closing the Loop

IT organizations are acutely aware of the perils of introducing changes to critical production environments. In fact, they take great pains to establish governance over change. Closing the loop is about humans delegating decision-making to technology.
For ITOps, that means ensuring integrated control of change processes across the entire technology stack. Data and analytics should lead to optimal real-time decision-making that empowers the automation for closed-loop operation.
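Closed-loop operation with governance guardrails can be sketched as a runbook lookup that auto-executes only low-impact changes and escalates the rest to a human. Everything here (the alert names, action names, and the blast-radius metric) is a hypothetical illustration of the pattern, not a real tool’s API.

```python
# Hypothetical runbook: each known alert maps to a remediation action
# and a rough "blast radius" (number of systems the change touches).
RUNBOOK = {
    "high_memory": {"action": "restart_service", "blast_radius": 1},
    "disk_full":   {"action": "expand_volume",   "blast_radius": 1},
    "switch_down": {"action": "failover_fabric", "blast_radius": 50},
}

def remediate(alert, max_auto_radius=5):
    """Apply the runbook action automatically only when the change is
    small; escalate larger changes to a human, per change governance."""
    entry = RUNBOOK.get(alert)
    if entry is None:
        return ("escalate", alert)           # unknown issue: page a human
    if entry["blast_radius"] > max_auto_radius:
        return ("approve", entry["action"])  # needs human sign-off first
    return ("execute", entry["action"])      # safe to close the loop

print(remediate("high_memory"))  # ('execute', 'restart_service')
print(remediate("switch_down"))  # ('approve', 'failover_fabric')
print(remediate("mystery"))      # ('escalate', 'mystery')
```

The design choice to express governance as data (the threshold and the runbook) rather than code is what lets humans set high-level policy while the loop runs itself.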
How Automated Is Automated?
In 2016, researchers at the University of Pisa argued that hyperscale data centers would have to rely completely on analytical, automated systems to have any chance of managing the huge number of day-to-day issues their servers generate. They added that “large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations.”
Now, more than three years later, critical technologies and methodologies are in place to make the automated data center a reality, even if humans are still required for guidance, if not for sleuthing out infrastructure issues.