DevOps in the Cloud

Why Integrated Infrastructure Is the Key to IT Success

At first glance, your IT infrastructure might seem like your house’s basement. It supports everything, but you don’t pay much attention to it or spend much time or energy trying to make it beautiful. Instead, you treat it as a generic component and lavish your attention on other parts of your IT landscape (or your house, as it were).

But the fact is that infrastructure is not so simple and generic. The way you design your infrastructure, as well as your ability to integrate it with the ever-changing set of tools that you need to keep workloads running efficiently, plays a critical role in defining overall business success.

To prove this point and encourage you to give your infrastructure some much-needed love, keep reading for an overview of why well-integrated infrastructure is so crucial, and why building infrastructure that integrates seamlessly with the rest of your solution stack should be treated as a best practice.

What Is Integrated Infrastructure?

By integrated infrastructure, we mean infrastructure that seamlessly connects with the entire ecosystem of tools you use to deploy and manage applications.

To put that into context, consider some examples of how infrastructure interfaces with tools:

  • Your CI/CD pipeline needs to be able to integrate effectively with the infrastructure that hosts CI servers, testing tools and build tools.
  • Release automation software must be able to deploy seamlessly to the infrastructure that will host live applications.
  • Monitoring and performance optimization tools need to work easily with whichever infrastructure hosts your workloads, since monitoring the underlying infrastructure is one critical component of application monitoring.
  • Security and access-control tools must be able to apply security policies and restrictions to host infrastructure to help keep applications secure.

The list could go on, but you get the idea: an integrated infrastructure is one that can easily interface with all of the tools you use to create, deploy, manage and secure the workloads you host on that infrastructure.
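
To make the security and access-control example above concrete, here is a minimal sketch in Python using boto3 of the kind of glue code such a tool runs against host infrastructure. It assumes AWS credentials are already configured, and the required “owner” tag is a hypothetical policy of our own, not anything AWS prescribes.

    import boto3

    REQUIRED_TAG = "owner"  # hypothetical policy: every instance must declare an owner

    def find_untagged_instances(region="us-east-1"):
        """Return the IDs of EC2 instances missing the required tag."""
        ec2 = boto3.client("ec2", region_name=region)
        missing = []
        # Paginate so the check covers every instance in the region.
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tag_keys = {tag["Key"] for tag in instance.get("Tags", [])}
                    if REQUIRED_TAG not in tag_keys:
                        missing.append(instance["InstanceId"])
        return missing

    if __name__ == "__main__":
        print("Instances missing the required tag:", find_untagged_instances())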

Agnostic Infrastructure Versus Integrated Infrastructure

You may be thinking, “It’s 2019, and most infrastructure is agnostic. My tools don’t care which cloud I use to host my workloads.”

You’d be right that most modern, cloud-native infrastructures are designed to be agnostic toward the tools used with them. Although there are certainly exceptions, the majority of infrastructure solutions available today, whether they come from a public cloud vendor or are built locally using standard operating systems and hypervisors, can be configured to work with pretty much any mainstream tool you throw at them.

But agnostic infrastructure is not the same as integrated infrastructure. An integrated infrastructure is one in which the connections between your tools and your infrastructure are not only possible to create, but happen seamlessly.

This difference is easiest to see if you compare public cloud vendors’ native tools with tools from third-party vendors that work with those clouds, but don’t always integrate seamlessly with them. For example, pretty much every modern APM tool can be configured to work with AWS. But setting those tools up takes time, and there is no guarantee they will stay compatible over time.

In contrast, there is CloudWatch, AWS’s native monitoring tool. It falls far short of being a complete APM solution, but it is seamlessly integrated with any AWS infrastructure. You don’t need to configure or install anything to use it, and it scales automatically with your AWS footprint.
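
As a rough illustration of what “seamlessly integrated” means here, the sketch below (Python with boto3, assuming configured AWS credentials and a placeholder instance ID) pulls the CPU metrics CloudWatch collects for any EC2 instance by default, with nothing installed on the instance itself.

    import boto3
    from datetime import datetime, timedelta, timezone

    def average_cpu(instance_id, hours=1, region="us-east-1"):
        """Fetch the CPU utilization datapoints CloudWatch records by default."""
        cloudwatch = boto3.client("cloudwatch", region_name=region)
        end = datetime.now(timezone.utc)
        response = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=end - timedelta(hours=hours),
            EndTime=end,
            Period=300,  # basic monitoring reports at five-minute resolution
            Statistics=["Average"],
        )
        return [point["Average"] for point in response["Datapoints"]]

    if __name__ == "__main__":
        print(average_cpu("i-0123456789abcdef0"))  # placeholder instance ID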

Achieving Infrastructure Integration

My point here is not that you need to choose between a feature-limited native toolset and a third-party one that offers more features but does not integrate as well. Instead, you should strive to build an infrastructure and a toolset that offers the best of both worlds: rich features and tight integration.

The best way to do that is to take a toolset-first approach. By that, I mean identifying which tools you need and designing your infrastructure around them. That is preferable to the strategy most organizations use by default, which is to choose an infrastructure (usually a cloud provider) and then figure out how to reconcile their toolset with it. Not only does that infrastructure-first approach limit your choices and inflate your costs (because you may end up having to use native tools from your cloud vendor that are less cost-effective than third-party alternatives), but it also makes integration between your tools and your infrastructure more difficult.
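
If it helps to picture the toolset-first approach as a concrete exercise, the toy sketch below scores hypothetical infrastructure candidates by how well they integrate with the tools you already depend on. The candidates, tool categories and scores are illustrative placeholders, not recommendations.

    # Illustrative integration scores (0 = poor, 2 = seamless) for hypothetical
    # infrastructure candidates; replace with your own tools and assessments.
    INTEGRATION_SCORES = {
        "public_cloud_a": {"ci_cd": 2, "apm": 1, "release_automation": 2, "security": 2},
        "public_cloud_b": {"ci_cd": 2, "apm": 2, "release_automation": 1, "security": 1},
        "on_prem":        {"ci_cd": 1, "apm": 2, "release_automation": 2, "security": 2},
    }

    def rank_candidates(scores):
        """Rank infrastructure candidates by their total integration score."""
        totals = {name: sum(per_tool.values()) for name, per_tool in scores.items()}
        return sorted(totals.items(), key=lambda item: item[1], reverse=True)

    for name, total in rank_candidates(INTEGRATION_SCORES):
        print(f"{name}: {total}")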

Before you go and migrate all of your workloads to one public cloud or another, identify the tools you depend on and evaluate how well each of them integrates with the clouds (or other infrastructure solutions) available to you. You may find that a multi-cloud or hybrid cloud strategy achieves better integration with your toolset. That approach makes your infrastructure architecture more complex, but the tighter integration it enables delivers more value in the long run.

Integrate for Today and the Future

Keep in mind, too, that the tools you use today can, and probably will, change in the future. Thus, you want to ensure that your strategy enables tight infrastructure integration over the long term.

Here again, a multi-cloud or hybrid cloud strategy is probably the best way to maximize flexibility and ease of integration, although you’ll need to assess your specific needs by looking at your tools first, and then plan an infrastructure strategy that will accommodate them as they evolve and grow.

Conclusion

The bottom line is this: Although infrastructure might seem like a mundane and largely interchangeable part of your IT landscape, the extent to which it is well integrated with your tools can make or break the overall effectiveness of your IT strategy. Don’t choose infrastructure first and then shoehorn your tools to fit it. Instead, design your infrastructure architecture around the tools you need, and make sure it can continue to accommodate them as your strategy evolves.

This sponsored article was written on behalf of Eplexity.

Chris Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security. He is currently Senior Editor and DevOps Analyst with Fixate.io and Sweetcode.io.
