Multi-cloud environments can be a double-edged sword. On the one hand, there are clear benefits to using whatever cloud or tool is best for your particular needs, regardless of vendor – in theory. In practice, however, the more choices you have, the more complex and expensive your environment becomes. In fact, Andreessen Horowitz argues that multi-cloud architectures often cost enterprises more than if they had stayed with on-premises systems.
Enterprises need a multi-cloud environment to compete in a rapidly expanding digital economy. For most companies, completely repatriating to on-prem isn’t an option–that train has already left the station. But what do you do when the costs of staying competitive become increasingly prohibitive, ironically making you less competitive?
Hybrid Clouds Became Multi-Cloud
When hybrid cloud–a mix of public cloud services and on-prem systems–became table stakes for enterprises, companies only needed one public cloud service provider. Most put all of their eggs in the AWS basket, splitting their workloads between AWS public cloud services and on-premises systems.
This setup provided more agility and flexibility to enterprises and lowered costs by reducing dependence on expensive, cumbersome on-prem hardware. A lot of capex was replaced by opex; with cloud services, you only pay for what you use as you use it without the physical infrastructure and maintenance costs.
Inevitably, the cost and efficiency benefits of the cloud drove companies to migrate more and larger workloads from their on-premises systems, increasing cloud costs over time. And the more workloads you had with a single vendor, the more beholden to that vendor you became. If AWS raised their prices, you had no alternative but to pay more.
Soon, hybrid clouds grew into multi-clouds, with other public cloud providers such as Azure and GCP in the mix. Sometimes this was by choice–why be locked into one vendor and suite of tools when you could choose from multiple vendors that might offer better pricing or have some products that are better suited to your needs?
More often, though, it was involuntary–developers spinning up their own clouds and resources for their own purposes without consulting IT, creating shadow IT. Or companies acquiring other organizations whose cloud choices varied. Either way, the enterprise is using multiple clouds whether they planned to or not.
Regardless of how it happened, once more clouds were introduced, complexity multiplied.
The Multi-Cloud Hydra
Too many organizations underestimate the complexity they’re introducing with each additional cloud. Operating systems, networking, cost monitoring and tracking, workload methodologies and data aggregation are all different for each vendor. How many people in your organization know what all these differences are and how to navigate between them? How do you effectively manage, govern and monitor usage, costs, security and compliance when each public cloud does it differently? You have to manually collect data from each cloud and aggregate it in spreadsheets for analysis. How accurate can that be, how long does it take and at what cost in terms of people and time?
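To make the aggregation problem concrete, here is a minimal sketch of normalizing and rolling up spend across providers. The record layout, field names and dollar figures are hypothetical–in practice, each provider’s billing export uses different schemas, which is precisely the pain point described above.

```python
from collections import defaultdict

# Hypothetical normalized cost records; real billing exports from each
# provider name and group these fields differently.
raw_costs = [
    {"provider": "aws",   "service": "EC2",      "team": "payments",  "usd": 1200.0},
    {"provider": "azure", "service": "VMs",      "team": "payments",  "usd": 800.0},
    {"provider": "gcp",   "service": "BigQuery", "team": "analytics", "usd": 450.0},
    {"provider": "aws",   "service": "S3",       "team": "analytics", "usd": 150.0},
]

def aggregate_by(records, key):
    """Sum spend across all clouds under one dimension (team, provider, ...)."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key]] += rec["usd"]
    return dict(totals)

per_team = aggregate_by(raw_costs, "team")
print(per_team)  # {'payments': 2000.0, 'analytics': 600.0}
```

Even this toy version shows why spreadsheets break down: the hard part is not the summation but getting every provider’s data into one consistent shape first.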
Furthermore, how will you be able to accurately pinpoint waste and inefficiency? If you’re chasing down engineers to turn off resources or having to choose between one manual cost-efficiency action versus another, the waste will add up and your budget will be inaccurate.
As if that’s not chaotic and complicated enough, development teams spread out across the enterprise adopt different tools and use them for their own purposes, without any alignment or coordination. The same tool is purchased multiple times. Best practices aren’t shared between teams. Everything needs to be custom-coded, creating layers of “spaghetti code” that blur visibility and are ripe for misconfigurations, which result in security vulnerabilities. And let’s not forget shadow IT, which IT departments don’t even know about, let alone monitor.
You started your journey with a unified cloud strategy, but now you’re struggling with an ever-growing number of different clouds and strategies for different groups. Each group prefers different tools, clouds, methods and processes, and this has left you stuck with siloed teams, unending tool sprawl, infinite lines of custom code, worsening security risks and no clear visibility. And the meter keeps spinning, costing you millions.
So, what do you do now?
Taming the Chaos
To survive and thrive in the new cloud order, enterprises can’t just write off exploding cloud costs as “the cost of innovation,” or otherwise justify their cloud chaos as par for the course. Too many are doing that now, and it won’t end well, as we’re likely headed into a recession amid growing uncertainty and turbulence in markets.
First and foremost, taming cloud chaos and getting multi-cloud costs under control requires visibility and interoperability. You must be able to view, monitor, manage and analyze all clouds from one place; manually collecting that data with spreadsheets won’t do anymore.
Development teams can no longer be siloed off from each other. There should be a single repository where all teams can access and utilize all tools, scripts, and previously coded integrations. When everything is viewable and available in a single place, no one has to buy the same tool twice or reinvent the wheel when using it. Once a tool has been integrated, any other team that wants to use that tool can simply re-use the integration, configuration, and policies that have already been created and vetted. If multiple groups are doing the same thing, an approved automation process can be created and shared.
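The shared-repository idea above can be sketched as a simple integration registry. All names and structures here are illustrative assumptions, not any particular product’s API–the point is only that a second team looks up a vetted integration instead of rebuilding it.

```python
# Minimal sketch of a shared integration registry: once one team has
# integrated and vetted a tool, other teams re-use that entry rather
# than buying or configuring the tool again.
registry = {}

def register(tool, config, policies):
    """Record a vetted tool integration, with its config and policies."""
    registry[tool] = {"config": config, "policies": policies, "vetted": True}

def reuse(tool):
    """Fetch an existing integration; fail loudly if none has been approved."""
    entry = registry.get(tool)
    if entry is None:
        raise KeyError(f"{tool} has no vetted integration yet")
    return entry

# One team registers the integration once...
register("terraform", config={"backend": "s3"}, policies=["tag-enforcement"])
# ...and any other team inherits the vetted config and policies.
print(reuse("terraform")["policies"])  # ['tag-enforcement']
```

In a real enterprise this registry would live in a shared service or internal catalog rather than an in-memory dict, but the reuse pattern is the same.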
True cost optimization also requires organizational awareness and shared responsibility. If people don’t know they’re wasting resources, how will they be able to stop? Enterprises need a system that detects waste and cost overruns and notifies stakeholders accordingly.
For example, a development environment may be up and running even though it’s no longer being used. An automatic alert could be sent to the appropriate team leader, reminding them to shut it down. A developer provisioning cloud resources might not choose the most cost-effective option, but the alert system would flag their activity and notify them of a cheaper alternative. Cloud resources left on overnight or on weekends would also be detected and shut down. Commitment purchases would be made automatically every day, no longer requiring individuals to remember to do them manually.
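The idle-environment alert described above might look something like this in miniature. The resource inventory, the seven-day idle threshold and the notification stub are all assumptions for illustration; real data would come from each provider’s APIs and the alert would go to email, Slack or a ticketing system.

```python
from datetime import datetime, timedelta, timezone

# Assumption for this sketch: a week without use counts as idle.
IDLE_THRESHOLD = timedelta(days=7)

# Hypothetical inventory; in practice this is pulled from each cloud's API.
resources = [
    {"id": "dev-env-42", "owner": "team-web",
     "last_used": datetime.now(timezone.utc) - timedelta(days=30)},
    {"id": "prod-db-1", "owner": "team-data",
     "last_used": datetime.now(timezone.utc) - timedelta(hours=2)},
]

def find_idle(resources, now=None):
    """Return every resource unused for longer than the idle threshold."""
    now = now or datetime.now(timezone.utc)
    return [r for r in resources if now - r["last_used"] > IDLE_THRESHOLD]

def notify(resource):
    # Stand-in for a real notification (email, Slack, webhook, ticket).
    print(f"ALERT: {resource['id']} looks idle; notifying {resource['owner']}")

for r in find_idle(resources):
    notify(r)  # flags dev-env-42 but leaves the active database alone
```

The same detect-and-notify loop generalizes to the other cases in the paragraph: nights-and-weekends schedules and cheaper-alternative suggestions are just different predicates feeding the same alert path.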
There’s just no going back to the traditional whack-a-mole style of cloud management and cost control. Enterprises that understand this and create a holistic framework for cloud management capabilities based on simplicity, visibility, unity and awareness will be in a much better position to navigate the choppy economic waters to come.