There is still much debate surrounding the merits of Enterprise DevOps: some say that DevOps is great for highly scalable startups but not feasible for the enterprise, while others say that DevOps is mandatory for a company's survival. When enterprises evaluate the potential impact of DevOps adoption, they should consider some key points:
1. Enterprise DevOps Adoption: One Bite at a Time
How do you make sure your developers get instant feedback? How can you go back and forth between versions? How do you integrate your entire CI/CD pipeline? While CI/CD for dummies is a good place to start, the biggest problem is how quickly you get bogged down in the details. Whether you are participating in the NASA challenge to find all asteroid threats to the human population and save the planet, or you are at an enterprise figuring out the DevOps strategy – first things first – define the problem and break it down into actionable chunks that can be tackled in parallel or sequentially.
When an enterprise is trying to tackle the DevOps challenge, the common scenario is that lots of people have lots of interpretations of what it means to implement DevOps methodologies and how to do it. It's easy for projects to get stalled in the initial planning stages. The only way to take on this challenge is to break it down into smaller workstreams – for example, by defining the workflows in terms of infrastructure automation, build & test automation, and configuration management – and deciding which technologies are the best fit to solve the specific challenges in each stream. So, basically, how do you eat an elephant? A spoonful at a time.
2. Dead in the water without on-demand cloud capacity
Limited enterprise data center infrastructure simply cannot keep pace with agile teams that need to spin up multiple parallel environments with a click of a button or an API call, without getting the message "ran out of capacity". This issue becomes even more crucial when a project cycle peaks and new applications or updates need to be deployed in no time. To maintain agility in enterprise IT, enterprises must break free from capacity constraints, including compute and storage resources. Without a doubt the public cloud is a great fit – you must be able to use AWS and Google Cloud on-demand. To build a massive software lab entirely on premises, knowing full well the bursty nature of the workloads, is downright foolish. But if you run production on VMware and test environments in AWS, you need high-fidelity environments to have real confidence in the test results. The environments must be exact replicas – identical VMs and identical networking in the data center and the cloud. The cloud gives you the capacity, but how can you make the temporary test environment identical to production? Today it is indeed possible to clone your IT infrastructure in the cloud at the push of a button.
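As a minimal sketch of the fidelity requirement, consider treating the environment as data: a cloud clone should differ from the production blueprint only in where it runs, never in VM shapes or network topology. The blueprint format and the `clone_for_cloud` helper below are invented for illustration, not any particular vendor's API.

```python
import copy

# Hypothetical production blueprint: VMs and networking captured as data.
PRODUCTION = {
    "vms": [
        {"name": "web", "cpus": 2, "ram_gb": 4},
        {"name": "db", "cpus": 4, "ram_gb": 16},
    ],
    "network": {"subnet": "10.0.1.0/24", "dns_zone": "internal.example"},
    "placement": "on-prem-vmware",
}

def clone_for_cloud(blueprint, target="aws"):
    """Return an exact replica of the blueprint, changing only the placement."""
    clone = copy.deepcopy(blueprint)
    clone["placement"] = target
    return clone

test_env = clone_for_cloud(PRODUCTION)

# Everything except placement must be identical, or test results lose fidelity.
assert test_env["vms"] == PRODUCTION["vms"]
assert test_env["network"] == PRODUCTION["network"]
```

The point of keeping the blueprint as a single versioned artifact is that "identical" becomes checkable by a machine rather than asserted by a person.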
3. Death by Scripting for Complex Multi-tier Environments
Continuous integration and deployment are great methods for increasing effectiveness and efficiency. However, in a multi-tier environment with complex networking elements, manual scripting carries heavy overhead: every new test environment means provisioning the relevant resources (multiple VMs, networking, storage), configuring them, and deploying all the components of an application. In addition, in today's highly dynamic environments, scripts become outdated frequently, increasing the pain factor. Design for simplicity and scale: easily repeatable deployments and the ability to snapshot and roll back to earlier versions are requirements that must be carefully considered and met. For example, what if Jenkins could use a single API call to spin up multiple clones of your entire environment in the cloud, run all your tests in parallel, automatically release resources when completed, and snapshot the environment in case of errors? Simplifying the automation while designing for complexity is a key requirement for enterprises.
4. Destroying team silos is easier said than done
One of the greatest enterprise DevOps challenges is making the operations team fully integrated with the dev team, able to plan and execute together despite plenty of unexpected events. True integration of the development and operations teams cannot be stressed enough – for example, the ops team must be part of the SCRUM process. Creating a DevOps team is a start, but by itself it does not bring dev and ops together. Barriers between teams need to come down over time – the different teams need to cooperate and collaborate towards a larger shared vision.
5. Managing infrastructure as code is do-able and totally worth the effort
Delivering on the promise of infrastructure as code does not have to be back breaking. Minimizing delivery cycles, including creating just-in-time environments, becomes a matter of an API call; hours become minutes. A wide variety of modern technologies are at your fingertips to help you achieve your mission. The most common scenario we see today may be the modern startup whose R&D organization was built with the DevOps notion from the get-go – but that doesn't mean enterprises will miss the bus. The enterprises that are adopting modern DevOps methodologies are doing so quickly, without a rip-n-replace of their entire infrastructure. For example, Ram Akuka, Director of DevOps at Deutsche Telekom, shared their Enterprise DevOps journey, describing how they used VMware, AWS, Chef, Jenkins and Ravello to set up a fully integrated CI/CD pipeline that accelerated their software delivery cycle like never before.
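Reduced to its essence, infrastructure as code means the environment definition lives in version control as data, and "creating" an environment is one function call against that definition. The spec format and `materialize` helper below are illustrative only; a real driver would hand the resulting request to a cloud or lab-automation API such as the ones named above.

```python
import json

# Environment definition checked into version control (illustrative format).
ENV_SPEC = """
{
  "name": "ci-feature-branch",
  "vms": ["jenkins-agent", "app", "db"],
  "ttl_minutes": 60
}
"""

def materialize(spec_text):
    """Turn a versioned spec into a concrete create-environment request.

    Nothing here is hand-provisioned: change the spec in git, and the
    next call produces a matching just-in-time environment.
    """
    spec = json.loads(spec_text)
    return {
        "action": "create_environment",
        "name": spec["name"],
        "vm_count": len(spec["vms"]),
        "expires_after_min": spec["ttl_minutes"],  # environments self-destruct
    }

request = materialize(ENV_SPEC)
```

The `ttl_minutes` field reflects the just-in-time idea from the paragraph above: environments are cheap to create and expected to disappear, rather than long-lived pets.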
About the Author
Shruti Bhat is Director of Product Marketing at Ravello Systems, the industry's leading nested virtualization and software-defined networking SaaS. Prior to Ravello, Shruti was a virtualization junkie at VMware, where she managed the software-defined storage product line. She combines an MBA from UCLA Anderson with a bachelor's in computer science engineering, and has previously led R&D teams at IBM and HP as well as immersed herself in start-ups.