Industry studies and market indicators show the exploding growth of the DevOps movement, and for one simple reason: in today’s application-driven economy, DevOps teams have proven that they can build, test, and ship applications faster than the traditional IT approach.
Businesses compete on how efficiently they can deliver applications to meet market demand. Enterprises need the latest business applications to meet new service demands from their customers, accessed in new ways such as over mobile devices. In an application-driven economy, apps are developed and delivered via the cloud more rapidly than ever before; fast time to market is key to success and to staying a step ahead of the competition.
As a result of this incredibly dynamic environment, IT teams have adopted new processes such as continuous integration and continuous delivery (CI/CD), a radical departure from the waterfall model of application development that has been around for decades. The people and groups within organizations leading the charge on CI/CD and agile development, to meet the demands of the exploding cloud and mobile marketplace, are called “DevOps”.
However, there are three big barriers still slowing down time to market for apps. Unfortunately, there is not much DevOps teams can do on their own, because the problem lies in the underlying enterprise IT infrastructure: networking products, and specifically application delivery controllers (ADCs), have not kept pace with this evolution.
The reasons application delivery to market is still too slow, as well as recommended solutions, include:
- Network admin “human latency”: ADCs are usually purchased by the network admins who sit within CIO-led IT teams. As a result, DevOps and application developers don’t have direct access to ADC devices. Since app developers need ADCs to test the apps they are building, they have to contact the networking teams, request access to ADCs, and wait for permissions. This typically takes weeks because of the complexity of setting up a custom environment per app developer, per app!
In a world where it takes 20 seconds to fire up a server, the human latency (measured in weeks) associated with provisioning an ADC is simply unacceptable.
The solution: a fully programmable, self-service ADC that app developers and DevOps teams can set up themselves within seconds using a RESTful API and scripting tools. DevOps needs an ADC that offers zero-touch provisioning, can auto-discover the compute/app environment, and can auto-create and auto-scale application services.
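To make the self-service idea concrete, here is a minimal sketch of what provisioning an application service through an ADC’s RESTful API could look like. The endpoint, payload structure, and field names below are illustrative assumptions, not the API of any particular product:

```python
import json

def build_virtual_service_payload(name, vip, pool_servers, auto_scale=True):
    """Assemble a JSON payload for creating a load-balanced virtual service
    via a hypothetical ADC REST API (all field names are illustrative)."""
    return {
        "name": name,
        "vip": vip,                      # virtual IP the app is served on
        "pool": {
            "servers": [{"ip": ip, "port": 80} for ip in pool_servers],
            "auto_scale": auto_scale,    # let the ADC grow/shrink the pool with load
        },
    }

# A developer could POST this payload to something like
# https://adc.example.com/api/virtualservice -- no ticket, no waiting.
payload = json.dumps(build_virtual_service_payload(
    "dev-test-app", "10.0.0.100", ["10.0.0.11", "10.0.0.12"]))
print(payload)
```

The point is not the specific fields but the workflow: one scripted API call, repeatable per developer and per app, replaces a weeks-long ticketing process.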
- Distributed, microservices-based app architectures: Microservices have fundamentally changed how apps are built, maintained, and scaled. Instead of a monolithic “black box” app that sits in one location, microservices follow a “reusable, even disposable, Lego-block” architecture: they can be written in different programming languages, loaded on various hardware and hypervisor platforms, and, most importantly, deployed across various on-premises and cloud locations. With this heterogeneity comes complexity in managing all these different environments, which leads to cost and delays in rolling out a new application.
The solution: an application delivery architecture that can span, serve, and scale across all locations while presenting itself as a single device to manage.
- Geographically dispersed dev, test, and production teams: Many companies develop and test apps in a public cloud, but because of data sensitivity or compliance issues they have to run the production copy of the app in on-prem data centers or private clouds. Imagine for a minute what this means: completely different environments across various cloud and on-prem locations! One ADC may be used for development and testing, but a different ADC serves the app in production. That mismatch can produce unexpected surprises in how the app actually behaves, performs, and scales in a real deployment.
The solution: DevOps teams need the same consistent environment across public clouds, private clouds, and on-prem data centers. More importantly, if the setup and operation of ADCs across environments are identical, it eliminates a great deal of complexity and cost.
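One way to achieve that consistency is a single declarative service definition shared by every environment, with only placement details varying per location. The environment names and fields below are invented for illustration:

```python
# One service definition shared by dev, test, and production.
BASE_SERVICE = {
    "app": "orders",
    "health_check": "/healthz",
    "tls": True,
}

# Only placement-specific details differ per environment (values are examples).
ENVIRONMENTS = {
    "cloud-dev":   {"cloud": "aws",     "vip_subnet": "10.1.0.0/24"},
    "onprem-prod": {"cloud": "vcenter", "vip_subnet": "192.168.1.0/24"},
}

def render(env):
    """Merge the shared definition with one environment's placement details,
    so every location runs the same service configuration."""
    return {**BASE_SERVICE, **ENVIRONMENTS[env]}

print(render("cloud-dev"))
print(render("onprem-prod"))
```

Because everything except placement is shared, an app that passes testing in the public cloud goes to the on-prem production environment with the same application delivery behavior, not a different ADC with different settings.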
The goal of the DevOps group is to make the organization successful by increasing IT performance and delivery. It aims to simplify life for app developers and testers, allowing enterprises to develop and roll out their own apps faster than the competition. In a recent study, Puppet Labs analysts noted: “We wanted to test the hypothesis that IT performance actually does make a difference to organizational performance. We found that IT performance strongly correlates with well-known DevOps practices such as use of version control and continuous delivery. The longer an organization has implemented — and continues to improve upon — DevOps practices, the better it performs. Companies with high IT performance are twice as likely to exceed their profitability, market share and productivity goals, giving them a strong competitive edge.”
Now more than ever, companies need to commit to the DevOps model to keep up with a constantly changing business landscape. New approaches to DevOps that address the challenges of today’s dynamic environment are available for companies to leverage for the benefit of their customers, and the timing could not be better. However, an evolution of the underlying IT infrastructure, such as application delivery controllers (ADCs), is not just desirable but mandatory.
About the Author/Ranga Rajagopalan
Ranga Rajagopalan is CTO and Co-Founder of Avi Networks. In the 15 years prior to co-founding Avi Networks, he was an architect and developer of several high-performance distributed operating systems as well as networking and storage data center products. Before his current role, he was a Senior Director in Cisco’s Data Center business unit, responsible for platform software on the Nexus 7000 product line. He joined Cisco through the acquisition of Andiamo, where he was one of the lead architects of the SAN-OS operating system, and began his career at SGI as an IRIX kernel engineer for the Origin series of ccNUMA servers. He holds a Master of Science degree in electrical engineering from Stanford University and a Bachelor of Engineering in EEE from BITS Pilani, India, and has several patents in networking and storage.