To achieve a successful DevOps transformation, teams on both the development and operations sides need fast storage infrastructure and high-performing tools that get the most out of both the technology and the people who work with it.
Over the last decade, a major operational problem in on-premises enterprise data centers has emerged, as the fundamental mismatch between the infrastructure and the needs of increasingly virtualized applications has become glaringly apparent. There is a major contrast between what organizations want their staff to focus on—strategic projects, new application deployment, new customer acquisition and releasing new products—and the things they end up spending too much time on. The latter includes infrastructure “plumbing” and reconfiguring, re-architecting or redeploying across all stacks.
The widespread adoption of virtualization has introduced increasing complexities to IT infrastructure. It is also spurring businesses to accelerate development efforts and deliver applications and services through a DevOps model. The arrival of cloud infrastructure and cloud-native workloads has made addressing the mismatch an even more pressing issue. As they seek to overcome the shortcomings of outdated traditional infrastructures, organizations are adopting a cloud strategy that includes the public cloud and, in many cases, their own private cloud built from all-flash storage and intelligent software.
This move to a hybrid cloud platform, a mix of public and private clouds, seeks to blend the agility and scale of public cloud with continued control over workloads and data that are better served on-premises because they are too valuable to entrust to outside providers.
An effective storage platform should combine the performance, control and management of an internal data center with the agility and scale of public cloud. This provides organizations with the ability to build and run agile environments for cloud-native and mission-critical applications in their own data centers. It also helps to solve the fundamental mismatch between infrastructure and virtual applications and helps prepare organizations to adopt DevOps practices, which cannot be fully supported by traditional infrastructure.
Choosing the Right Storage Platform for DevOps
The growing emphasis on DevOps is placing an extra burden on infrastructure teams, so choosing the right platform to underpin the IT infrastructure is highly significant. This will ensure the DevOps model works for the organization and makes life easier for the IT infrastructure team.
Traditional storage systems often are unable to match the requirements for a simple, flexible and automated enterprise infrastructure. A number of issues must be taken into consideration for the enterprise cloud infrastructure to work with the DevOps model, including:
- Copy data management
- Data protection
- Disaster recovery (DR)
- Quality of service (QoS) and performance guarantees
- Monitoring and troubleshooting
The problem for copy data management in DevOps is that refreshing and updating test/development environments is time-consuming, and rapid test cycles require data synchronization with potentially hundreds of servers. What is needed is a system that provides up-to-date virtual copies to the DevOps team, eliminates the need for physical data duplication, lifts the load on production storage and protects application performance. Modern storage systems, in conjunction with next-generation software, can create data copies using snapshots, cloning and replication—and they can use the same technologies to protect and manage the development side of the environment in the event of failure or data corruption.
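To make the snapshot-and-clone pattern concrete, the sketch below builds the kind of request bodies a storage REST API might accept for creating thin, writable clones from a production snapshot. The endpoint shape, field names and helper functions are illustrative assumptions, not any specific vendor's API.

```python
# Hypothetical sketch: field names ("source_snapshot", "thin", etc.) are
# assumptions for illustration, not a real product's schema.
def build_clone_request(snapshot_id, clone_name, writable=True):
    """Build the JSON body for a hypothetical 'clone from snapshot' call.

    A thin, writable clone gives the dev/test team an up-to-date virtual
    copy without physically duplicating data or loading production storage.
    """
    return {
        "source_snapshot": snapshot_id,
        "name": clone_name,
        "writable": writable,
        # Thin clones share unchanged blocks with the snapshot, so
        # refreshing a test environment is a metadata-only operation.
        "thin": True,
    }

def refresh_test_envs(snapshot_id, env_names):
    """Produce one clone request per test/dev environment to refresh."""
    return [build_clone_request(snapshot_id, f"{name}-clone")
            for name in env_names]

clone_requests = refresh_test_envs("snap-0042", ["ci", "qa", "staging"])
```

Because each clone shares blocks with the source snapshot, hundreds of environments can be refreshed this way without multiplying the physical footprint.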
When seeking to adopt DevOps, QoS and guaranteed performance are major factors. QoS controls how a storage system's performance is allocated to different workloads, giving organizations the ability to set maximum and minimum limits on input/output operations per second (IOPS) or bandwidth consumed. By combining the higher performance of all-flash with QoS, DevOps workloads can be consolidated on the same platform as production, making it easier to access production data sets while reducing the storage footprint.
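A minimal sketch of the min/max idea follows: production gets a guaranteed IOPS floor, while test/dev is capped so it cannot starve production on the shared platform. The policy fields and the example numbers are assumptions for illustration only.

```python
# Hypothetical QoS policy builder; field names and limits are illustrative.
def make_qos_policy(min_iops, max_iops):
    """Return a per-workload QoS policy with a floor and a ceiling.

    min_iops guarantees a performance floor (for production workloads);
    max_iops caps noisy neighbours (e.g. bulk test/dev jobs) so both can
    safely share one all-flash platform.
    """
    if min_iops < 0 or max_iops < min_iops:
        raise ValueError("require 0 <= min_iops <= max_iops")
    return {"min_iops": min_iops, "max_iops": max_iops}

# Production is guaranteed a floor; dev/test is only capped.
production = make_qos_policy(min_iops=20_000, max_iops=100_000)
dev_test = make_qos_policy(min_iops=0, max_iops=5_000)
```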
Organizations that want to deliver data at DevOps speed need the ability to monitor the infrastructure and correct problems and misconfigurations rapidly. Continuous monitoring and end-to-end visibility across the IT stack, along with predictive analytics, integration with other monitoring tools and VM-level monitoring, are particularly valuable attributes when seeking to provide the basis for data-driven decisions in a DevOps environment.
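As one simple illustration of monitoring-driven detection, the sketch below flags VMs whose latest latency sample deviates sharply from their recent baseline. The metric names and the three-sigma threshold are assumptions, not a description of any particular monitoring product.

```python
from statistics import mean, stdev

# Illustrative anomaly check: flag VMs whose newest latency sample
# exceeds their recent baseline by more than `sigma` standard deviations.
def flag_latency_outliers(samples_by_vm, sigma=3.0):
    """Return names of VMs whose latest latency looks anomalous."""
    flagged = []
    for vm, samples in samples_by_vm.items():
        history, latest = samples[:-1], samples[-1]
        if len(history) >= 2 and latest > mean(history) + sigma * stdev(history):
            flagged.append(vm)
    return flagged

metrics = {
    "vm-web": [1.1, 1.0, 1.2, 1.1, 9.5],  # sudden latency spike
    "vm-db":  [2.0, 2.1, 1.9, 2.0, 2.1],  # steady baseline
}
```

In practice, checks like this would run continuously against per-VM telemetry so misconfigurations surface in minutes rather than after users complain.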
It is also critical that the storage platform provides the capability to provision and manage IT infrastructure programmatically. With the increased use of automation, organizations can create and break down environments as necessary, incorporate snapshots and cloning as part of daily workflows and eliminate the potential for error from manual or interactive configuration.
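The create-use-tear-down pattern described above can be sketched as follows. The client class here is an in-memory stand-in for a real storage REST client (its method names are assumptions), used only to show how snapshot cloning and cleanup can be scripted into a daily workflow with no manual steps to get wrong.

```python
# Hypothetical sketch of programmatic provisioning; the client is a
# stand-in for a real storage API, with illustrative method names.
class FakeStorageClient:
    """In-memory stand-in for a storage array's REST client."""

    def __init__(self):
        self.clones = {}

    def clone_from_snapshot(self, snapshot_id, name):
        self.clones[name] = snapshot_id
        return name

    def delete_clone(self, name):
        self.clones.pop(name, None)

def with_test_environment(client, snapshot_id, name, run_tests):
    """Provision an environment, run the tests, always tear it down."""
    env = client.clone_from_snapshot(snapshot_id, name)
    try:
        return run_tests(env)
    finally:
        client.delete_clone(env)  # teardown happens even on test failure
```

Driving the workflow through code like this, rather than interactive configuration, is what removes the potential for manual error.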
The Case for Fast, Virtualization-Aware Storage
Implementing a storage system that matches the requirements of a simple, flexible and automated enterprise infrastructure delivers many benefits to the organization and to the DevOps environment. Virtualization-aware storage eliminates the need for logical unit numbers (LUNs) and volumes and enables enterprises to work at the VM and container level. With clean REST APIs, they can connect all-flash storage to compute, network and other elements of the cloud, and gain visibility and sharing across the entire infrastructure.
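To show what working at the VM level rather than the LUN level might look like, the sketch below parses a per-VM statistics response of the kind a virtualization-aware REST API could return. The endpoint path, response shape and field names are assumptions for illustration.

```python
import json

# Illustrative response from a hypothetical endpoint such as
# GET /api/v1/vms?fields=latency_ms,iops (shape is assumed, not real).
sample_response = json.loads("""
{
  "vms": [
    {"name": "web-01", "latency_ms": 0.8, "iops": 1200},
    {"name": "db-01",  "latency_ms": 3.2, "iops": 8400}
  ]
}
""")

def latency_by_vm(response):
    """Index per-VM latency directly, with no LUN or volume mapping."""
    return {vm["name"]: vm["latency_ms"] for vm in response["vms"]}
```

The point of the sketch is granularity: every statistic is keyed by VM name, so tooling never has to translate between application objects and storage constructs.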
Using virtualization-aware storage, application test and development can be accelerated from a matter of days or weeks to minutes. By retaining custom settings and automating with APIs, rather than having to rebuild the entire environment after every data refresh, enterprises can radically speed up the release cycle and integration testing.
For example, using this approach, one financial services company was able to reduce development update time from five hours to five minutes and slash latency by 82 percent. With the visibility into files, VMs and vDisks provided by virtualization-aware storage, it is possible to recover these in fewer than five clicks and response times can be reduced dramatically.
Data at DevOps Speed
As enterprises move inexorably toward a mixture of virtual and physical, on-premises and cloud-based IT, it is imperative that their storage platform works seamlessly across physical and virtual environments. This will enable DevOps environments to meet their goals while eliminating concerns around performance, scalability, manageability, resilience and flexibility.
The storage system underpinning their IT infrastructure platform needs to support businesses in their desire to replicate the agility of public cloud within the data center while making it easier to manage enterprise and cloud-native applications. Adopting virtualization-aware storage can ensure their data is ready for the DevOps model. Enterprises wanting to deliver data at DevOps speed can’t afford to keep their storage in the slow lane.