When everything is firing on all cylinders, DevOps can be awesome. Teams work together seamlessly. Routine tasks are automated. Code is continuously integrated, continuously tested, continuously deployed and continuously monitored. All is well and right with the world, unless your DevOps needs begin to exceed the capacity of the platforms and tools you depend on. Scalability is a crucial element of DevOps success.
Wikipedia defines “scalability” as the capability of a system, network or process to handle a growing amount of work, or its potential to be enlarged to accommodate that growth.
The need for scalability is one of the primary reasons that DevOps and cloud computing are so integrally linked. While it’s possible to do DevOps on some level using physical hardware or virtual servers running in a local data center, that approach quickly becomes impractical, if not impossible, at scale. Cloud computing platforms such as AWS, Rackspace or Microsoft Azure enable an organization to increase server capacity on demand, at the push of a button.
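To make that concrete, here is a minimal sketch in Python using boto3, AWS’s SDK. It assumes credentials are already configured and that an Auto Scaling group named "web-asg" (a hypothetical name) already exists; adding capacity really is a single API call:

```python
# Minimal sketch: manually raising capacity on an AWS Auto Scaling group.
# Assumes boto3 is installed, AWS credentials are configured, and an
# Auto Scaling group named "web-asg" (hypothetical) already exists.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Raise the group's desired capacity to 10 instances.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=10,
    HonorCooldown=False,  # apply immediately, ignoring the scaling cooldown
)
```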
“Many of the issues and challenges experienced before the advent of cloud are no longer issues: Engineers now working on cloud implementations remember working at companies that feared the Slashdot effect—a massive influx of traffic that would cause servers to fail,” explains Jason McKay, VP of Engineering for Logicworks, in a blog post.
Tackling Scalability
A number of factors affect scalability in your environment, and any of them could undermine the success of your DevOps efforts. Here are three things to keep in mind to ensure your environment is ready to scale:
1. Make sure your tools and apps are scalable
The ability to scale capacity on demand, automatically, is one of the chief advantages of cloud computing, and one reason why cloud computing is so important to DevOps success. There is more to scalability, however, than just server and network capacity. You must also ensure that the tools you rely on can scale seamlessly with the infrastructure, and that the apps you’re delivering through your DevOps environment are designed to take full advantage of the increased capacity as well.
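As an illustration of scaling on demand automatically, the sketch below attaches a target-tracking policy to the same hypothetical "web-asg" group, so the platform adds or removes instances to hold average CPU near a target instead of waiting for someone to push the button:

```python
# Minimal sketch of "scale on demand, automatically": a target-tracking
# policy that keeps average CPU near 60% across the group. Assumes the
# same hypothetical "web-asg" group as above.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # add instances above 60% CPU, remove below it
    },
)
```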
2. Optimize continuously
Some tools and apps will continue to run as the underlying infrastructure scales up or down, but may not run optimally. Make sure that the tools you use and the apps you deploy are able to continuously adjust to changes in server, bandwidth and storage capacity. The tools and apps should be able to take full advantage of the resources available to ensure the best performance possible at all times.
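As one small, hedged example of that idea: a CPU-bound Python service can size its worker pool from the CPUs it actually sees at startup ("handle" and "run" are hypothetical names), so redeploying it on a larger instance automatically puts the extra cores to work:

```python
# Minimal sketch: size a worker pool from the CPUs visible to the
# process rather than a hard-coded constant, so the same code takes
# full advantage of a larger instance.
import os
from concurrent.futures import ProcessPoolExecutor

def handle(job):
    """Hypothetical unit of work; CPU-bound in this sketch."""
    return job * job

def run(jobs):
    workers = os.cpu_count() or 2  # fall back if the count is unavailable
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, jobs))

if __name__ == "__main__":  # guard required for process pools on some platforms
    print(run(range(8)))
```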
3. Don’t forget about storage
Most apps have a data component, which means there is data storage that must be scaled as well, either as a function of the broader network infrastructure scaling or simply in response to growing demand for data storage.
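Storage scaling can be automated much like compute. As a rough sketch, growing an AWS EBS volume that is nearing capacity takes one boto3 call (the volume ID below is hypothetical), though the filesystem on the instance must still be extended afterward:

```python
# Minimal sketch of scaling storage alongside compute: grow an EBS
# volume before it fills up. The filesystem on the attached instance
# still needs to be extended afterward (e.g. resize2fs or xfs_growfs).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    Size=500,  # new size in GiB; EBS volumes can grow but not shrink
)
```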
Dan Leary, VP of Products, Solutions and Alliances at Nimble Storage, believes that scalability is emerging as a differentiating factor between storage solutions. “Due to the high performance offered by many vendors of flash-optimized storage solutions, performance will no longer be the primary differentiating factor when considering one vendor over another. Instead, a much greater focus will be placed on the array’s reliability, data integrity, scalability, manageability and in-built data-protection capabilities. We are now getting to the point where performance in storage has been democratized and will change the way people purchase storage.”
There is one more thing to remember when considering scalability: cost. While scalability may be easy to automate on cloud platforms, it isn’t free, and the DevOps tools you depend on may carry licensing fees that grow as your usage scales.