DevOps, flexible cloud architectures and continuous delivery have simplified a lot of equations in IT today, but there is one element that’s grown dramatically more complex: capacity planning. With the ability to spin up new environments on a dime through services like Amazon and Rackspace, the pool of potential IT supply has become vastly more elastic. Meanwhile, as DevOps practices help organizations speed up the continuous delivery cycle of designing, building and running new application instances, the demand for compute power and storage is constantly changing. This week at Camp DevOps Houston (link), Steve Wilson of VMTurbo will explore what this means for capacity planning in his talk “Is Capacity Planning Dead?”
“In this cloud day and age, it really isn’t about capacity anymore, because capacity becomes a figurative thing with IT borders and the data center really just melting away,” Wilson says. “I can rent if I need to. Really this idea of capacity goes away, and what you really want to understand is where’s the cutoff where I stop buying and I start renting?”
Wilson believes that while capacity planning isn’t necessarily DOA in today’s DevOps world, it does need to be completely turned on its head. Rather than focusing on supply-side issues, planners need to concern themselves much more with demand.
“For eons and eons, we have been really good at managing supply, because it never changed. It was finite. It was like, ‘OK, I’m going to put alerts, and I’m going to put thresholds, and I’m going to set the bar, because I know that when my supply runs out, that’s bad,’” Wilson says, explaining that capacity planning was all about moving thresholds to ensure IT never ran out of supply. “But now, if it’s a truly software-defined data center, it’s no longer defined by your infrastructure. So what if I could actually manage demand, not supply?”
The idea, he explains, is to build a deep, continuous understanding of demand, buy infrastructure for the average demand, and then ‘rent’ cloud infrastructure to soak up the peaks when they arrive.
“If I understand demand, I can buy for my 50th percentile plus maybe one or two standard deviations, and then rent above that and scale out,” he says. “Then all I really want to do is understand where that financial line is: the point up to which it is economical for me to extend CapEx internally, and the tipping point past which it’s actually cheaper for me to extend OpEx to handle the peaks.”
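As a back-of-the-envelope illustration of that “buy the 50th percentile plus a standard deviation or two, rent the rest” rule, here is a minimal Python sketch. It is not VMTurbo’s actual model; the demand samples, cost figures and function names are all hypothetical.

```python
import statistics

def plan_capacity(demand, k=1.0, own_cost=0.05, rent_cost=0.17):
    """Split demand into an owned baseline and rented peaks, and price it.

    demand    -- historical demand samples (e.g. hourly units of compute)
    k         -- standard deviations above the 50th percentile to own
    own_cost  -- assumed cost per unit-hour of owned capacity (amortized CapEx)
    rent_cost -- assumed cost per unit-hour of rented cloud capacity (OpEx)
    """
    p50 = statistics.median(demand)       # the 50th percentile Wilson mentions
    sigma = statistics.stdev(demand)
    baseline = p50 + k * sigma            # buy (own) capacity up to this level
    # Everything above the baseline is a peak that gets rented.
    overflow = sum(max(d - baseline, 0) for d in demand)
    owned = baseline * len(demand) * own_cost   # owned capacity is paid for every hour
    rented = overflow * rent_cost               # rented capacity is paid only when used
    return baseline, owned + rented

# Sweep k to find the financial tipping point between buying and renting.
samples = [40, 45, 50, 48, 52, 55, 60, 90, 120, 58, 50, 47]  # toy demand data
baseline, cost = min((plan_capacity(samples, k) for k in (0.0, 0.5, 1.0, 1.5, 2.0)),
                     key=lambda result: result[1])
print(f"own up to {baseline:.1f} units; cheapest blended cost is {cost:.2f}")
```

The sweep over k is the financial line Wilson describes: as k falls, owned capacity shrinks and more of the bill shifts from CapEx to per-use OpEx, and the cheapest split depends entirely on the shape of the demand curve.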
With DevOps, he believes, it may be easier for organizations to truly understand the demand profile of different applications, thanks to closer collaboration between developers and operations staff. The trick is to rope capacity planners into the design, build, run cycle from the beginning; today they operate on the fringes, frequently relying on spreadsheets that go stale as quickly as they are made when an organization is delivering continuously.
“Suddenly, every time you try to make a change, you try to deploy, you’re playing whack-a-mole, and then you start to break down. One place you’re hot, the other place you’re cold, and you don’t understand the demand workload, and you can’t move it around, and you’re just managing supply,” he says. “If I can always understand and map demand properly, I can automate the crap out of that, because now I’m basically creating just-in-time compute. If I can run compute super lean, then at peak I can just rent.”
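A minimal sketch of what automating that just-in-time compute might look like: a loop that keeps the owned baseline fully loaded and rents cloud instances only for the overflow. The monitoring and cloud hooks (read_demand, set_rented_count) are placeholders for whatever APIs an organization actually uses, and the sizing constants are invented.

```python
import time

OWNED_BASELINE = 60        # units of compute bought for steady-state demand (hypothetical)
UNITS_PER_INSTANCE = 10    # capacity each rented cloud instance adds (hypothetical)

def rented_instances_needed(current_demand):
    """How many instances to rent so owned + rented capacity covers demand."""
    overflow = max(current_demand - OWNED_BASELINE, 0)
    return -(-overflow // UNITS_PER_INSTANCE)   # ceiling division: partial overflow still rents one

def burst_loop(read_demand, set_rented_count, interval_seconds=60):
    """Poll demand and resize the rented pool to match it, just in time."""
    while True:
        set_rented_count(rented_instances_needed(read_demand()))
        time.sleep(interval_seconds)
```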