This is the first article in a series of pieces on digital innovation, accelerating change, and the rise of the coded business from innovators in the Chef community.
How does the need for speed and scale affect businesses today?
Let’s start with scale. As a technology company grows, its infrastructure grows – more machines (or machine images), more networking, more data storage needs, more code, more developers, and more users. The only way to scale with growth is with automation. And, as more and more of the world’s interactions happen online (be it commercial transactions, email, social networking, video streaming, etc.), that scale of infrastructure is breaking records all the time. With these new levels of scale come ever-increasing automation requirements.
On the speed side, first-to-market has always been an important part of business – but what that means has evolved. It’s no longer simply about producing that new cool item before the other guys. Now it also means you must be able to iterate quickly, adjust quickly for changing user tastes, and adapt quickly to the ever-increasing pace of new technologies. To be able to do this effectively, instead of just developing fast, you need to be able to deploy quickly, react quickly, troubleshoot quickly, and update quickly. For many companies, this is no longer icing on the cake, but instead, the structure underpinning their business – in the same way that having an e-commerce site was 10 years ago.
Speed is becoming ever more critical for business in the digital age. With the rise of social media, much customer feedback is public or semi-public, allowing people to unite behind specific complaints or desires and deliver clearer, more unified messages to companies. This helps companies gather more feedback, of higher quality, and – hopefully – react to it more quickly. It also dramatically accelerates the cycle of getting features, products, and releases to users, who in turn provide more useful feedback faster. As I mentioned, the scale and speed of business operations in every industry are growing daily, and because of that, automation has gone from being beneficial to being critical. In the technology world, this automation comes in the form of code.
Pushing your code faster and faster means you have to be able to update and deploy infrastructure faster, which in turn means infrastructure/systems teams must also be able to deploy, react, troubleshoot and fix faster.
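To make "automation in the form of code" concrete, here is a minimal sketch of what infrastructure code can look like in a Chef recipe. The package, service, and template names are illustrative assumptions, not from this article; a real cookbook would supply the referenced template and be run by the Chef client:

```ruby
# Hypothetical Chef recipe: declare the desired state of a web server.
# Chef converges the machine to this state on every run, so the same
# code scales from one node to thousands.

package 'nginx' do
  action :install
end

template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'          # template assumed to ship with the cookbook
  owner  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]'  # reload only when the config changes
end

service 'nginx' do
  action [:enable, :start]
end
```

Because the infrastructure is expressed as code, it can be versioned, reviewed, tested, and deployed through the same fast pipelines as the application itself.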
But code, in and of itself, isn’t necessarily sufficient to meet the challenge of escalating speed and scale. How quickly you deliver that code is what’s important. Continuous Delivery is one technique used to address this need.
Continuous Delivery (CD)
People talk a lot about CD, and it's important to remember that the phrase refers to a methodology – one with both pros and cons.
For most technology companies, especially those whose product is primarily Web-based, CD is often the best choice to enable speed of delivery. The main benefit is that the closer to “continuous” your release cycle gets, the easier it is to have an iterative process. Faster feedback from users means you can more rapidly deliver updates based on that feedback. As you approach CD, you approach near-real-time feedback, which allows you to hone your product or feature far faster than if it were sitting in long dev/qa cycles. Another benefit is best described with a comparison. In troubleshooting, a key rule is to make one change at a time, so you know which change fixed the problem – or made it worse. CD is an extension of this idea, although it comes earlier in the lifecycle. If you are pushing/delivering a codebase more often, there are fewer changes per push, so breakages are easier to track down – and it’s easier to push fixes.
A common misconception is that an organization either achieves CD or it does not. In reality, CD is a spectrum, with frequent releases on one end and less frequent releases on the other. A CD solution for a bank may mean something very different from a CD solution for a gaming Web site. The idea behind the CD methodology is to keep pushing toward faster deployments, in order to reap the benefits that come from doing so.
In the end, no matter where on the CD spectrum a company aims to land, it must be willing to adapt and change.
From an internal IT perspective, this means realizing that this isn’t the old world, where businesses lock users into small boxes to make things simpler. We’re in a world where everyone has the power of the Internet in his or her pocket and there’s enough information out there for anyone to break out of his or her box. Businesses need to provide engineers, designers, assistants, VPs, etc. the flexibility and freedom to work and think in ways they do best, so they can take the business to the next level.
From an external (production) IT perspective, companies must be able to quickly adapt to new growth and new technology. It took the Internet a (relatively) long time to start adapting to the small displays on mobile devices, and it adapted only slightly faster to tablets. But new form-factors are just the beginning. Broadband is becoming ubiquitous in first-world countries, and companies are just starting to rely on that fact. Yet there are still many users the Web isn’t taking into account very well. Mobile is becoming the default interface for a variety of locales and ages, and many countries have limited bandwidth and/or high latency. Further, it seems likely that we will see drastically different user input mechanisms, as well as display mechanisms, in the near future.
Technologists need to be agile and readily adapt to technology and infrastructure changes, as well as speed and approach changes, such as CD, which will be required for the next generation of user and technology demands.
About the author
Phil Dibowitz has been working in systems engineering for 12 years and is currently a production engineer at Facebook. Initially, he worked on the traffic infrastructure team, automating load balancer configuration management, as well as designing and building the production IPv6 infrastructure. He now leads the team responsible for rebuilding the configuration management system from the ground up. Prior to Facebook, he worked at Google, where he managed the large Gmail environment, and at Ticketmaster, where he co-authored and open sourced a configuration management tool called Spine. He also contributes to, and maintains, various open source projects and has spoken at conferences and LUGs on a variety of topics, from Path MTU Discovery to X.509.