A dive into the technologies that help networks adapt to changes in market needs, improve service quality and reduce the costs of developing new services
The benefits of DevOps are clear, but the way to successfully roll out and foster a DevOps initiative is not as well-defined. As explained by DevOps Topologies, “The primary goal of any DevOps setup within an organization is to improve the delivery of value for customers and the business.” However, despite DevOps being “a change in IT culture,” as Gartner puts it, there must be certain technologies in place to improve this delivery of value. But what are they?
For the purpose of this article, we’ll take a look at the unique challenges and opportunities DevOps brings to traditional service providers. For example, they face customer expectations around how fast changes can be made that have grown exponentially with the rise of the cloud. This dynamic has propelled the deployment of innovative technologies, such as network functions virtualization (NFV) and software-defined networking (SDN), and the adoption of DevOps methodologies to remove the barriers created by legacy, siloed technologies and processes.
Improving collaboration between the Dev and Ops sides of the equation takes a group of the right technologies, and an understanding of how they fit into the overall IT architecture. Below we will look at three fundamentally important tools, how they impact the DevOps shift, and how they enable service providers to compete in a world where expectations have been set by cloud providers.
Docker: Building Applications with the Ability to Make Rapid Changes to Meet Demand
As companies adopt collaborative DevOps processes, they are simultaneously re-evaluating and redesigning the software applications used in their back-office operations—effectively, all the tools that enable a business to run effectively, from service delivery and assurance to billing. This is where Docker comes into play. It is an open-source technology that enables software applications to be built using combinations of many small, function-specific components known as containers that each focus on doing one thing well. This development approach, also referred to as a microservices architecture, allows applications to support rapid changes and scale more easily.
Typically, with applications developed using traditional monolithic architectures, changes made to a small part of the application require the entire monolith to be rebuilt and redeployed. Scaling equates to scaling the entire application rather than just the components that require more resources. Conversely, a Docker-based microservices architecture allows changes to be made to isolated software containers instead of the whole software stack. Thus, applications are easier to enhance, maintain and scale, making the technology prevalent in cloud environments. It also greatly speeds development and regression testing, allowing new services or enhancements to get to market faster at lower costs. Applications built using Docker and microservices play an important role in the traditional service providers’ modernized back-office that needs to make rapid changes—whether it is adding new devices or turning up new services.
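To make the contrast concrete, here is a minimal sketch of the kind of small, function-specific service that would live in its own container. The "billing" service name, the `/health` route and the port handling are illustrative assumptions, not part of any specific product; the example uses only the Python standard library.

```python
# Hedged sketch: a single-purpose "billing health" microservice of the
# kind that would be packaged into its own Docker container and scaled
# independently of the rest of the back-office stack. The service name
# and route are illustrative only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BillingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"service": "billing", "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the service on an ephemeral port and probe it, much as a container
# orchestrator's health check would.
server = HTTPServer(("127.0.0.1", 0), BillingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
reply = json.load(urllib.request.urlopen(f"http://127.0.0.1:{port}/health"))
print(reply)  # {'service': 'billing', 'status': 'ok'}
server.shutdown()
```

Packaged with a Dockerfile, a process like this becomes one container image; scaling the billing function then means running more instances of that image, not rebuilding and redeploying a larger monolith.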
TOSCA: Enabling Automation
While traditional service providers are looking to evolve to new cloud-based technologies, they can’t overlook the large physical networks already in place. This is the role Topology and Orchestration Specification for Cloud Applications (TOSCA) fills. TOSCA was developed by the Organization for the Advancement of Structured Information Standards (OASIS) and is an open standard that provides a common definition of virtualized services and applications, including their components, relationships, dependencies, requirements and capabilities.
TOSCA makes it possible to automate complex processes involved in the provisioning of a service that combines resources from the physical network with virtualized components. To make this a reality, TOSCA relies on templates. Think of a template as a recipe: it defines the relationships and ordering between the components of a service. For example, if you are provisioning a virtualized firewall on top of an Ethernet service, the orchestrator would know to turn up the Ethernet service before instantiating the virtualized firewall.
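The ordering logic an orchestrator applies to such a template can be sketched as a topological sort over the declared dependencies. This is a conceptual sketch, not TOSCA itself: a real orchestrator parses YAML service templates defined by the OASIS standard, and the component names below are hypothetical.

```python
# Hedged sketch: ordering service components by their declared
# dependencies, as an orchestrator does when instantiating a template.
# Component names and the dependency map are illustrative only.

def provisioning_order(dependencies):
    """Return components in an order that satisfies every dependency.

    dependencies: dict mapping each component to the list of
    components that must be provisioned before it.
    """
    ordered, visiting, done = [], set(), set()

    def visit(node):
        if node in done:
            return
        if node in visiting:
            raise ValueError(f"circular dependency involving {node!r}")
        visiting.add(node)
        for dep in dependencies.get(node, []):
            visit(dep)
        visiting.remove(node)
        done.add(node)
        ordered.append(node)

    for node in dependencies:
        visit(node)
    return ordered

# The firewall example from the text: the Ethernet service must be
# turned up before the virtualized firewall that rides on it.
template = {
    "virtual_firewall": ["ethernet_service"],
    "ethernet_service": [],
}
print(provisioning_order(template))
# -> ['ethernet_service', 'virtual_firewall']
```

Because the ordering is derived from declared relationships rather than hard-coded scripts, the same engine can provision any service the template language can describe.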
Network Management Protocols: Getting All Platforms on the Same Page
Devices on a network—routers, switches, transport platforms, etc.—all speak different languages. A wide variety of interfaces and protocols are used to configure, manage and control these network elements from different vendors and their related resources, including CLI, TL1, SNMP, NETCONF/YANG and OpenFlow. As traditional service providers adopt DevOps methodologies, understanding how to communicate with these network elements allows network architects and developers to collaborate and automate operational tasks such as configuration and service provisioning.
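A common way to tame this protocol diversity in automation code is an adapter layer: one uniform interface on top, protocol-specific drivers underneath. The class names, method signature and payload formats below are a hypothetical sketch, not any vendor's actual API; a production driver would use real protocol libraries (an SNMP or NETCONF client) rather than rendering strings.

```python
# Hedged sketch of a protocol-adapter layer: the automation code targets
# one interface while each driver renders its device's native dialect.
# All names and payload formats here are illustrative assumptions.
from abc import ABC, abstractmethod

class DeviceDriver(ABC):
    """Uniform interface the automation layer codes against."""

    @abstractmethod
    def render_config(self, interface: str, description: str) -> str:
        """Render a config payload in the device's native protocol."""

class CliDriver(DeviceDriver):
    def render_config(self, interface, description):
        # CLI-style configuration lines.
        return f"interface {interface}\n description {description}"

class NetconfDriver(DeviceDriver):
    def render_config(self, interface, description):
        # A real NETCONF driver would build this from a YANG model.
        return (f"<interface><name>{interface}</name>"
                f"<description>{description}</description></interface>")

def configure(driver: DeviceDriver, interface: str, description: str) -> str:
    # Identical call regardless of which protocol the device speaks.
    return driver.render_config(interface, description)

print(configure(CliDriver(), "ge-0/0/1", "uplink"))
print(configure(NetconfDriver(), "ge-0/0/1", "uplink"))
```

The design choice matters for DevOps collaboration: network architects encode per-protocol knowledge once in a driver, and developers automate against the uniform interface without learning every dialect.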
The transition to a DevOps mentality takes time and will require continuous support. However, it has already enabled traditional service providers to meet customer expectations, which would never have been possible without a shift in how IT departments—personnel and technology—work together.
About the Author / Kevin Wade
Kevin Wade is Senior Director of Product Marketing within Ciena’s Blue Planet division, responsible for leading the Blue Planet portfolio marketing team. He has more than 20 years of experience with successful startups and public companies in the networking industry, targeting both the service provider and enterprise markets. Kevin joined Ciena through the Cyan acquisition, where he was responsible for the company’s product marketing and field marketing activities.