Whatever moniker you give it – NetOps, DevOps for Networks, operationalization – DevOps will continue to make inroads into the network infrastructure because it is ultimately part of the application deployment lifecycle. While DevOps folks call it “application delivery,” the reality is that an application is not ready to deliver to a user (internal or external) until all its requisite services have been provisioned and configured.
Yes, infrastructure and network devices are increasingly API-enabled and supportive of a variety of tools and frameworks most often associated with DevOps – Puppet, Chef, OpenStack, VMware – and those more commonly associated with just the network – Arista, Cisco ACI, and OpenDaylight.
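As a sketch of what “API-enabled” means in practice, the snippet below assembles the kind of JSON payload a REST-managed load balancer might accept when provisioning a virtual server for one application. The endpoint, resource names, and fields here are hypothetical assumptions for illustration, not any specific vendor’s API – real devices (Arista eAPI, Cisco ACI, and so on) each define their own schema.

```python
import json
from urllib.request import Request

# Hypothetical management endpoint; real devices expose their own URL scheme.
DEVICE_API = "https://lb.example.com/api/v1/virtual-servers"

def build_provision_request(app_name, vip, port, pool_members):
    """Assemble an HTTP request declaring a virtual server for one application."""
    payload = {
        "name": f"vs-{app_name}",
        "address": vip,
        "port": port,
        "pool": [{"address": m, "port": port} for m in pool_members],
    }
    return Request(
        DEVICE_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Declare a virtual server for a hypothetical "checkout" application.
req = build_provision_request("checkout", "10.0.0.42", 443,
                              ["10.0.1.10", "10.0.1.11"])
print(req.get_method(), req.full_url)
print(json.loads(req.data)["name"])
```

The point is less the payload itself than the workflow it enables: a tool like Puppet or Chef can generate and send this declaration as part of the same run that deploys the application.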
But it takes more than APIs. APIs do not inherently bestow upon devices the ability to support multi-tenancy – that is, the isolation of services unique to a team (or application) that is required to effectively deploy application-supporting infrastructure services within what is traditionally a shared environment.
Many of our customers have made DevOps a reality in their organization with delivery teams that build, deploy, and support their own applications and services. Unfortunately, a regular roadblock on that journey is allowing teams to have superuser privileges in production environments. In most organizations, the production environment is shared, and therefore risky to provide access widely. It is effective when we can partition infrastructure along team bounds, so that those teams can have safe isolated access to do their work, without risking impact to other systems. Where cloud environments are used, this is much easier to implement, aligning account structures to team boundaries. [emphasis mine]
This has always been an issue where shared services are relied upon. A change to support one application (or team) can be disruptive or cause issues for another application or team. Thus, most infrastructure and network teams staunchly refuse to let anyone in the application or operations side of the house play with their toys.
They are the network ninjas, guarding their configurations with their very lives.
It’s this kind of conflict that needs resolution if we – as a unified IT organization – are to reach goals of improving time to market and deployment frequency. It doesn’t matter if the app team can deliver an app in 3 weeks when it takes 3 months to get it to the customer because network and infrastructure staff are overloaded.
There are two good solutions to this problem*.
1. The governance of more application-affine infrastructure services – load balancing, caching, proxies, and so on – necessarily moves closer to the application and under the control of app and operations teams. This is supported by concepts like Network Service Virtualization, proposed in Lippis Report 217, as well as the increasing adoption of open source proxy-based (virtual and software) services. This model assumes continued sharing of network and infrastructure services where appropriate while migrating per-application services to the more agile operational architecture.
2. The introduction of multi-tenancy into the infrastructure architecture provides a similar approach while still maintaining shared (hardware) resources. A single, high-capacity system hosts multiple virtual instances of its services, each dedicated to a team or an application as the organization requires. This spreads the cost of the hardware across the entire organization while allowing a more per-application approach to provisioning, configuration and, ultimately, cost. Isolation also addresses the shared-resource and configuration concerns that cause consternation among network and infrastructure teams: the underlying shared system remains under their management, while role-based control over application services is delegated to teams or designated responsible individuals.
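The multi-tenant model in option 2 can be sketched in a few lines: one shared platform (standing in for the high-capacity hardware) hosts per-team partitions, and a simple role check lets a team change only services inside its own partition. The class, team, and user names are illustrative assumptions, not any particular product.

```python
# Minimal sketch of multi-tenant partitioning with role-based control.
class SharedPlatform:
    def __init__(self):
        self.partitions = {}   # team -> {service_name: config}
        self.admins = {}       # team -> set of users allowed to change it

    def create_partition(self, team, admins):
        """Done by the network/infrastructure team that owns the shared system."""
        self.partitions[team] = {}
        self.admins[team] = set(admins)

    def configure_service(self, user, team, service, config):
        """Team-level change: allowed only inside the caller's own partition."""
        if user not in self.admins.get(team, set()):
            raise PermissionError(f"{user} may not modify partition '{team}'")
        self.partitions[team][service] = config

platform = SharedPlatform()
platform.create_partition("checkout", admins=["alice"])
platform.create_partition("search", admins=["bob"])

# Alice can provision a proxy inside her own team's partition...
platform.configure_service("alice", "checkout", "proxy", {"port": 8443})

# ...but a change to another team's partition is rejected by the role check.
try:
    platform.configure_service("alice", "search", "proxy", {"port": 8443})
except PermissionError as e:
    print("blocked:", e)
```

The isolation is the point: a mistake scoped to the “checkout” partition cannot disturb “search,” while the platform itself stays under central management.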
Both approaches are valid and in many cases both architectural solutions will be used to resolve the issues caused by traditionally shared infrastructure.
The key takeaway is that there are answers to the need for partitioned infrastructure – answers that support DevOps-driven workflows and processes and provide a path toward a more seamless, timely end-to-end application deployment process.
* Oh, I’m sure there’s more, but right now these appear to be the best two options.