Bimodal IT is a Gartner concept created to help CIOs understand that IT organizations need to support both traditional and agile modes of IT solution delivery and operation in order to make wise choices about infrastructure, process, people, and tools. Much attention has been given, and rightfully so, to the disruptive and innovative “Mode 2” teams building on the practices established by web-scale giants like Amazon and Google. However, in enterprises there still exists a sprawling majority of “Mode 1” traditional, industrialized applications run by teams that must focus on risk aversion, security and compliance. While these will likely never match the innovative velocity of agile IT teams and applications, traditional IT teams are still under pressure to get faster, more agile, innovative, productive and cost-efficient. Yet the Mode 2 playbook can’t be applied wholesale to Mode 1 transformation work, because it often assumes a greenfield situation, rather like new home construction. By contrast, Mode 1 modernization is more like renovating an existing home: it must proceed in steps that address the present realities of applications, legacy infrastructure, business mandates, organizational interactions, and technical skills.
Understanding Bimodal IT
The Bimodal IT concept is meant to distinguish between applications and teams that are traditional in their focus and those that are agile. In her blog post on Bimodal IT, Gartner analyst Lydia Leong explains it this way:
“Traditional IT is focused on “doing IT right”, with a strong emphasis on efficiency and safety, approval-based governance and price-for-performance. Agile IT is focused on “doing IT fast”, supporting prototyping and iterative development, rapid delivery, continuous and process-based governance, and value to the business (being business-centric and close to the customer).”
The reason Gartner promotes the idea of Bimodal IT to CIOs is to help them see the danger in trying to optimize processes, human resources, infrastructure, and tools to cover two such different modes at once, and as a result optimizing for neither. Rather, the right move, according to Gartner, is to give agile teams room to develop a brand new set of capabilities that can then help transform the whole organization. Lydia continues in her blog:
“We’ve found that organizations are most successful when they have two modes of IT — with different people, processes, and tools supporting each. You can make traditional IT more agile — but you cannot simply add a little agility to it to get full-on agile IT. Rather, that requires fundamental transformation. At some point in time, the agile IT mode becomes strategic and begins to modernize and transform the rest of IT, but it’s actually good to allow the agile-mode team to discover transformative new approaches without being burdened by the existing legacy.”
The Contrast Between Agile and Traditional IT
Traditional IT and Agile IT are different in a number of ways. Perhaps an effective way to explain the difference is to picture what these applications and teams look like.
The classic agile IT initiative in the enterprise is around a mobile, e-commerce or web-centric application that is aimed at moving into new markets, improving engagement with new customer segments, or disrupting a current market. The application is fully virtualized, and the preferred infrastructure is a public cloud such as AWS. The team is conceived as a unified DevOps team of line-of-business product managers, developers, testers, and operations folks working collaboratively with an agile development methodology. Automation is built deeply into the process, with infrastructure-as-code tools used to manage public cloud virtual machines and related infrastructure components via well-documented RESTful APIs, plus other DevOps toolchain elements automating and managing most processes, including continuous integration, continuous delivery and continuous deployment of code into production.
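To make the infrastructure-as-code idea concrete, here is a deliberately minimal sketch: the desired environment is declared as data, and a provisioning step reconciles that declaration through an API. Every name in it is hypothetical, and the API call is simulated rather than pointing at any real cloud provider.

```python
# Minimal, hypothetical sketch of "infrastructure as code": the desired
# environment is declared as data, then reconciled via (simulated) API calls.
import json


def declare_environment(name, machines):
    """Build a declarative spec for a set of virtual machines.

    `machines` is a list of (role, image, count) tuples.
    """
    return {
        "environment": name,
        "machines": [
            {"role": role, "image": image, "count": count}
            for role, image, count in machines
        ],
    }


def provision(spec, api_call):
    """Walk the spec and issue one API call per requested machine."""
    results = []
    for machine in spec["machines"]:
        for _ in range(machine["count"]):
            results.append(
                api_call("POST", "/v1/instances",
                         {"role": machine["role"], "image": machine["image"]})
            )
    return results


if __name__ == "__main__":
    spec = declare_environment("webshop-staging", [
        ("web", "ubuntu-20.04", 2),
        ("db", "postgres-13", 1),
    ])
    # Stand-in for a real REST client; a genuine toolchain would call the
    # cloud provider's API here instead.
    fake_api = lambda verb, path, body: {"status": "created", **body}
    print(json.dumps(provision(spec, fake_api), indent=2))
```

The point of the declarative shape is that the same spec can be re-applied to rebuild an identical environment at any time, which is what makes automated CI/CD pipelines practical.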
Traditional IT stands in pretty stark contrast. Let’s consider an industrialized, mission-critical application that has evolved slowly for years. The application (which is usually composed of anywhere from 5 to 10 or more components) is likely running on a mix of legacy (think RISC UNIX), dedicated (x86-based) and virtualized servers (VMware and perhaps Hyper-V) running in a private or hosted private datacenter, using a mix of traditional and software-defined storage and connected by Ethernet and Fibre Channel switches. The application components themselves may be legacy code, making them difficult to quickly port to more modern infrastructure. For example, I was talking recently with a colleague who told me about his sister who still writes code in Fortran for a major bank. With such a large, complex, long-lived and mission-critical application suite, the IT organization maintaining it and the developers creating updates and upgrades are highly concerned with quality, reliability and predictable performance. The nature of the overall team working on an industrialized application is a far cry from the sort of “two-pizza” DevOps teams on the agile IT side. Rather, it’s more of a loose consortium of teams such as developers, testers, security and compliance, plus outsourced contractors and vendors. The development process is waterfall-based, managed by an application lifecycle management (ALM) suite of some sort. Automation is scant, and in particular it takes a long time to pull together infrastructure for those dev, test and other teams to perform their jobs. Despite that, a production-like environment for testing, security, compliance and the rest is considered necessary to minimize downtime and other quality risks. As a result, release cycles can stretch into many months.
Modernizing Traditional IT
When my wife and I remodeled our 1908 Craftsman-style house about 15 years ago, we discovered that an earlier renovation had actually cut out a third of a key supporting beam, leaving one part of the house literally hanging on a beam that was only attached to one side of the house. Needless to say, we were a bit alarmed to find that out. When dealing with mission-critical applications and infrastructure, you can’t be careless. As mentioned earlier, simply taking the playbook for a greenfield, agile IT application and trying to apply it wholesale to a legacy, industrialized application usually won’t work. Making traditional IT more agile requires a step-wise renovation: a risk-mitigated path that leads towards agile and DevOps practices, but without a great leap into the dark. The steps need to establish the building blocks of agile/DevOps practice without disrupting the application, without introducing downtime, and while delivering solid returns along the way. Successful legacy application and service ‘remodeling’ efforts observed in highly traditional organizations such as telecommunications and cable operators provide good ideas on how to take those steps:
- Identify or Procure Automation Skills: The initial steps in remodeling Traditional IT are about incrementally improving current practice with automation while at the same time making it ready for agile and DevOps. This means that to be successful, you need to get some automation skills into the mix. A team that can’t take hold of its own destiny in modernizing a Traditional IT application is not going to achieve sustainable change, and that means identifying or procuring automation expertise and protecting that person’s time to focus on the renovation rather than on firefighting.
- Private Cloud IaaS: One of the most serious obstacles to agility is the painful amount of time and effort it takes to allocate and provision infrastructure for developers, testers and others, and the lack of standardized, consistent infrastructure environments caused by manual processes. This is exactly the issue that drives some development teams to go shadow IT and start developing on AWS. Often, these development teams then discover that integrating that software into the production environment is plagued with translation issues. The first order of business, then, is for the infrastructure team to establish IaaS that can serve up full production-like infrastructure environments to devs, testers and other constituents. This isn’t trivial when you’re dealing with a combination of legacy servers, bare-metal x86, VMs, and perhaps networking, security and other devices.
The IaaS delivers a number of key improvements. First of all, it speeds up access: developers, security, compliance and other personnel can now get infrastructure in minutes rather than days or weeks. More than just speed, IaaS enables various teams to work asynchronously, 24×7, so that quality progresses evenly, unimpeded by office hours. This is especially helpful when global teams, contractors or vendors are involved. IaaS also provides standardization and consistency across all efforts. Finally, IaaS cuts costs. Significant amounts of operations personnel time that were devoted to painstaking manual infrastructure assembly can be repurposed to higher-value activities, and infrastructure utilization rates can easily double or triple almost immediately, which reduces the need to purchase more capital equipment.
A word here about GUIs. One of the key notes of dissonance that often occurs is when people hear developers say that they don’t want GUIs, they want APIs. That is, in my view, absolutely valid, but more so for Mode 2 agile IT teams. It turns out that IaaS GUI portals can be rather helpful in providing an easily understood reference point to multiple teams that may have highly varying understandings of the end-to-end picture of the application and its infrastructure. For example, if an IaaS self-service portal provides a full visualization of the end-to-end infrastructure involved, a developer who might assume that latency is always near zero might become more aware that there is a WAN in the middle of the end-to-end infrastructure.
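The self-service IaaS idea above can be sketched in miniature: a catalog of environment blueprints, each describing the full production-like mix of legacy, bare-metal and virtual components, that any team can "provision" on demand. The blueprint contents and provisioning logic here are purely illustrative, not any particular product's model.

```python
# Hypothetical sketch of a private-cloud IaaS blueprint catalog. Each
# blueprint captures a full production-like environment, including the
# legacy and bare-metal pieces a VM-only view would miss. Provisioning is
# simulated: components are simply marked "reserved" for the requester.
BLUEPRINTS = {
    "billing-prod-like": [
        {"name": "app-tier", "kind": "vm", "platform": "vmware"},
        {"name": "batch-host", "kind": "bare-metal", "platform": "x86"},
        {"name": "core-db", "kind": "legacy", "platform": "risc-unix"},
        {"name": "wan-link", "kind": "network", "platform": "mpls"},
    ],
}


def provision_environment(blueprint_name, requester):
    """Reserve every component in a blueprint for a requesting team."""
    components = BLUEPRINTS[blueprint_name]
    return {
        "blueprint": blueprint_name,
        "requester": requester,
        # Copy each component so the catalog itself is never mutated.
        "components": [dict(c, state="reserved") for c in components],
    }


if __name__ == "__main__":
    env = provision_environment("billing-prod-like", "qa-team")
    for component in env["components"]:
        print(component["name"], component["kind"], component["state"])
```

Because every request draws from the same blueprint, every team gets an identical, standardized environment, which is exactly the consistency benefit described above.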
- Test as a Service: Many development and testing teams have taken some stab at test automation. However, test automation tends to live and be maintained at the individual level. The natural follow-on to IaaS is to build a standardized regression or certification suite of automated tests that can be offered from the IaaS portal—test as a service (TaaS). This allows teams to access production-like environments, do their development work, then easily launch a comprehensive certification test. Tests should naturally report their results into the ALM tool so that individuals and teams can review results in near real time to keep the process moving. The benefits are similar to those of IaaS: increased velocity, standardization, productivity and efficiency.
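A TaaS run might look something like the following sketch: a shared, named set of certification checks executes against an environment, and each result is pushed through a reporting callback (standing in for the ALM integration). The individual checks here are placeholders, not real certification tests.

```python
# Illustrative test-as-a-service run: a standardized certification suite
# executes against an environment description, and each result is reported
# through a callback (a stand-in for pushing results into an ALM tool).
def certification_suite():
    """A fixed, shared set of named checks (stand-ins for real tests)."""
    return [
        ("connectivity", lambda env: "core-db" in env),
        ("throughput", lambda env: env.get("links", 0) >= 1),
    ]


def run_certification(env, report):
    """Run every check in the suite and report each result as it lands."""
    results = {}
    for name, check in certification_suite():
        passed = bool(check(env))
        results[name] = passed
        report(name, passed)  # e.g. push the result into the ALM tool
    return results


if __name__ == "__main__":
    env = {"core-db": "up", "links": 2}
    log = []
    print(run_certification(env, lambda name, ok: log.append((name, ok))))
```

Because the suite is defined once and shared, every team certifies against the same bar, which is what turns ad-hoc, individually maintained test scripts into a service.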
- DevOps Evolution: With a combination of IaaS and TaaS, the stage is now set to deploy a DevOps toolchain and start doing cross-functional technical collaboration and agile methodologies. For example, the IaaS and TaaS can be launched by a unified continuous integration server upon software development check-ins from any team. Of course, DevOps doesn’t just mean tools and automation, but coalescing around collaborative dev/test infrastructure and common certification tests to build a common, agile development practice is one of the steps towards a more broadly responsive delivery cycle.
Traditional IT may not be the sexiest beast in the technology forest, but it’s often mission-critical. Gartner’s Bimodal IT concept encourages CIOs to let go and let Mode 2 agile IT teams do their thing to create new culture and practices. Translating that to Traditional IT requires a more step-wise approach. Progressing from manual processes to IaaS to systematized, self-service test automation as the foundation for DevOps evolution is a sound way to proceed. With these steps, Traditional IT can “do IT right” and do IT faster.
About the Author: Alex Henthorn-Iwane
Alex Henthorn-Iwane joined QualiSystems in February 2013 and is responsible for worldwide marketing and public relations. Prior to joining QualiSystems, Alex was vice president of marketing at Packet Design, Inc., a provider of network management software, and has 20+ years of experience in senior management, marketing, and technical roles at networking and security startups.
Through his roles at QualiSystems, Packet Design, CoSine Communications, Corona Networks and Lucent Technologies he has acquired expertise in cloud computing and the opportunities presented through virtualization. He has written for Embedded Computing, Virtual Strategy Magazine, Datamation, SDN Central, Datacenter Knowledge and InformationWeek.