When an organization takes on a new initiative, it typically develops a procedure to optimize execution and limit blowback from the transition. Whether it’s the introduction of new processes, personnel, structures or technologies, there are a number of established ways—often gleaned from experience—to support the individual, the team and the wider organization.
That said, while this procedure may work for one department or team in a certain discipline—say, when marketing invests in a new marketing automation tool—what if the initiative impacts multiple teams, processes, structures and technologies, as it does with enterprise software delivery? A standardized procedure won’t work; there are just too many variables to consider.
And as organizations continue to provide digital services and products for their discerning customers, the number of variables will only increase as enterprises look to ramp up their software delivery to gain a competitive edge in a digital world. How do you manage all this change?
Moreover, what if this change disrupts some of your best teams, many of whom are fiercely protective of their own unique processes? Much time and thought have gone into honing a process that is right for them. Who are you to tell them to pull up the floorboards and start again? Sure, the wider organization may want everyone operating from within one end-to-end system, but that’s just not how software delivery works. Nor, for that matter, is it how practitioners think.
You Can’t Please Everyone — or Can You?
Very rarely do practitioners look beyond their core function and specialism, unless it truly impacts their job. For instance, developers want clear requirements to write accurate code that is tested, validated and out the door. They have to write and maintain reams of code, meaning they have little time to think about the rest of the value stream. Typically, if you’re not a business analyst, tester or in ops, then you’re unlikely to be on their radar. They certainly won’t welcome any interruption or change to the way they operate from someone “outside” their world.
This dilemma is exactly why organizations need to think long and hard about how they deal with these differences within the software delivery value stream. Not only can practitioners be hostile and resistant to change, but it’s extremely complicated to deconstruct or adjust these processes. They’re highly detailed in practice (although rarely documented, due to the frequency with which they’re modified!), covering:
- Work items to utilize/prioritize
- How backlog items vs. issues are represented on their backlogs
- How items flow through day-to-day processes
- Whether they use Scrum and/or Kanban
- The information developers are required to submit to management
- The person responsible for submitting certain elements of the management report
When Standardization Isn’t Enough — How to Accommodate the Cowboys
While standardization is possible (and preferred) across organizations, e.g., ITSM for managing multiple systems for multiple functions from one centralized point of control, unifying teams across the software value stream isn’t so simple. There always will be one or two rogue teams that will reject any new process because they know better and only want to do things their tried-and-tested way. We call these teams “cowboy teams.”
Cowboy teams, while problematic, can also be a force for good, since they are often leaders in experimentation and innovation: Consider Alan Turing and his Enigma-breaking team at Bletchley Park. Nor are these teams merely stubborn sticks in the mud. Often it just boils down to the fact that introducing a new process comes with a lot of baggage that people naturally want to avoid, including:
- Financial cost (new tools, staff and accompanying training)
- Time cost (training etc.)
- Reduced productivity during implementation period
- No guarantee of success
- Potential of negative impact on function
- Detrimental influence on employee satisfaction
- Unforeseen complications and conflicts such as mismatch with tools used by the team
Last but not least, haphazardly introducing a new process can undermine your software delivery value stream. We know that large-scale software development and delivery requires all teams, tools and disciplines within the value stream to be connected for end-to-end visibility, traceability and governance over the whole process.
And in an ideal world, that process would be underpinned by a single cohesive system. In reality, it isn’t. Teams work in silos because their purpose-built tools cannot automatically flow information to the systems used at other stages of the value stream. Not only does this state foster siloed thinking, but there always will be one or more teams that will resist any wholesale change to their way of working, even if they’re told it will help the organization meet its business objectives.
Fortunately, there’s a way to unify and connect your value stream while simultaneously accommodating these cowboy teams, and that’s a modular tool chain.
Modularity — The Backbone of Autonomous Teams Functioning Within the Bigger Picture
The software delivery process isn’t a conventional manufacturing process. It’s more like a network of thousands of people with different functions, tools and processes that work on different components of the same project at the same time. And these components simply must communicate in real time to share collaborative data. Supporting such a system is complex, because all the best-of-breed tools for planning, building and delivering software do not naturally integrate—there’s no interconnection, the lifeblood of any network.
However, you can connect these tools via model-based integration through a modular value stream; i.e., not point-to-point integration, but an adaptable, integrated network that communicates through a model in the center of this complex universe. Not only does this account for all different tools, teams, disciplines and processes, but this paradigm enables organizations to plug new elements in and out without disrupting the all-important flow of information across the value stream.
This means any information shared throughout the value stream travels in a normalized format—a common language, if you will—that allows teams to operate in their preferred tool and methodology. For example, if management wants to know the number of tickets/defects submitted per program during each release, they don’t want to hear there were 10 “high priority” ones from one team and 15 “important” ones from another. If “high” and “important” mean the same thing, priorities should be normalized and reported back to management in a standardized fashion.
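To make the idea concrete, here is a minimal sketch of that priority normalization in Python. The tool names, priority vocabularies and mapping tables are all hypothetical, invented for illustration; a real value stream integration product would maintain these mappings in its central model rather than in code like this:

```python
# Hypothetical sketch: normalizing per-tool priority values through a
# central model so reports read consistently across teams.

# Each team's tool expresses priority in its own vocabulary; the mapping
# translates every vocabulary into the central model's terms.
TOOL_PRIORITY_MAPPINGS = {
    "team_a_tracker": {"high priority": "high", "medium": "medium", "low": "low"},
    "team_b_tracker": {"important": "high", "normal": "medium", "minor": "low"},
}

def normalize(tool, raw_priority):
    """Translate a tool-specific priority into the central model's term."""
    return TOOL_PRIORITY_MAPPINGS[tool][raw_priority.lower()]

# Defects as each tool reported them during a release.
defects = [
    {"tool": "team_a_tracker", "priority": "High Priority"},
    {"tool": "team_a_tracker", "priority": "High Priority"},
    {"tool": "team_b_tracker", "priority": "Important"},
]

# Management report: counts by normalized priority, regardless of source tool.
report = {}
for defect in defects:
    level = normalize(defect["tool"], defect["priority"])
    report[level] = report.get(level, 0) + 1

print(report)  # {'high': 3}
```

Each team keeps its own labels; only the mapping into the shared model is standardized, which is exactly what lets management read one consistent report.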
Thanks to this automated flow of information into a language everyone understands, the whole value stream, even the cowboys, is on the same page without laborious, time-consuming manual forms of communication, such as email threads, status meetings, phone calls, spreadsheets and duplicate data entry. Not only does this improve working conditions for practitioners, enhancing employee engagement, but it also can deliver up to $10 million a year in productivity-related savings.
So, the choices for organizations are either to:
- Force all teams to use the same system, creating a litany of individual, team and organizational problems due to disruption, or
- Use a modular value stream method that maps each team to the standardized model, while at the same time allowing teams to continue working in their preferred system using their carefully devised processes.
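The second option can be sketched in a few lines of Python. Everything here is hypothetical—the adapter classes, team names and record fields are invented to show the shape of the approach, not any vendor’s actual API: each team registers an adapter that translates its tool-native records into the shared model, so teams can be plugged in or out without touching the rest of the network.

```python
# Hypothetical sketch of a modular value stream: per-team adapters map
# tool-native records into one shared central model.

class Adapter:
    """Base adapter: translate a tool-native record to the central model."""
    def to_model(self, record):
        raise NotImplementedError

class ScrumBoardAdapter(Adapter):
    # This team calls work items "stories" and tracks board columns.
    def to_model(self, record):
        return {"title": record["story"], "state": record["column"].lower()}

class KanbanToolAdapter(Adapter):
    # Another team calls them "cards" and uses lanes instead of columns.
    def to_model(self, record):
        return {"title": record["card_name"], "state": record["lane"].lower()}

# The registry is the modular part: adding or removing a team is one entry,
# and no other team's adapter needs to change.
registry = {
    "team_scrum": ScrumBoardAdapter(),
    "team_kanban": KanbanToolAdapter(),
}

def unified_view(records_by_team):
    """Flow every team's items into one normalized, end-to-end view."""
    view = []
    for team, records in records_by_team.items():
        adapter = registry[team]
        view.extend(adapter.to_model(r) for r in records)
    return view

items = unified_view({
    "team_scrum": [{"story": "Login page", "column": "In Progress"}],
    "team_kanban": [{"card_name": "API rate limits", "lane": "Done"}],
})
print(items)
```

Both teams keep their carefully devised processes and vocabularies; only the thin adapter layer knows about the shared model, which is what makes unplugging or replacing a team’s tool a local change rather than an organization-wide disruption.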
Change is constant, inevitable and vital for progression and innovation. At the same time, change can be a threat to those important parts of your business model that are working and delivering business value. With value stream integration, you can have the best of both worlds: Your pioneering cowboys can continue to work in the ways that make them happy and productive, while the rest of the stream can continue to collaborate, evolve and adjust to the fast-paced digital landscape.
About the Author / Mara Puisite
Mara Puisite is a Pre-Sales Engineer at Tasktop, bringing with her years of industry and technical experience. Prior to her current role, Mara worked as a Product Manager running Tasktop’s connector program and managing the development and maintenance of 50+ integration systems. Today, she helps companies solve various bottlenecks in their software development and delivery process through the power of value stream integration. Connect with her on LinkedIn.