Gene Kim, Jez Humble and other experts say that understanding and improving your value stream is what it takes to bring your software delivery organization to the next level. While almost every IT organization shares this goal, knowing how to make it happen can be paralyzing, because connecting all of your systems, tools and processes seems like such a daunting undertaking.
Recently, we participated in an innovative approach that proved that you can take concrete and measurable steps and see real outcomes in only two days.
Nationwide constantly strives to innovate and bring real change to its IT organization. The insurance and financial services company has zeroed in on improving its value stream. To begin the initiative, Nationwide developed a two-day workshop where the right disciplines would come together and determine the best way to make this happen. But the real innovation in this idea was that the organizers did not just talk about what to do. Instead, the end goal was to come up with at least one key metric the team could show to the Nationwide CIO by the conclusion of the workshop. This approach made the effort real, concrete and focused. While it was difficult, it worked.
Here are nine important lessons learned:
Lesson 1: Start with one artifact—specifically, defects. Measuring cycle time is extremely difficult because there are so many moving parts. To have a tangible outcome, start with defects because they tend to be the most well-understood artifacts and have relatively defined workflows and processes that people know and understand.
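To make this concrete, here is a minimal sketch of what a defect cycle-time metric can look like once you have opened and resolved dates for each defect. The record layout and field names are illustrative assumptions, not any particular tracking tool's schema.

```python
from datetime import date
from statistics import median

# Hypothetical defect records as exported from a tracking tool
# (field names are assumptions for illustration only).
defects = [
    {"id": "D-101", "opened": date(2024, 1, 2), "resolved": date(2024, 1, 9)},
    {"id": "D-102", "opened": date(2024, 1, 3), "resolved": date(2024, 1, 5)},
    {"id": "D-103", "opened": date(2024, 1, 4), "resolved": date(2024, 1, 18)},
]

def cycle_time_days(defect):
    """Elapsed days from opened to resolved for one defect."""
    return (defect["resolved"] - defect["opened"]).days

times = [cycle_time_days(d) for d in defects]
# The median is less skewed by a few long-running defects than the mean.
print(median(times))
```

Reporting the median rather than the mean is a common choice here, since one pathological defect can otherwise dominate the number you show leadership.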
Lesson 2: Commit to having a real metric based on real data. All too often, organizations take the “boil the ocean” approach. When asked what metric they want to start with, the answer is typically, “Everything.” That is precisely the wrong approach. Aiming for everything is likely to get you nothing. Focus on a tiny amount to start. First understand, then improve, your software delivery value stream. Not only should you start small, but for IT value streams, start specifically with a certain artifact (see Lesson 1). And choose only one metric for that artifact, because less is truly more in this case.
Lesson 3: Without a multi-disciplinary approach, you will fail. This part is extremely difficult. To be successful you will need the following mix of skills:
- Domain Expert: This person knows this part of the value stream inside and out. She is a practitioner in that area of IT and truly understands the who, what, when, where and why of that particular area of the value stream.
- Process Analyst: This person knows how the various people, systems and resulting artifacts are intended to operate together for the complete process. Often this person is closely aligned with the domain expert, but typically is not one of the practitioners actually participating in the process.
- Data Warehouse Specialists: These are the people responsible for populating and managing the data marts or data warehouses. They need significant data transformation capabilities and intimate knowledge of the data structures and formats the organization requires.
- Report Writers: As often happens with the expertise required to wrap up a process, this role is sometimes overlooked at the beginning. This is not the right approach. The person who will report on the project should be involved from the very start.
Some people might start by asking, “Don’t we need a data scientist?” After all, that is a title of authority when it comes to metrics. If you have a data scientist on staff, by all means, get them involved. But if you don’t, you can rely on someone with database skills—primarily, skills around writing advanced SQL (as old-fashioned as that may seem). Most of this work will not require advanced data science skills.
But the real challenge is to get all of those disciplines in the same room at the same time. This is no easy feat. Leadership must provide the necessary resources to ensure that each of these roles is represented, because if any of them is left out, you will not succeed.
Lesson 4: Work in parallel. The process and domain experts can work on identifying the artifacts to be used and manually mapping where each artifact originates, which systems touch it and when, and all the requisite status changes and their meanings. That work will inform the data warehouse team and report writers, but those two roles face a very different problem: they must determine how to get the data, regardless of the artifact identified, and transform the raw data into a usable format that report writers and BI analysts can work with in their visualization tools. By having both teams work in parallel, with identified checkpoints along the way, you will be more efficient when it comes to the three required deliverables.
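The data team's transformation problem can be sketched simply: tools emit a raw log of status-change events, one row per transition, while report writers need one row per defect with every status date side by side. A minimal sketch, with assumed event fields, of that pivot:

```python
# Raw status-change events as they might arrive from the source systems:
# one row per transition (field names and statuses are assumptions).
events = [
    {"defect": "D-7", "status": "Open",    "at": "2024-03-01"},
    {"defect": "D-7", "status": "In Test", "at": "2024-03-04"},
    {"defect": "D-7", "status": "Closed",  "at": "2024-03-06"},
    {"defect": "D-9", "status": "Open",    "at": "2024-03-02"},
    {"defect": "D-9", "status": "Closed",  "at": "2024-03-10"},
]

def pivot(events):
    """Fold the event log into one row per defect: {defect: {status: date}}."""
    rows = {}
    for event in events:
        rows.setdefault(event["defect"], {})[event["status"]] = event["at"]
    return rows

rows = pivot(events)
# Each defect now carries all of its status dates in one record, the
# shape report writers need for cycle-time charts.
```

In practice this reshaping happens in the warehouse's transformation layer rather than in application code, but the shape of the problem is the same.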
Lesson 5: There are always three required deliverables for each value stream cycle metric you are interested in.
- The artifact(s) of interest (defect, story, requirement, code commit, etc.)
- The following data:
- All required status changes and the dates of those changes
- Data to relate artifacts to each other (for the joins)
- Any ancillary information needed to slice and dice the data for the final visualization.
- The final visualization showing the metric (which is typically not one metric but rather a macro metric that has drill down capabilities and other perspectives on the metric).
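The second deliverable is the one that most often trips teams up, so here is a small sketch of it using an in-memory SQLite database. The tables, columns and the story-to-defect relationship are illustrative assumptions; the point is that the keys for the joins and the ancillary fields (here, `team`) must be captured alongside the status dates so the final visualization can be sliced.

```python
import sqlite3

# In-memory sketch of deliverable #2: artifact data plus the key
# (story_id) that relates defects to stories for the joins.
# Table and column names are illustrative, not a real tool's schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stories (story_id TEXT PRIMARY KEY, team TEXT);
CREATE TABLE defects (defect_id TEXT PRIMARY KEY, story_id TEXT,
                      opened TEXT, resolved TEXT);
INSERT INTO stories VALUES ('S-1', 'Claims'), ('S-2', 'Billing');
INSERT INTO defects VALUES
  ('D-1', 'S-1', '2024-01-02', '2024-01-09'),
  ('D-2', 'S-1', '2024-01-03', '2024-01-05'),
  ('D-3', 'S-2', '2024-01-04', '2024-01-18');
""")

# The ancillary field (team) lets the report slice cycle time per team.
rows = con.execute("""
    SELECT s.team, AVG(julianday(d.resolved) - julianday(d.opened))
    FROM defects d JOIN stories s ON s.story_id = d.story_id
    GROUP BY s.team ORDER BY s.team
""").fetchall()
print(rows)
```

Notice that without the `story_id` relationship, the per-team slice is simply impossible, which is why the relating data is called out as part of the deliverable.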
Lesson 6: Work both forward and backward. It is always easier to get started when you know where you want to end up. In this lesson, identifying the end goal is key. Once you have identified the “end” then you can go to the beginning and determine if the end goal is actually reachable. You also can identify any holes that exist between the beginning and your end goal. In this use case, the backward story looks like this:
- End Goal: A visualization (graph/chart) showing the metric of interest. The idea is to mock up exactly what you want the end result to look like. Create the mock up in the actual BI tool you intend to use as if you already have all of the necessary data at your fingertips.
- Step 4: Provide the data needed to create the visualization—it must be formatted in a way that will enable the report writers to easily and efficiently create the visualization that is your end goal. Hint: start by creating a spreadsheet that contains the exact data structure and formats desired. You will use fabricated data to start. The idea is to make sure the data is structured properly to reach the end goal (the visualization).
- Step 3: Create the interim views and tables that will enable you to get to Step 4.
- Step 2: Collect the raw data from the actual systems involved.
- Step 1: (The Beginning) Identify the artifact and all of the systems that artifact flows through.
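The Step 4 hint can be sketched directly: fabricate rows in the exact structure the mocked-up chart expects, before any real data exists, so the report writers can validate the shape end to end. The column names below are assumptions for illustration.

```python
import csv
import io

# Fabricated rows in the exact structure the mocked-up visualization
# expects (column names are assumptions): one row per month per
# severity, ready for the BI tool to plot without further reshaping.
fieldnames = ["month", "severity", "median_cycle_days", "defect_count"]
fabricated = [
    {"month": "2024-01", "severity": "High", "median_cycle_days": 6,  "defect_count": 14},
    {"month": "2024-01", "severity": "Low",  "median_cycle_days": 11, "defect_count": 30},
    {"month": "2024-02", "severity": "High", "median_cycle_days": 5,  "defect_count": 9},
]

# Write the spreadsheet-style file the BI tool would consume.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(fabricated)
print(buf.getvalue())
```

Once the mock chart renders correctly from fabricated rows like these, Steps 3 and 2 become a well-defined target: produce real rows in exactly this shape.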
Lesson 7: Invite outsiders to participate. It’s difficult to see the forest for the trees in your own organization. You need outsiders who are willing to ask the questions, probe in challenging areas, push the boundaries—people who have a different point of view because they are not in the game. This does not mean outsiders should be the workshop facilitators. They could be, but it is more important for them to actually participate in the sessions and help define and create the deliverables.
Lesson 8: Hold the workshop in a different space than you work in every day. Don’t use the conference room you typically use. Bring the team to a completely different environment—one that inspires creative thought and innovation. It may seem like a fluffy requirement, but participant feedback will confirm that location has a major impact on productivity and focus.
Lesson 9: Get executive leadership buy-in. The executive VP was genuinely impressed with what the team accomplished in only two days. And the positive feedback and kudos the team received has probably fueled their innovation engines for months to come. To feel a sense of accomplishment and be recognized by the executives in your organization has significant morale impact.
Improving your value stream is no small feat. But by designing and executing a workshop like this, you will see benefits and clear wins immediately. By the end of our two days, we had a prototype of defect cycle time and a defined, repeatable process. Equally important, the people involved were given the opportunity to fail fast and pivot where needed, which inspired innovative thinking and creativity. And being able to present real progress with real results to the CIO in two days’ time fueled institutional buy-in and momentum to drive a long-term value stream improvement program.
About the Author / Nicole Bryan
Nicole Bryan is Vice President of Product Management at Tasktop Technologies. Nicole has extensive experience in software and product development, focused primarily on bringing data visualization and human considerations to the forefront of Application Lifecycle Management. Most recently, she served as director of product management at Borland Software/Micro Focus, where she was responsible for creating a new Agile development management tool. Prior to Borland, she was a director at the New York Stock Exchange (NYSE) Regulatory Division, where she managed some of the first Agile project teams at the NYSE, and VP of engineering at OneHarbor (purchased by National City Investments). Nicole holds a Master of Science in Computer Science from DePaul University. She is passionate about improving how software is created and delivered – making the experience enjoyable, fun and yes, even delightful.