Adding to your tool stack should entail more thought than, “Start your free trial today!” It’s great when there is an application need and cloud-based tools are easily grabbed with a few clicks. However, surveying alternatives before making a tool stack change can prevent costly mistakes. Trade studies can help vet best-fit tools for evaluation.
Short for "trade-off study," trade studies originated in the aerospace industry as a decision-making aid. Teams of engineers (or developers) usually generate multiple proposed technical solutions for any non-trivial problem. A little searching will turn up the term multiple-criteria decision analysis, with three popular techniques: the analytic hierarchy process, the Kepner-Tregoe method and the Pugh method.
For DevOps teams, I lean toward variants of the Pugh method because it avoids creating an overly complex weighting and scoring scheme. Pugh analysis has gained popularity among Six Sigma aficionados for the same reason. Multi-disciplinary teams find the process easy to understand and the results just as useful as those of more algebraic methods that take longer and need more input.
Let’s look at three steps in a practical approach: setting criteria, ranking best-fits and starting evaluation.
Setting Trade Study Criteria
Most tool stack disasters result from discovering a tool doesn’t meet all the requirements as the project unfolds. In my previous life, our IT team went all-in on a big-name ERP application partly because it featured web-based portals for suppliers. My e-business team discovered those portals required that specific port numbers be opened at the firewall, which corporate security immediately rejected. We coded a way out, using Adobe ColdFusion to query the database directly, secured with LDAP sign-on, but it took months.
You could search for the “information I could have used YESTERDAY” GIF, or you could spend the time setting the right criteria up front. Pugh analysis starts with a matrix of criteria rows versus candidate tool columns. What most sources won’t tell you about Pugh analysis is a trick in generating that criteria list:
- Start with must-have features, with current requirements and anticipated future requirements.
- Research tools (data sheets!) for near-fits against those must-have criteria and create the matrix.
- Add all the features prominently listed for each of those near-fit tools to the criteria.
- Search again for more near-fit tools matching features on the extended list, adding to the matrix.
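The iterative steps above can be sketched in code. Here is a minimal Python sketch of the criteria matrix, using entirely hypothetical tool and feature names and assuming the feature sets were hand-collected from data sheets:

```python
# Pugh-style criteria matrix: rows are criteria, columns are candidate tools.
# Tool names and features below are hypothetical placeholders.

must_haves = ["GA4 integration", "scheduled reports", "role-based access"]

# Features gleaned from each near-fit tool's data sheet.
tools = {
    "Tool A": {"GA4 integration", "scheduled reports", "role-based access",
               "white-label PDFs"},
    "Tool B": {"GA4 integration", "scheduled reports", "API export"},
    "Tool C": {"GA4 integration", "role-based access", "API export",
               "white-label PDFs"},
}

# Extend the criteria with every feature any near-fit tool advertises,
# then keep must-haves visually separated from extended features.
extended = sorted({f for feats in tools.values() for f in feats} - set(must_haves))
criteria = must_haves + extended

for feature in criteria:
    kind = "must-have" if feature in must_haves else "extended"
    row = ["X" if feature in feats else " " for feats in tools.values()]
    print(f"{feature:20} [{kind:9}] {row}")
```

The printed grid is the same artifact you would build in a spreadsheet; the point of scripting it is that rechecking every tool after adding a new feature becomes a data-entry task instead of a manual audit.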
A common mistake is stopping after the first two bullets. Almost every trade study I’ve done has discovered additional tools that fit the must-have list but didn’t emphasize all those features on their websites. It’s a bit time-consuming, because every time you find a feature of interest you must go back and recheck every other tool on your near-fit list for it.
Why add non-requirements? Skip them and you’ll miss innovation opportunities outside the Venn diagram intersection. My recent client study on marketing reporting tools found one touting support for a dream-list feature. Digging into that tool revealed it may not support two must-have requirements but does support the others.
That’s why the trade study criteria list should visually differentiate must-haves from extended features. I usually shade the extended feature rows, then plot how near-fit tools stack up against all criteria. You can certainly add weighting to features for a more sophisticated analysis, but I find a simple must-have versus nice-to-have usually suffices.
Now start scoring. A strict Pugh analysis uses a plus-same-minus scoring system: plus means the tool has a clear advantage on that feature, same means parity, and minus means the tool is at a disadvantage compared to the others. I sometimes simplify that to an X if a tool has the feature and a blank if it doesn’t. This is mostly a data sheet comparison, but you may want to inject other information, such as insights gleaned from reviews.
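Tallying the plus-same-minus marks is mechanical. A minimal sketch, with hypothetical tools and scores, where each tool's net score is its count of pluses minus its count of minuses:

```python
# Pugh scoring: "+" = clear advantage, "s" = same, "-" = disadvantage.
# Tool names and marks are hypothetical placeholders; one mark per criterion.

scores = {
    "Tool A": ["+", "s", "+", "-"],
    "Tool B": ["s", "s", "+", "+"],
    "Tool C": ["-", "+", "s", "s"],
}

def net_score(marks):
    """Net Pugh score: pluses minus minuses (sames contribute nothing)."""
    return marks.count("+") - marks.count("-")

# Rank tools by net score, best first.
ranked = sorted(scores, key=lambda tool: net_score(scores[tool]), reverse=True)
for tool in ranked:
    print(tool, net_score(scores[tool]))
```

Keeping the arithmetic this simple is the point of the Pugh method: the ranking stays easy to explain in a team discussion, unlike weighted-sum schemes where a debatable weight can silently flip the winner.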
What you’ll find is a best-fit list, and it probably won’t be perfect. You’d like to see a couple of tools capturing most or all of the must-haves, and a few more capturing some of them plus extended features for contrast. As an example, here’s the trade study matrix for that marketing reporting tool assessment I mentioned, with the vendor names omitted.
Notice this doesn’t have cost information yet. Cost biases technical decisions if applied too early, and feature coverage doesn’t necessarily correlate with cost. Vendors also don’t price their stuff the same way. Some price by developer seats, some by data source integrations, some by monthly tracked users (MTUs) on the web.
With a best-fit list in hand, it’s time to discuss the findings with your team and add information for evaluation. Confirm the scoring, check the vendor pricing models, ask the vendors about unclear feature support and download those free trials to see how they work. Consider how to cover any must-have features left uncovered, or whether they really are must-haves if few tools support them. Maybe swapping out another tool in the tool stack gets better coverage. Think about the innovation those extended features may enable.
Want simpler, faster, actionable analysis without a massive RFP? Trade studies do that. I think it’s valuable to have one person on your team be the keeper of trade studies, someone skilled at interviewing for requirements and refereeing debates during evaluation. You may want to bring in a consultant skilled at competitive research and analysis to help develop your process. In any case, adapt your approach until trade studies help you make better selections—there is no one right answer.