
Using Trade Studies in Vetting Tool Stacks

Adding to your tool stack should entail more thought than, “Start your free trial today!” It’s great when there is an application need and cloud-based tools are easily grabbed with a few clicks. However, surveying alternatives before making a tool stack change can prevent costly mistakes. Trade studies can help vet best-fit tools for evaluation.

Short for trade-off, trade studies originated in the aerospace industry as a decision-making aid. Teams of engineers (or developers) usually generate multiple proposed technical solutions for any non-trivial problem. A little searching will find the term multiple criteria decision analysis, with three popular techniques: the analytic hierarchy process, the Kepner-Tregoe method and the Pugh method.

For DevOps teams, I lean toward variants of the Pugh method because it avoids creating an overly complex weighting and scoring scheme. Pugh analysis has gained popularity among Six Sigma aficionados for the same reason. Multi-disciplinary teams find the process easy to understand, and the results are just as useful as those from more algebraic methods that take longer and need more input.

Let’s look at three steps in a practical approach: setting criteria, ranking best-fits and starting evaluation.

Setting Trade Study Criteria

Most tool stack disasters result from discovering a tool doesn’t meet all the requirements as the project unfolds. In my previous life, our IT team went all-in on a big-name ERP application partly because it featured web-based portals for suppliers. My e-business team discovered those portals required that specific port numbers be opened at the firewall, which corporate security immediately rejected. We coded a way out, using Adobe ColdFusion to query the database directly and securing it with LDAP sign-on, but it took months.

You could search for the “information I could have used YESTERDAY” gif, or you could spend time setting the right criteria. Pugh analysis starts with a matrix of criteria rows versus candidate tool columns. What most sources won’t tell you about Pugh analysis is a trick in generating that criteria list:

  • Start with must-have features, covering both current requirements and anticipated future requirements.
  • Research tools (data sheets!) for near-fits against those must-have criteria and create the matrix.
  • Add all the features prominently listed for each of those near-fit tools to the criteria.
  • Search again for more near-fit tools matching features on the extended list, adding to the matrix.

A common mistake is stopping after the first two bullets. Almost every trade study I’ve done discovers more tools that fit the must-have list but don’t emphasize all those features on their websites. It’s a bit time-consuming, because every time you find a feature of interest you must go back and recheck all the other tools on your near-fit list for it.
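To show the mechanics, here’s a minimal sketch of that criteria-building loop in Python. The tool names, features and coverage are entirely hypothetical; the point is that extended features discovered on near-fit data sheets get folded back into the criteria list, and every tool gets rechecked against the full list.

```python
# Minimal sketch of the criteria-building loop (hypothetical tools and features).
# Must-haves come first; extended features found on near-fit data sheets are
# folded back into the criteria list, and every tool is rechecked against it.

must_haves = {"multi-source dashboards", "scheduled reports", "role-based access"}

# Features each near-fit tool advertises on its data sheet (hypothetical).
data_sheets = {
    "Tool A": {"multi-source dashboards", "scheduled reports", "role-based access",
               "anomaly alerts"},
    "Tool B": {"multi-source dashboards", "scheduled reports", "white-label exports"},
    "Tool C": {"scheduled reports", "role-based access", "anomaly alerts",
               "white-label exports"},
}

# Extended criteria are everything the near-fit tools tout beyond the must-haves.
extended = set().union(*data_sheets.values()) - must_haves
criteria = sorted(must_haves) + sorted(extended)

# Recheck every tool against the full, extended criteria list.
matrix = {
    tool: {feature: feature in features for feature in criteria}
    for tool, features in data_sheets.items()
}

for tool, row in matrix.items():
    covered = [feature for feature, has in row.items() if has]
    print(f"{tool}: {len(covered)}/{len(criteria)} criteria covered")
```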

Why add non-requirements? Skip them and you’ll miss innovation opportunities outside the Venn diagram intersection. My recent client study on marketing reporting tools found one touting support for a dream-list feature. Digging into that tool revealed it may not support two of the must-have requirements but does support the others.

Ranking Best-Fits

That’s why the trade study criteria list should visually differentiate must-haves from extended features. I usually shade the extended feature rows, then plot how near-fit tools stack up against all criteria. You can certainly add weighting to features for a more sophisticated analysis, but I find a simple must-have versus nice-to-have usually suffices.

Now start scoring. A strict Pugh analysis uses a plus-same-minus scoring system. Plus means the tool has a clear advantage on that feature, same is same, and minus means that tool is at a disadvantage compared to the others. I sometimes simplify that to an X if a tool has the feature, blank if it doesn’t. This is mostly a data sheet comparison, and you may want to inject other information, such as what you find reading reviews.
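Continuing the hypothetical sketch above (and reusing its matrix and must_haves), a simplified scoring pass might look like this: rank on must-have coverage first and use extended features to break ties.

```python
# Continues the sketch above: simplified Pugh-style scoring (X/blank rather than
# plus-same-minus), reusing the hypothetical `matrix` and `must_haves` defined earlier.

def rank_best_fits(matrix, must_haves):
    scores = []
    for tool, row in matrix.items():
        must_hits = sum(1 for feature, has in row.items() if has and feature in must_haves)
        nice_hits = sum(1 for feature, has in row.items() if has and feature not in must_haves)
        scores.append((tool, must_hits, nice_hits))
    # Rank on must-have coverage first; extended-feature coverage breaks ties.
    return sorted(scores, key=lambda s: (s[1], s[2]), reverse=True)

for tool, must_hits, nice_hits in rank_best_fits(matrix, must_haves):
    print(f"{tool}: {must_hits}/{len(must_haves)} must-haves, {nice_hits} extended features")
```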

What you’ll find is a best-fit list, and it probably won’t be perfect. You’d like to see a couple of tools capturing most or all of the must-haves, and a few more capturing some of them plus extended features for contrast. As an example, here’s the trade study matrix for that marketing reporting tool assessment I mentioned, with the vendor names omitted.

Starting Evaluation

Notice this doesn’t have cost information yet. Cost biases technical decisions if applied too early, and feature coverage doesn’t necessarily correlate with cost. Vendors also don’t price their stuff the same way. Some price by developer seats, some by data source integrations, some by monthly tracked users (MTUs) on the web.

With a best-fit list in hand, it’s time to discuss the findings with your team and add information for evaluation. Confirm the scoring, check the vendor pricing models, ask the vendors about unclear feature support and download those free trials to see how they work. Consider how to cover those uncovered must-have features, or whether they even are must-haves if few tools support them. Maybe swapping out another tool in the tool stack gets better coverage. Think about the innovation those extended features may enable.

Want simpler, faster, actionable analysis without a massive RFP? Trade studies do that. I think it’s valuable to have one person on your team be the keeper of trade studies, someone skilled at interviewing for requirements and refereeing debates during evaluation. You may want to bring in a consultant skilled at competitive research and analysis to help develop your process. In any case, adapt your approach until trade studies help you make better selections—there is no one right answer.

Don Dingee

A technologist who started out working on aircraft and missile guidance systems, Don Dingee founded STRATISET in October 2018 to share his B2B marketing experience. Early in his career Don headed a product marketing team and implemented one of the first e-business strategies at Motorola. For a decade he covered embedded and edge computing, EDA, and IoT technology at Embedded Computing Design and SemiWiki.com. He’s co-author of “Mobile Unleashed”, a history of Arm chips in mobile devices. For fun, Don debates sabermetrics and wrestles his Great Pyrenees dog.
