A Buyer’s Guide to AI and Machine Learning

B2B software sales and marketing teams love the term “artificial intelligence” (AI). AI has a smoke-and-mirrors effect: It sounds impressive. But when we say “AI is doing this,” our buyers often know so little about AI that they don’t ask the hard questions.

In industries like DevTools, it is crucial that buyers understand both what products do and what their limitations are, to ensure those products meet their needs. If the purpose of AI is to make good decisions for humans, then accepting that “AI is doing this” means accepting that we don’t really know how the product works or whether it is making good decisions for us.

When we’re in the buyer role, we often don’t hold ourselves responsible for understanding AI and machine learning (ML) products because these technologies are intimidating and incredibly complex.

This article addresses the limitations of AI and ML so that software buyers can ask the right questions and understand what they are buying.

The Test Oracle Problem

One limitation of some AI or ML products is that for certain applications of the technology, there is no source of absolute truth against which to measure the accuracy of the output. For example, neither humans nor machines know how to produce the perfect set of end-to-end tests for any given application. This is the test oracle problem: There is no objective standard of truth. No one wants to introduce this kind of uncertainty into their sales process. Yet our buyers deserve well-informed answers about our products.

As a buyer, you need to understand the intended advantage of your seller’s AI product before making a purchase decision. Is it meant to make a decision that is more accurate—against an objective standard—than a human? Is it meant to make a faster decision with less cost? Or introduce an alternative methodology that uses new data in a new way? Answers to these questions influence how you will use the product and what value it provides.

AI Versus ML

Though AI is commonly used to mean “any machine that uses math to make decisions,” true AI is self-taught: It uses a neural network that mimics the neurons in a human brain, which allows it to teach, update and evolve itself. Because of this, true AI is difficult to build and is often experimental rather than commercial.

More often, what’s being described when we say AI is actually ML. ML is human-taught: Machines learn through human feedback using a probabilistic decision-making process that improves via ongoing correction. Machines take in data, run algorithms against it and output a decision — or series of assertions — based on probabilities. Humans correct the machine by telling it whether its assessment was accurate, and the machine updates. As they receive accuracy feedback, machines learn to make better decisions. And because ML is based on probabilities, it will sometimes make the wrong decision.
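The feedback loop described above can be sketched in a few lines. This is a hypothetical, minimal illustration — not any vendor’s actual implementation — in which the machine predicts the more probable label from the data it has seen, and each human correction shifts its future probabilities:

```python
from collections import defaultdict

class FeedbackClassifier:
    """Minimal human-in-the-loop learner (illustrative only): it predicts
    from observed probabilities and updates when a human corrects it."""

    def __init__(self):
        # Start each feature with one imaginary "pass" and "fail" so early
        # probabilities aren't 0 or 1 (a Laplace prior)
        self.counts = defaultdict(lambda: {"pass": 1, "fail": 1})

    def predict(self, feature):
        c = self.counts[feature]
        p_pass = c["pass"] / (c["pass"] + c["fail"])
        # Probabilistic decision: choose the more likely label
        return ("pass" if p_pass >= 0.5 else "fail"), p_pass

    def feedback(self, feature, true_label):
        # Human correction: record the actual outcome so future
        # predictions for this feature shift toward it
        self.counts[feature][true_label] += 1

clf = FeedbackClassifier()
for _ in range(8):
    clf.feedback("checkout-flow", "fail")  # human repeatedly corrects
label, p = clf.predict("checkout-flow")   # now predicts "fail"
```

The point of the sketch is the shape of the loop — predict, get corrected, update — not the particular model; commercial ML products use far more sophisticated models, but the human feedback cycle is the same.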

Based on how you plan to use a product, you need to determine how rigorous its accuracy needs to be. How often a machine can make the wrong decision and still serve its purpose is application-specific: Self-driving vehicles must be nearly perfectly accurate to be adopted, while paralegal ML toolsets can likely tolerate lower accuracy. How accurate does your product need to be?

Asking the Right Questions

Regardless of how you plan to use a product, it’s important to ask the right questions to understand the product and build resiliency around its accuracy levels. The next time a seller tells you “AI is doing this,” you can ask the following: 

  1. Is this product an ML product? Does it need to be ML to get a meaningful result? To be ML, a product needs to learn through human feedback, not just make decisions using probabilities. Do you just need a product that uses logic to make decisions, or a product that improves in accuracy over time?
  2. How is the accuracy of this product calculated? You won’t know if the machine is more accurate than humans if you don’t know the conditions used to calculate accuracy. If a machine is 30% more accurate than humans, who assessed this accuracy and how did they determine this?
  3. How do you know when the product makes the wrong decisions? Any ML product will sometimes produce the wrong output. Typically, a seller’s most successful customers have already adopted business processes to build resiliency to this wrong output. If so, the seller can help you adopt them as well.
  4. In its current state, how often does the product make the wrong decisions? Knowing the frequency of mistakes and the stakes of those mistakes will be crucial to deciding how you use the product, and whether it’s safe to do so at this stage in its development.
  5. How many teaching hours have been put into this product? This number will provide a simple approximation of how much effort has gone into making the product more accurate. A low number can be fine, depending on the application.
  6. How does my usage improve the accuracy of this product? As a buyer, you are an integral part of the machine testing and teaching process. You should be willing to use your data to improve the product’s accuracy, because you want these products to improve in the future.
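To ground question two: An accuracy figure is only as meaningful as the reference labels it is measured against. A minimal sketch of the simplest such calculation, using invented example data:

```python
def accuracy(predictions, ground_truth):
    """Fraction of the machine's decisions that match a human-labeled
    reference set. The result is only as trustworthy as the reference
    labels themselves -- which is what the test oracle problem makes
    hard for some domains."""
    matches = sum(p == t for p, t in zip(predictions, ground_truth))
    return matches / len(ground_truth)

# Invented example: machine decisions vs. human-reviewed labels
machine = ["pass", "fail", "pass", "pass", "fail"]
human   = ["pass", "fail", "fail", "pass", "fail"]
rate = accuracy(machine, human)  # 4 of 5 agree -> 0.8
```

When a seller quotes a number like “30% more accurate than humans,” this is the calculation to interrogate: Who produced the ground-truth labels, how large was the reference set, and does it resemble your data?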

Why Knowing AI and ML Matters

Not only is there a lot of “AI” that isn’t AI, but there is also algorithmic technology that isn’t ML. It is thus essential for buyers to know enough to ask the right questions and understand how these products make decisions.

There are limitations to all ML products, though the limitations differ by product and by how the product is applied. When a product’s accuracy levels are unknown, all you can do is ask whether its methodology is valid for decision-making: Does it have access to better data than humans? Can it make smarter and faster decisions than humans with this data? If the answer is yes, you should consider buying the product rather than having your people do the work.

Erik Fogg

Erik Fogg is the co-founder and chief revenue officer at ProdPerfect, where he oversees revenue and growth. Having studied mechanical engineering and political science at MIT, Erik moved from a consulting role at Stroud International to an operations and sales executive position at the startup HelmetHub before co-founding ProdPerfect in 2018. In his free time, Erik moonlights as a political author, business-book ghostwriter and consultant for private equity and biotech. He aims to use artificial intelligence to help large organizations and societies consistently distinguish truth from falsehood and make better fact-based decisions.
