AI has been in the news a lot lately. We’ve all heard about ChatGPT, and you’ve no doubt encountered the results of a variety of other recent advances in AI technology, such as Lensa. But there’s also a much quieter and less visible shift happening behind the scenes during the development of your favorite apps—AI-directed mobile app testing. Let’s take a look.
Behind the Scenes
It’s no secret that every product, no matter the industry, goes through a gauntlet of tests before it’s unleashed on the general public. This goes for everything from a simple object like a cardboard box to a far more complex product, such as the latest electric vehicle.
The same goes for the mobile apps on your phone. And if you haven’t given that much thought, then good! It means that the QA team behind the app’s functionality did its job, producing a seamless experience that functioned without a hitch, exactly as you wanted and expected.
But that’s not always the way it goes, and mobile app testing is only growing more complex. That’s a growing problem that AI will eventually resolve.
If your app didn’t perform as expected, you—and many other users like you—may delete it and move on to the next app that promises to do the same thing. While this may not seem like a big deal to you as the user, it’s a total disaster for the tireless teams working behind the scenes to ensure a successful app launch or update.
The adage ‘You only get one chance to make a first impression’ is very true in the mobile app market. It’s a big, crowded and cutthroat mobile app world out there, and users simply have no incentive to put up with an app that doesn’t do exactly what it’s supposed to do. After all, for every single app, there are often several others standing by to scoop up your unhappy (former) users.
This issue is by no means restricted to small development teams strapped for resources. I’m sure you can think of some high-profile software releases from major companies that have launched in a disastrous state over the past few years (particularly in the tremendously profitable arena of video games).
So why is it that some teams cut corners when it’s time for testing, only to dearly wish they hadn’t?
Well, the answer is pretty simple: Until very recently, mobile app testing has been time-consuming, complex and bewildering, all too frequently delaying releases and overwhelming QA teams.
To understand this ongoing problem, let’s take a look at what this kind of mobile app testing entails and how this landscape has recently started to shift, as well as what’s on the horizon to make it all more efficient.
Cutting Corners = Consequences
Once upon a time—in an era that seems like it was really just yesterday—software testing was performed entirely by hand. This was possible because quality assurance personnel only needed to consider a very limited set of variables: There were only so many use cases and form factors, and programs were only expected to do so much.
It’s hard to imagine testing a modern app manually and having the result be effective. The approach is only reasonable under very limited circumstances. Nowadays, teams use automated tests, which save them a tremendous amount of time, effort and resources.
Yet these automated tests come with their own problems. Most of them are hand-coded, meaning that they must be set up and adjusted by specialists with a background in computer programming. Such specialists are not only costly and in high demand, but they’re also needed for later stages of the software development cycle, such as building new product features and overseeing other parts of production.
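To make that concrete, here’s roughly what a hand-coded automated test often looks like: a minimal sketch using the Appium Python client to exercise a hypothetical signup flow. The server URL, device name, app path and element IDs are all illustrative, and exact option and capability names vary by Appium server and client version.

```python
# A minimal, hand-coded UI test for a hypothetical signup flow.
# The server URL, device name, app path and element IDs are all illustrative,
# and capability names can vary between Appium server/client versions.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.set_capability("appium:deviceName", "emulator-5554")          # assumed local emulator
options.set_capability("appium:app", "/path/to/app-under-test.apk")   # assumed build artifact

# Connect to a locally running Appium server (URL is an assumption).
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    # Walk through the signup screen and assert the happy path.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "signup_email").send_keys("test@example.com")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "signup_submit").click()
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "welcome_banner").is_displayed()
finally:
    driver.quit()
```

Even a test this small has to be written and maintained by someone comfortable with code, and it breaks whenever element IDs or flows change, which is exactly where the cost adds up.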
And this is where companies start to cut corners and miss problems, both large and small. Instead of what is known as shifting left (that is, testing as early in the cycle as possible), many teams, unfortunately, view testing as a burden or afterthought. Only when things go haywire and they have a full-blown crisis on their hands does the need for rigorous app testing become apparent.
Testing, Machine-Made
This is where machine learning and artificial intelligence enter the picture. For several years, we’ve known that testing performed by machines will become more and more necessary to handle increasingly complex apps, and it won’t be long before the cost of these complex testing scenarios simply becomes too much for many developers.
Fortunately, advances continue to appear on the horizon. These include the much-discussed no-code revolution, which abstracts away much of the noise associated with software development and opens the door to a much wider testing base.
AI- and machine-learning-based testing is also on the horizon, yet there’s a significant challenge that must be overcome before this can happen at speed and scale. To develop mobile app tests with true intelligence and maximum efficiency, machines must be capable of autonomously completing the following five steps (a simplified sketch of how they fit together follows the list):
- Comprehension: The machine must be able to understand what’s changing in an app. For example, it must be able to detect differences and issues in a signup sequence or a changed checkout screen.
- Impact analysis: The machine must know what test cases can cover that change. This sort of identification can be quite tricky.
- Test case generation: Next, the machine must be able to generate test cases related to steps one and two. To do this, it must draw from several data sets. These include:
- Existing test collateral and test cases, along with the product code
- Data from actual customer usage, which informs steps one and two above
- What user environments look like, including information like how many users are active in the United States and in Europe
- Automatic execution: Perhaps the most crucial need is that the machine can perform testing entirely by itself. With the ability to self-execute tests, we’ll have reached a new era in testing.
- Reports: Before the testing cycle can begin again, the machine must be able to produce detailed reports that make sense to its human handlers. When it comes to test results, the more detail the machine can provide in as approachable a manner as possible, the better.
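To illustrate how these five steps might fit together, here’s a highly simplified sketch of such a pipeline in Python. Every class name, data source and piece of logic below is hypothetical; it shows only the shape of the loop, not how any real testing platform implements it.

```python
from dataclasses import dataclass
from typing import Dict, List

# Everything below is a hypothetical sketch of the five-step loop described
# above -- it is not a real testing platform, just the shape of one.

@dataclass
class AppChange:
    """Something that changed in a new build, e.g. a reworked checkout screen."""
    screen: str
    description: str

@dataclass
class TestCase:
    name: str
    steps: List[str]
    passed: bool = False

class AutonomousTestPipeline:
    def comprehend(self, old_build: Dict[str, str], new_build: Dict[str, str]) -> List[AppChange]:
        # Step 1 -- Comprehension: diff two builds to find what changed.
        return [AppChange(screen, f"{screen} differs between builds")
                for screen in new_build if old_build.get(screen) != new_build[screen]]

    def analyze_impact(self, changes: List[AppChange], existing: List[TestCase]) -> List[TestCase]:
        # Step 2 -- Impact analysis: keep only the tests that touch a changed screen.
        changed = {c.screen for c in changes}
        return [t for t in existing
                if any(screen in step for step in t.steps for screen in changed)]

    def generate_tests(self, changes: List[AppChange], usage: Dict[str, str]) -> List[TestCase]:
        # Step 3 -- Test case generation: cover each change, guided by real usage data.
        return [TestCase(name=f"auto_{c.screen}_flow",
                         steps=[f"open {c.screen}",
                                f"repeat top user action: {usage.get(c.screen, 'tap')}"])
                for c in changes]

    def execute(self, tests: List[TestCase]) -> List[TestCase]:
        # Step 4 -- Automatic execution: run every test with no human in the loop.
        for t in tests:
            t.passed = True  # placeholder for actually driving a device or emulator
        return tests

    def report(self, tests: List[TestCase]) -> str:
        # Step 5 -- Reports: summarize results in a form humans can act on.
        failed = [t.name for t in tests if not t.passed]
        return f"{len(tests)} tests run, {len(failed)} failed: {failed or 'none'}"

if __name__ == "__main__":
    pipeline = AutonomousTestPipeline()
    changes = pipeline.comprehend({"checkout": "v1"}, {"checkout": "v2", "signup": "v1"})
    impacted = pipeline.analyze_impact(changes, [TestCase("manual_checkout", ["open checkout", "pay"])])
    generated = pipeline.generate_tests(changes, {"checkout": "apply coupon"})
    results = pipeline.execute(impacted + generated)
    print(pipeline.report(results))
```

In a real system, each of these stubs would be backed by models trained on product code, usage analytics and environment data, but the overall loop (comprehend, analyze, generate, execute, report) stays the same.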
Problems and Solutions
Some testing platforms are now capable of many of the tasks I highlight above, and quality assurance teams that move away from hand-coding tests and toward the future of testing stand to benefit from these solutions (particularly when their competitors do not).
Partially enabled by AI and ML, no-code solutions provide a stopgap until machines can really get a handle on the above five steps. At this point, it’s really just a matter of time: We can already see glimpses of a wholly automated testing cycle where human input is limited and machines do the heavy lifting.
It’s easier than ever to imagine a world where there’s no need for testing to be an afterthought and software disasters have become a thing of the past.