An analysis of more than 340,000 bugs collected by Applause, a provider of a testing platform designed to integrate with a range of DevOps platforms, found that functional bugs accounted for 68% of issues. The research gathered data from 13,000 mobile devices and 1,000 unique desktops running 500 versions of operating systems; the remaining issues were visual (17%), content (9%), crashes (4%) and lag and latency (2%), which collectively account for the other 32%, the report found.
The report also found that screen reader issues comprised 66% of all accessibility bugs, compared to keyboard navigation and insufficient color contrast issues, which accounted for only 12%. In terms of localization, poor and missing translations accounted for more than two-thirds (67%) of bugs, the report found. Based on feedback data that Applause continuously collects from customers, nearly half of organizations (47%) identified currency and number formatting as the most valuable bugs to find when it came to localization.
Overall, organizations ranked the discovery of crashes (75%), functional bugs (61%) and lag and latency issues (53%) as exceptionally valuable, according to Applause.
Luke Damien, chief growth officer for Applause, said that when it comes to testing there is still not enough focus on user and customer journeys. That lack of focus is becoming a larger issue as more organizations invest in digital business transformation initiatives driven largely by mobile applications, he noted. Payments are especially problematic because they often rely on application programming interfaces (APIs) exposed by a third party in a way that results in suboptimal application experiences with a direct impact on revenue, he added.
The challenge is that it’s not entirely clear how far left responsibility for application testing is shifting. In some cases, developers are assuming responsibility; in others, a dedicated testing team is still responsible. Developers, of course, will test applications as they build them, but application experiences on a local machine may not always be replicated in a production environment. Most end users today are not especially forgiving when a mobile application fails to meet expectations: months of development effort can be wasted simply because a function was not tested under production conditions.
Less clear is to what degree testing will be automated in the future. Machine learning algorithms are making it possible to automate testing of both functions and user interfaces. The need for humans to be involved in testing is not likely to be eliminated any time soon, but many of the routine, tedious tasks that limit the rate at which applications can be tested should be sharply reduced in the months and years ahead.
In the meantime, an organization’s entire brand reputation is now tied to the quality of the application experience it enables on a mobile device. Revenue targets can easily be missed if users choose one application over another simply because a function was broken, even for a short time. As such, testing has never been more critical given organizations’ dependency on software.