When users abandon applications or leave critical feedback, it’s often because the application is slow, hangs or freezes, or, worse, isn’t available when needed. In an era of limited, if not zero, patience for app instability, delivery teams should ensure that their performance testing capability is sound. First, though, let’s start with definitions. Contrary to popular opinion, performance testing is not the same as load testing. Load testing checks how a system functions under a large number of concurrent users performing transactions over a period of time, while performance testing is much broader: it examines the responsiveness, stability, scalability, reliability, speed and resource usage of your software and infrastructure.
Performance testing doesn’t always get the attention it deserves. Testers tend to focus on features and component functionality, even on API-driven architectures where performance should be verified just as rigorously. As software development methodologies and application infrastructure evolve, so should performance testing. In Agile teams, performance testing typically runs at least one sprint behind other testing activities: performance testers don’t start until the application’s features are deemed stable, and they operate separately from the delivery team. Instead, performance testers should run tests as soon as new code is available, so that developers receive real-time feedback on issues they can fix immediately.
To ensure a successful strategy in validating the performance of an application, here’s a look at three fundamental performance test categories:
- Endurance: This involves running a transaction or scenario continuously for a period of time to detect memory leaks or anything else that causes a system to slow down.
- Breakpoint: The goal here is to increase the load on a system until it reaches a threshold beyond which it fails. These tests are critical for understanding at what point the system or its underlying components will buckle.
- Scalability: By increasing the number of concurrent users, instances or database size, you can measure how applications will handle growth.
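To make the breakpoint category concrete, here is a minimal, self-contained sketch: it ramps concurrency step by step against a stubbed `send_request` function (a stand-in for a real HTTP call; the latency model and the 300 ms threshold are illustrative assumptions, not measurements) and reports where the p95 latency first violates the target:

```python
import random
from concurrent.futures import ThreadPoolExecutor

LATENCY_SLO = 0.30  # illustrative acceptance threshold: p95 under 300 ms


def send_request(load: int) -> float:
    """Stand-in for a real HTTP call; returns latency in seconds,
    degrading artificially as simulated concurrency grows."""
    return 0.05 * (1 + load / 40) + random.uniform(0, 0.02)


def find_breakpoint(steps=(10, 25, 50, 100, 200)):
    """Ramp concurrency step by step; return the user count at which
    p95 latency first violates the SLO, or None if it never does."""
    for users in steps:
        with ThreadPoolExecutor(max_workers=users) as pool:
            latencies = sorted(pool.map(send_request, [users] * users))
        p95 = latencies[int(0.95 * len(latencies)) - 1]
        print(f"{users:>4} users -> p95 {p95 * 1000:.0f} ms")
        if p95 > LATENCY_SLO:
            return users
    return None
```

In a real breakpoint test the stub would be replaced by calls to the system under test, and each step would run long enough to collect a statistically meaningful sample rather than one request per user.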
Performance Testing: Recommended Tools and Skills
Performance testing is difficult without automated tools that can simulate scenarios and user activity so that you can test comprehensively. Many good tools are free or available as open source. Some of the popular ones include JMeter, Grinder and Locust.io.
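For instance, Locust scenarios are written as plain Python. The sketch below is a minimal, hypothetical locustfile (the endpoints and task weights are assumptions, not taken from any real application); it is executed by the `locust` runner rather than run directly:

```python
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Simulated users pause 1-3 seconds between tasks ("think time")
    wait_time = between(1, 3)

    @task(3)  # weighted: the home page is hit 3x as often as the product page
    def view_home(self):
        self.client.get("/")

    @task(1)
    def view_product(self):
        self.client.get("/products/1")  # hypothetical endpoint
```

Launching it with `locust -f locustfile.py --host https://your-app.example` and choosing a user count and spawn rate drives the scenario against the target host while Locust collects response-time statistics.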
To be successful, performance testers need a DevOps mindset. Technical skills are a must: you need to understand how software behaves and how to interpret hardware and software utilization metrics. Equally valuable is the ability to translate the technical issues you observe into business terms. Tests should always answer a “what if” scenario and occur within the sprint, so learning to apply continuous testing as part of a holistic testing strategy is a great skill to build. Finally, learn to give developers real-time feedback, framed as a hypothesis about which code changes would improve performance.
Guidelines to Make Your Performance Testing a Success
These steps will help structure your performance testing so that it delivers results that support the business success of the application:
- Map your tools and environment. You’ve got to understand the environment infrastructure that will host the application under test. The ability to quickly configure, build and tear down the environment to test is a crucial time-saver.
- Set acceptance criteria. Agreeing on concrete performance goals with all stakeholders will chart your path and let you gauge success. An example metric is page load time for your mobile application.
- Define KPIs. The acceptance criteria will dictate the KPI goals for performance tests. These may include percentile response times, throughputs, garbage collection performance, heap utilization and resource utilization.
- Set up test runs, execute and monitor. After completing the preparatory steps above, you can define test scenarios and create the test cases. Don’t forget to incorporate monitoring. This entails creating alerts in the testing environment to ensure that monitoring services are working as expected.
- Analyze, repair and assess. After each performance test cycle has completed, assess the results against your acceptance criteria and fix any problems immediately. Over time, acceptance criteria will change, and so should your test scenarios and anticipated outcomes.
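As an illustration of the analysis step, the sketch below evaluates one test cycle’s response-time samples against hypothetical acceptance criteria (the 800 ms and 1,500 ms thresholds are assumptions for the example, not recommendations):

```python
import statistics


def evaluate(latencies_ms, p95_slo_ms=800, p99_slo_ms=1500):
    """Check one test cycle's latency samples (milliseconds) against
    acceptance criteria. Threshold defaults are illustrative only."""
    ordered = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile, clamped to the last sample
        return ordered[min(len(ordered) - 1, int(p / 100 * len(ordered)))]

    report = {
        "mean_ms": statistics.fmean(ordered),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
    }
    report["pass"] = (report["p95_ms"] <= p95_slo_ms
                      and report["p99_ms"] <= p99_slo_ms)
    return report
```

Percentiles matter here because a healthy average can hide a slow tail: a cycle where 5% of requests take two seconds fails the p95 criterion even though the mean looks acceptable.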
As with all testing activities, early involvement and collaboration with developers, analysts and business stakeholders will save time and headaches later. The more you know about the technology stack and upcoming application changes, anticipated maturity of the application and evolving business goals, the better prepared you can be for any testing need as it arises.
In the evolving shift to a modern software development environment, performance testing has become central to building confidence in software. It’s time we give it our full attention.