How Performance Testing Stands Apart
Performance testing is fundamentally different from other types of tests, especially functional testing.
Functional testing is about designing test cases (scripted or creative) to ensure that the software meets the needs, expectations, and requirements of the customer or client. Functional test cases typically focus on what the system does (business rules, calculations, etc.) and whether it exhibits the specified behaviors. The same approach applies to certain types of nonfunctional testing, such as usability testing, where there is a clear expectation of results.
Both of these types of testing typically have some form of “oracle” that can be used to determine whether the system is functioning (behaving) as expected. Test oracles can take many forms: requirements, stories, use cases, architecture, design, or documented human expectations (such as the Web Content Accessibility Guidelines from the World Wide Web Consortium). These defined expectations determine whether a test passes or fails. Once a test case passes, the continuing expectation is that it will pass every time afterward; if it does not, there is a problem: an incident or defect.
Performance testing does not use test cases as we typically see them. You cannot simply fire generalized bulk requests at a system or application and expect to obtain information useful for adjusting and tuning it. A proper performance test requires the design of an operational profile (OP), which focuses not just on load and volume but on the characteristics that compose that load and volume.
An accurate operational profile, also known as a load definition, is essential to creating a successful performance test, as it defines the levels of activity (average load, peak load, etc.) and the characteristics that compose those load levels. A profile contains the following minimum set of elements:
- Identification and quantification of the number and type of activity generators (people, devices, other applications), for each load level (average, peak, etc.) to be implemented
- Identification of activity by each type of generator
- Frequency and duration of each activity to be used in the load test (by generator type)
- Distribution and mix of activities generated by each source for each load level defined
- Variability of the activities within the defined period (concurrency or throughput levels)
- Patterns of activity to be replicated, including the variable data needed to ensure that the test does not create artificial effects, such as overuse of caches
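The elements above can be captured in a simple data structure. The following is a minimal sketch of an operational profile; the class names, generator types, and numbers are illustrative assumptions, not prescribed by any tool or standard.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    frequency_per_hour: float  # how often one generator performs the activity
    avg_duration_s: float      # typical duration of one execution
    mix_pct: float             # share of this generator's total activity

@dataclass
class GeneratorType:
    name: str                  # people, devices, other applications
    count: int                 # number of generators at this load level
    activities: list

@dataclass
class LoadLevel:
    name: str                  # "average", "peak", etc.
    generators: list

# Hypothetical peak-load profile for an online storefront
peak = LoadLevel("peak", [
    GeneratorType("web_user", count=500, activities=[
        Activity("browse_catalog", frequency_per_hour=12, avg_duration_s=30, mix_pct=70),
        Activity("checkout", frequency_per_hour=2, avg_duration_s=90, mix_pct=30),
    ]),
    GeneratorType("partner_api", count=20, activities=[
        Activity("inventory_sync", frequency_per_hour=60, avg_duration_s=5, mix_pct=100),
    ]),
])

# Sanity check: each generator's activity mix should total 100%
for g in peak.generators:
    assert sum(a.mix_pct for a in g.activities) == 100
```

A profile expressed this way can be defined once per load level (average, peak, etc.) and then translated into the scripting format of whatever load-generation tool is in use.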
A performance test uses load to evaluate whether an application or system meets specified performance targets (response times, resource utilization, scalability, etc.) under defined levels of activity (load and volume). Unlike functional tests, repeated executions of a load test against an unchanged system or application should produce results similar to one another; for example, multiple average-load tests should all look alike. The results should not be identical, however; if they are, something is fundamentally wrong with the load test.
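This "similar but not identical" expectation can itself be checked. The following sketch compares the mean response times of two runs within a relative tolerance and treats byte-identical samples as a red flag; the function name, tolerance value, and sample data are illustrative assumptions.

```python
import statistics

def runs_consistent(run_a, run_b, tolerance=0.10):
    """Compare response-time samples (in seconds) from two load-test runs.

    Returns True when the mean response times agree within the given
    relative tolerance. Identical samples raise an error, because real
    load tests always show some natural variation.
    """
    if run_a == run_b:
        raise ValueError("identical results: the load test is likely not varying its data")
    mean_a = statistics.mean(run_a)
    mean_b = statistics.mean(run_b)
    return abs(mean_a - mean_b) / mean_a <= tolerance

# Two average-load runs: similar, but not identical
run1 = [0.42, 0.45, 0.40, 0.47, 0.43]
run2 = [0.44, 0.41, 0.46, 0.42, 0.45]
print(runs_consistent(run1, run2))  # True under a 10% tolerance
```

In practice the comparison would cover more than the mean (percentiles, error rates, resource utilization), but the principle is the same: consecutive runs of the same profile should land in the same neighborhood without matching exactly.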