Iterative Performance Testing Provides Actionable Results

More often than not, performance testing projects originally planned for a single test execution cycle yield undesired results and morph into multiple haphazard load test runs, each less organized (and more frantic) than the last.  Focus on the overall goal of the performance test (page response time, CPU utilization, etc.) becomes skewed or misplaced.  An intentionally iterative performance testing strategy ensures a greater likelihood of success for goal-based performance test execution.

Iterative Performance Test Execution

The performance test deliverable is an iterative effort in which the performance test is run multiple times over the course of the project.  Between iterations, the performance test solution is refined and expanded.  In this context, “refined” means improvements to the performance test execution and results reporting process, and “expanded” means adding test scripts for new application enhancements and increasing the number of virtual users in the overall load.  These attributes, plus resource plans, are reviewed and adjusted as necessary during the performance test iteration retrospective.  One benefit of this approach is that performance bottlenecks are uncovered earlier in the project timeline, leaving sufficient time for code or infrastructure tuning.

Performance Test Goal

Defining and sharing the goals of each performance test cycle gives stakeholders a high level of confidence going into production deployment by setting expectations for application performance and functional behavior.  Each performance criterion should be goal-based (e.g. page response time with x-number of concurrent users).
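As an illustration, a goal-based criterion of this kind can be expressed as a simple pass/fail check against measured response times. The names, thresholds, and sample data below are hypothetical, not taken from any specific project:

```python
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class PerformanceGoal:
    """A goal-based criterion, e.g. 95th-percentile page response time
    under a stated concurrent-user load (names are illustrative)."""
    name: str
    concurrent_users: int
    max_response_seconds: float

def goal_met(goal: PerformanceGoal, response_times: list[float]) -> bool:
    """True if the 95th percentile of observed response times stays
    within the goal's threshold."""
    p95 = quantiles(response_times, n=100)[94]  # 95th percentile cut point
    return p95 <= goal.max_response_seconds

goal = PerformanceGoal("checkout page", concurrent_users=200,
                       max_response_seconds=3.0)
samples = [1.2, 1.4, 1.1, 2.0, 1.8, 1.3, 2.5, 1.6, 1.9, 1.7]
print(goal_met(goal, samples))
```

Framing each criterion this way makes the “Go/No-Go” call mechanical: a goal either passes or fails against the agreed threshold.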

Performance Test Project Planning

To ensure the greatest probability of success for the performance test, stakeholders need to be engaged early in the project for each milestone: test scheduling, resource planning, environment setup, test execution, results analysis and reporting, and the performance test retrospective.  Each milestone should be defined in a project schedule before the project begins.

Performance Test Execution

The performance test execution may use end users as manual testers in addition to a load testing tool.  When manual testers support a performance test, the duration of the automated load test is generally 15–30 minutes.  The automated load test starts with a ramp-up period from zero to the maximum number of users for the cycle.  After the ramp-up period expires, the manual testers log in to the test application and execute a scripted set of steps, noting any page response times greater than the response time goal (e.g. 3 seconds).  The end users continue to re-run their manual tests until the automated performance test execution concludes.
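The timing of that ramp-up can be sketched with a small helper. This is an illustrative model only (a linear ramp, with made-up durations and user counts), not the scheduling logic of any particular load tool:

```python
def users_at(elapsed_s: float, ramp_up_s: float, max_users: int) -> int:
    """Active virtual users at a given elapsed time, assuming a linear
    ramp-up from zero to max_users over ramp_up_s seconds."""
    if elapsed_s >= ramp_up_s:
        return max_users          # ramp-up complete: full load applied
    return int(max_users * elapsed_s / ramp_up_s)

# With a 10-minute ramp-up to 200 users, manual testers would begin
# their scripted steps at the 10-minute mark, once full load is reached.
print(users_at(300, 600, 200))   # halfway through ramp-up → 100
print(users_at(600, 600, 200))   # ramp-up complete → 200
```

The point of the model is the handoff moment: manual testers should not start until `users_at` has reached `max_users`, so their observations reflect the system under full load.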

When executing the performance test solely with a test tool, the duration of the performance test can be increased and the types of load test scenarios (e.g. Duration Test, Spike Test and Stress Test) can be varied more widely.

  • Duration Test: Gradually log in the entire virtual user group and run the test for a duration of 2 to 6 hours.
  • Spike Test: Simultaneously log on the entire virtual user group, run the test for 15 minutes, simultaneously log off the entire virtual user group, wait 15 minutes, and repeat the process two or three more times.
  • Stress Test: Gradually log in 2–3 times the expected number of concurrent users to determine how long the application and supporting infrastructure will run before the system fails.
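The scenario shapes above can be described as step profiles of (time, active users). As a minimal sketch, here is the Spike Test expressed that way; the function name and defaults are illustrative, not part of any load tool's API:

```python
def spike_profile(max_users: int, cycles: int = 3,
                  on_min: int = 15, off_min: int = 15):
    """Step profile for a Spike Test: the entire virtual user group logs
    on at once, runs for on_min minutes, logs off, waits off_min minutes,
    and the cycle repeats.  Returns a list of (minute, active_users)."""
    steps, t = [], 0
    for _ in range(cycles):
        steps.append((t, max_users))  # everyone logs on simultaneously
        t += on_min
        steps.append((t, 0))          # everyone logs off simultaneously
        t += off_min
    return steps

print(spike_profile(500, cycles=2))
# → [(0, 500), (15, 0), (30, 500), (45, 0)]
```

A Duration or Stress Test would use a gradual ramp instead of instantaneous steps, but the same (time, users) representation applies.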


Load Test Results Reporting

Results are reported in the manner agreed to during the planning phase and delivered at the appropriate level of detail.  The performance test results should be easy to interpret, and the final report should include no metrics that were not agreed to during planning.  Extra data adds little value and can create confusion, delaying the “Go/No-Go” production deployment decision.  SPR makes a recommendation to the client based on the reported load test results, and the client then makes the “Go/No-Go” decision based on input that includes SPR’s recommendation.
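The “only agreed-to metrics” rule can be enforced mechanically when assembling the report. A minimal sketch, assuming hypothetical metric names recorded during planning:

```python
# Metrics agreed to during the planning phase (illustrative names).
AGREED_METRICS = {"avg_response_time_s", "p95_response_time_s", "error_rate"}

def build_report(raw_results: dict) -> dict:
    """Drop any metric not agreed to during planning, so the final
    report contains only the data needed for the Go/No-Go decision."""
    return {k: v for k, v in raw_results.items() if k in AGREED_METRICS}

raw = {
    "avg_response_time_s": 1.8,
    "p95_response_time_s": 2.7,
    "error_rate": 0.002,
    "gc_pause_ms": 41,   # collected by the tool, but not part of the agreed report
}
print(build_report(raw))
```

Anything the tool collected beyond the agreed set stays in the raw data for diagnosis but is filtered out of the stakeholder-facing report.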

Subsequent Load Test Iterations

Generally, the first performance test iteration is run to establish a baseline for system behavior under load.  At the client, end users may (or may not) execute manual test cases during the load test run to determine whether there is noticeable performance degradation.  Once the load test results are reported, reviewed, and approved by the client, attention turns to planning the next performance test iteration and formalizing its goals.  The exact number of performance test iterations should be specified in the Performance Test Plan.

Performance Test Solution Transfer of Ownership

When ownership of the performance test solution is transferred to the client, an official handoff is made, which includes:

  • Documentation sufficient to manage the solution going forward.
  • The load test solution (Visual Studio Load Test, HP Performance Center, etc.) transitioned to the new owner.  The new owner of the solution should have been determined at the start of the load testing project.

A well-planned, goal-based, iterative performance test yields reasonably predictable performance test results and a higher level of confidence for production deployment.