Baseline testing applications at global scale is smart. Knowing the baseline for each of those applications means your team can see, at a moment’s notice, when apps are not performing as they should.
A large global property management firm wanted to measure the performance of five critical applications prior to production implementation for a new major client. Understanding the performance of the five applications was essential for the firm’s “go/no go” decision.
By baselining response time in a preproduction global environment, SPR Consulting set out to provide visibility into potential performance issues in specific regions. Performance testing would allow SPR not only to test the robustness of the applications, databases, and web and application servers, but also to simulate the expected user loads from worldwide locations.
Carefully designed tests simulate traffic and usage from around the world
After a detailed review of the firm’s performance objectives, SPR Consulting recommended a performance test strategy that captured baseline page load metrics of all five applications from multiple locations within Asia Pacific, Europe, and the United States. These locations were expected to have the highest usage of the applications.
Three test scenarios were identified for each of the five applications, and test cases were prepared. To do this, SPR and the client determined the user loads to simulate and the response-time metrics to capture.
For varying user counts, the goal was to capture response time in seconds: the time it takes the server to deliver all resources to the application user’s browser. Response time does not include page rendering, which depends on the user’s specific network and laptop.
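The metric above can be illustrated with a short sketch: time a request from the moment it is issued until the full response body has arrived, so server and network delivery are measured but browser rendering is not. This is an illustrative Python sketch, not the tooling the project actually used.

```python
import time
import urllib.request


def measure_response_time(url: str, timeout: float = 30.0) -> float:
    """Seconds from request start until the full response body arrives.

    Captures server and network delivery time only; browser page
    rendering is intentionally excluded, matching the metric described
    above.
    """
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # drain the body so transfer time is included
    return time.perf_counter() - start
```

Calling this repeatedly against a page under different conditions yields the per-request timings that a baseline is built from.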
Test automation accelerates script development
SPR Consulting used Microsoft Visual Studio Enterprise Edition with its integrated Load Testing tool. We also applied our Visual Studio test automation framework as an accelerator for test script development. Our framework enables script reusability, maintenance, and reporting.
A test controller and test agents were used to execute scripts in various global offices to capture local user traffic. The performance test scripts were executed against the UAT environment during scheduled time periods.
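The actual scripts were Visual Studio web and load tests executed by a controller and remote agents. As a language-neutral illustration of the underlying idea, namely concurrent virtual users issuing timed requests and summarizing the results, here is a minimal Python sketch; the URL, user counts, and summary statistics are placeholders, not the client’s configuration.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def timed_get(url: str) -> float:
    """Seconds from request start to full response body received."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start


def run_load(url: str, users: int, requests_per_user: int = 1) -> dict:
    """Simulate `users` concurrent virtual users and summarize timings."""
    total = users * requests_per_user
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = sorted(pool.map(lambda _: timed_get(url), range(total)))
    return {
        "users": users,
        "mean_s": statistics.mean(times),
        "p95_s": times[int(0.95 * (len(times) - 1))],
    }
```

Running `run_load` at increasing user counts (for example 25, 50, then 100) produces the response-time-versus-load data points that a baseline report summarizes.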
Test results and analysis help client make “go/no go” decision
Custom summary reports were created after each performance test run to highlight any response-time issues that might require infrastructure changes, application optimization, or changes to the business process flows to reduce the number of steps. For each step within the test script, response time (in seconds) was reported for varying user loads, identifying any steps whose response time degraded as the number of users increased. A comparison across test steps also revealed any steps with higher response times than others.
With this information, SPR’s customer determined that the performance of each of the five applications was acceptable.