Achieving One Results Dashboard for Manual and Automated Tests
Most projects involve a combination of manual and automated testing to provide a holistic approach to software quality. In some circumstances, the architecture (data, services) is tested with automation while the user interface functionality is all tested manually. In others, the initial functional testing is done manually, and the regression testing is automated after the manual test design has been “hardened” via previous execution.
Regardless of how manual and automated tests are combined, the end result is two different methodologies with two different execution cycles. How do we connect the disparate results into one dashboard for a single view of testing metrics? Most of the major tools used for project and test management provide solutions to this problem.
Let’s start with Microsoft Team Foundation Server (TFS), now known as Visual Studio Team Services when hosted in the cloud. TFS can serve as a test repository for both manual and automated tests. With a Test Professional license, each test case in TFS has fields identifying the associated automated test’s name, the repository where it is stored, and its type. This process works best when the automation is also written in a Visual Studio solution, but other code repositories (Git, Bitbucket, etc.) can be used.
When a test run is executed in TFS, the tool knows from these settings that the test is automated and runs it. After execution completes, the pass/fail results flow back into the same test suite structures and dashboards that exist for all the manual tests in the repository.
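Because both manual and automated runs land in the same repository, they can also be pulled together programmatically. Below is a minimal sketch, assuming a TFS 2015+ or VSTS instance and a personal access token: it reads every test run from the Test Runs REST endpoint and prints one combined pass/fail summary. The URL, project, and token are placeholders, and field names may vary slightly by TFS version, so verify them against your server’s API documentation.

```python
# Sketch: one combined view of manual and automated run results from TFS/VSTS.
import requests

BASE_URL = "https://myaccount.visualstudio.com/DefaultCollection/MyProject"  # placeholder
PAT = "my-personal-access-token"  # placeholder

resp = requests.get(
    f"{BASE_URL}/_apis/test/runs",
    params={"api-version": "2.0"},
    auth=("", PAT),  # PAT supplied as the basic-auth password
)
resp.raise_for_status()

for run in resp.json().get("value", []):
    kind = "automated" if run.get("isAutomated") else "manual"
    print(f"{run['name']} ({kind}): "
          f"{run.get('passedTests', 0)}/{run.get('totalTests', 0)} passed")
```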
The HP ALM test repository (formerly known as Quality Center) provides similarly seamless integration between manual and automated tests within the HP family of products. In this case, Unified Functional Testing (UFT), formerly known as Quick Test Professional (QTP), automation is referenced in the same manner as manual tests within ALM. Tests can be intermingled in the Test Plan repositories by release, functional area, etc. This means the automated test results are displayed in the same format as manual test results in Test Lab execution.
Other automation frameworks, whether open source like Selenium or scriptless products like Ranorex, can be integrated into HP ALM using its virtual API test type. When creating an automated test, the VAPI-XP-TEST type is selected.
This test is defined as a console application, meaning it requires the parameters for a command-line call to trigger it. Any automation platform that provides that functionality can be integrated into the Test Plan and Test Lab repositories in ALM. Because this extra layer exists, some VBScript must be written and incorporated into ALM for each test to map the test inputs/outputs and results into the Execution and Run Grids. If you have thousands of automated tests, this might not be the most attractive option…but it is possible and highly valuable to incorporate your results.
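As a rough illustration of the external side of that integration, and assuming the automation can be packaged as a command-line executable, the script ALM’s VBScript wrapper calls might look like the following. The script name, arguments, and PASS/FAIL output convention are hypothetical; the real automation call would replace the placeholder check.

```python
# Hypothetical console entry point for one automated test. ALM's VAPI-XP
# VBScript wrapper would shell out to this script, read its output, and map
# the result into the Run Grid.
import argparse
import sys

def run_login_test(url: str, user: str) -> bool:
    """Stand-in for the real automation (Selenium, a Ranorex CLI run, etc.)."""
    return bool(url and user)  # placeholder check

def main() -> int:
    parser = argparse.ArgumentParser(description="Command-line test runner")
    parser.add_argument("--url", required=True)
    parser.add_argument("--user", required=True)
    args = parser.parse_args()

    passed = run_login_test(args.url, args.user)
    # Print a status the VBScript wrapper can parse, and use the exit code
    # as a second signal.
    print("PASS" if passed else "FAIL")
    return 0 if passed else 1

if __name__ == "__main__":
    sys.exit(main())
```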
For testing projects that want to avoid the licensing costs that can come with the Microsoft or HP products, Jira has several test case management plug-ins. Two of the most popular are SynapseRT and Zephyr, both of which provide mechanisms to integrate external automated test cases and results into the test repository. SynapseRT works very similarly to TFS by providing an Automation Test Reference to the code.
The Zephyr plug-in’s automation support works more like the HP ALM solution. Zephyr provides its own “ZAPI” RESTful interface to communicate with any external application that wishes to access or store data in Zephyr: CI/CD tools, business intelligence, custom reporting, and yes, test automation. Similar to HP ALM, additional code must be wrapped around the automated tests to handle this communication. Unlike HP ALM, however, you are not tied to VBScript: whichever programming language you prefer for API development will do.
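As a sketch, and assuming a server-style Zephyr installation where ZAPI is exposed under /rest/zapi/latest, the wrapper around an automated test might push its result back as shown below. The base URL, credentials, execution id, endpoint path, and the status code mapping are assumptions to confirm against your Zephyr/ZAPI version before use.

```python
# Minimal sketch: after an automated test finishes, push its pass/fail status
# into an existing Zephyr execution via ZAPI.
import requests

JIRA_BASE = "https://jira.example.com"   # placeholder
AUTH = ("automation-user", "api-token")  # placeholder credentials

def report_result(execution_id: int, passed: bool) -> None:
    status = "1" if passed else "2"  # assumed ZAPI codes: 1 = PASS, 2 = FAIL
    resp = requests.put(
        f"{JIRA_BASE}/rest/zapi/latest/execution/{execution_id}/execute",
        json={"status": status},
        auth=AUTH,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # e.g. called from the automation framework's teardown hook
    report_result(execution_id=12345, passed=True)
```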
One of the major challenges is presenting one holistic view of software testing results while being stuck with different sources of information and different truths about our testing activities. By finding ways to combine manual and automated test cases and results into one source of truth, our quality messaging becomes easier to deliver and to understand.