
Writing Manual Tests for Easy Automation

Author: SPR | Posted In: Digital Transformation, Testing

The most technologically mature Agile teams practice test- or behavior-driven development, where most (if not all) tests are automated in parallel with development and code is written to pass those tests. This requires the right staffing, the right process maturity, and, most importantly, the right software product. Testing of web or Windows forms-based software can be almost fully automated; an embedded system, or anything with an extensive manual user interface, will find manual testing far harder to avoid.

No matter the product, some portion of the manual functional and regression tests can be automated. Automation platforms exist for virtually every architecture and operating system, and regardless of the type of system being tested, the goal is worth pursuing. The more that basic system functionality and repetitive regression tests are automated, the more manual testers can focus on business workflows, defect exploration, negative conditions, and the other nooks and crannies that turn a system from decent to robust.

In an Agile process, the velocity of development and the churn of releases at the end of every sprint will overwhelm a manual tester without some form of automation. If a tester has to manually execute tests for new functionality, verify bug fixes, and run the system through full regression every two weeks, the pace will eventually wear the test team out and let defects slip through. In Agile, poor quality is a small snowball at the top of a powder-covered peak; once the ball starts rolling downhill, gaining size and speed, the odds of carrying it back to the top of the mountain quickly drop to zero. The only way to stay on track is to take the manual tests and automate as much as possible.

Even with a commitment to automation, however, the path to converting manual tests can be blocked before you begin if those tests are not written in a way that makes them automatable. The automation developer uses manual test scripts as a blueprint for writing code, and that blueprint is worthless if the manual test case is unclear, lacking detail, or overly general.

Here are some examples:

Specify Precise Test Steps

Everyone has written a test step where the expected result is “On-screen display is correct”. That could mean a graphic renders correctly in a web browser, accurate financial data appears in a table that fits the screen, or the right follow-up screen appears when the user clicks the “Next” button. The word “correct,” however, means nothing without context and knowledge of the application under test. The person writing the manual test may know what correct looks like; it should not be assumed that the person developing the automation script (or another manual tester, for that matter) does. Computers do not handle generalized context and missing information well. The manual test should state precisely what “correct” means, preferably in terms that translate easily into labels, values, metadata, and other items an automated script can check.
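As an illustration, here is a minimal sketch (in Python) of how a precise expected result translates directly into automated checks. The page object and its helpers (get_title, get_account_rows) are hypothetical placeholders for whatever driver the automation platform provides; the point is that every assertion maps back to a concrete statement in the manual step.

    # Vague manual step: "On-screen display is correct" -- nothing here can be asserted.
    # Precise manual step: "Title reads 'Account Summary', 10 rows are shown,
    # and every balance is formatted as currency."
    def verify_account_summary(page):
        assert page.get_title() == "Account Summary"
        rows = page.get_account_rows()
        assert len(rows) == 10
        for row in rows:
            assert row["balance"].startswith("$")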

Supply All Data

A specific example of the above is missing test data. A test step stating “Log in as an administrative user” requires the tester to know the account name and password of an administrator. That data has to be readily available, and it is not a testing best practice to hard-code the values into the test step, just as hard-coded parameters are a no-no when developing software. A table of applicable accounts should be attached as part of the test script. Automation platforms do very well with data-driven test scripts…a manual test can, and should, be designed the same way.
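Here is a minimal sketch of what that looks like on the automation side, assuming a hypothetical accounts.csv attached to the test script with role, username, and password columns:

    import csv

    def load_accounts(path="accounts.csv"):
        # Read the account table attached to the test script, one row per account.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def administrative_accounts():
        # Every account the test data flags as an administrator.
        return [a for a in load_accounts() if a["role"] == "administrator"]

The manual step “Log in as an administrative user” then points at a row in that table rather than at a value buried in the step text, and the automated script reads the very same table.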

Loop Through Steps Using Data

Let’s say there is a system with five different types of accounts, and there is a test to verify that each type of user sees a list of customer accounts when they click an “Accounts” tab from the Home Page. There will be a set of test steps to verify login, navigation to the Home Page, and display of the account list. If a final step to log out is included, that’s four steps. Many times, however, this particular manual test is written as a cut-and-paste set of slightly modified steps rather than as a “repeat for” loop: 4 x 5 = 20 steps, instead of Step 5 stating “Repeat Steps 1 through 4 as User #2”, Step 6 stating “Repeat Steps 1 through 4 as User #3”, and so on. The loop not only streamlines data entry in a test execution tool like Microsoft Test Manager or HP Quality Center, but also clearly signals to the automation developer that there is an easy way to read account information from a table (since the test author read the previous section of this blog) and run the same set of steps repeatedly.
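Expressed in automation code, the “repeat for” version is one short, parameterized test rather than twenty steps. The sketch below uses pytest’s parametrize as one way to write the loop; the ACCOUNT_TYPES list and the step helpers (log_in, go_to_home_page, open_accounts_tab, log_out) are hypothetical stand-ins for the data table and keywords the manual test would supply.

    import pytest

    ACCOUNT_TYPES = ["admin", "billing", "sales", "support", "read_only"]  # from the attached data table

    @pytest.mark.parametrize("account_type", ACCOUNT_TYPES)
    def test_accounts_tab_lists_customers(account_type):
        user = log_in(account_type)          # Step 1: log in as this type of user
        home = go_to_home_page(user)         # Step 2: navigate to the Home Page
        accounts = open_accounts_tab(home)   # Step 3: click the "Accounts" tab
        assert len(accounts) > 0             # Expected result: customer accounts are listed
        log_out(user)                        # Step 4: log out

Steps 1 through 4 appear exactly once; the test runner repeats them for each account type, just as the manual script instructed.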

It takes an underappreciated amount of skill to write clear, concise, reusable manual test scripts. At SPR, we strive to encourage and develop those skills across the software testing profession while finding new ways to support the other members of an Agile team. If we treat the automation developer as a customer while writing manual tests, both deliverables will be of higher quality.