Software Testing: Start Test Design with the Data
Outside the profession especially, the term software testing often evokes the user interface: what buttons does someone push to make sure the software “works right”? What will “make it break”? That framing shapes not only test execution, but test case design as well.
In the days of waterfall software development, the tester received a requirements document stating, “The system shall do <something it should do>.” It was the tester’s job to figure out the steps required to make that specific thing happen.
In modern software projects, this approach is no longer enough. Today’s tester can’t wait until the user interface is fully designed, implemented, and available before writing and executing tests. When a complex system requires multiple testing methodologies for verification and validation, the cost of that waiting period is magnified. A tester must start from another perspective, the data, and carry that perspective through every aspect of testing.
What data?
This starts with the initial review of the requirements (or use cases, or user stories, or whatever form the project uses) and their acceptance criteria. The tester should not think, “What steps do I take to verify these criteria?” The tester needs to think, “What data goes into the system to make this happen, and what data comes out to prove it happened correctly?” This means the test design documentation starts not with a list of functional areas or with categories and names of test cases. It begins with a set of data tables identifying things such as the following (a minimal sketch in code follows the list):
- Permissions levels, with potential user names, passwords, account settings, etc.
- Sets of data inputs for positive and negative scenarios driven by the acceptance criteria
- Expected data outputs for each input set
- Possible failure indicators: if a software function fails, what notification or error code is returned?
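As a rough sketch, those tables might start life as simple structured data. Everything below, from the hypothetical login feature to the field names and error codes, is purely illustrative:

```python
# A minimal sketch of test-design data tables, assuming a hypothetical
# login feature. Roles, credentials, and error codes are illustrative.

PERMISSION_LEVELS = [
    {"role": "admin",    "username": "admin_user",    "password": "Adm1n!pass"},
    {"role": "standard", "username": "standard_user", "password": "Std!pass1"},
    {"role": "readonly", "username": "readonly_user", "password": "R0nly!pass"},
]

# Input sets paired with expected outputs, covering positive and negative
# scenarios drawn from the acceptance criteria.
LOGIN_CASES = [
    # (username, password, expected_result, expected_error_code)
    ("standard_user", "Std!pass1",  "session_created", None),
    ("standard_user", "wrong_pass", "rejected",        "ERR_AUTH_401"),
    ("",              "Std!pass1",  "rejected",        "ERR_MISSING_FIELD"),
]
```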
This data set feeds tests that are ready for execution before the user interface exists. Working along the software’s architecture, a tester usually starts with the data store, the message structures, and the communications protocols between the various components. From there, the tester can perform the following (a combined sketch follows the list):
- Network communication testing
- Data validation (using SQL or other query languages and techniques)
- API (application programming interface) testing via SOAP or REST
- Web services testing
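As an illustration, a single pre-UI test might post data through a REST API and then validate the data store directly with SQL. Everything here is an assumption for the sketch: the /api/orders endpoint, the orders table, and the SQLite file standing in for the real data store.

```python
# A pre-UI test sketch: exercise a hypothetical REST endpoint, then
# verify persistence at the data layer, with no user interface involved.
import requests
import sqlite3

BASE_URL = "https://test-env.example.com"  # assumed test environment


def test_create_order_persists_to_db():
    payload = {"customer_id": 42, "sku": "ABC-123", "quantity": 2}
    resp = requests.post(f"{BASE_URL}/api/orders", json=payload, timeout=10)
    assert resp.status_code == 201
    order_id = resp.json()["order_id"]

    # Validate the data store directly, independent of any UI.
    conn = sqlite3.connect("orders_test.db")  # stand-in for the real store
    row = conn.execute(
        "SELECT customer_id, sku, quantity FROM orders WHERE id = ?",
        (order_id,),
    ).fetchone()
    conn.close()
    assert row == (42, "ABC-123", 2)
```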
What actions?
After the user interface is built on top of these initial layers, testing shifts back to the traditional functional model described above. The tester now has the information needed to write actions and expected results for using the system, with the added benefit that the data inputs are already defined. The positive and negative conditions already exist; the tester does not have to reinvent the wheel. The functionality of the system is already modeled by the data, and the tester’s job is to put action steps on top of it.
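As a rough illustration of that layering, here is a Selenium WebDriver sketch that reuses the login input sets defined earlier; the page URL and element IDs are assumptions:

```python
# Action steps layered on already-defined data: a hypothetical login page
# driven by Selenium WebDriver. Element IDs and URLs are assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By


def run_login_case(username, password, expect_success):
    driver = webdriver.Chrome()
    try:
        driver.get("https://test-env.example.com/login")
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()
        if expect_success:
            assert "dashboard" in driver.current_url
        else:
            assert driver.find_element(By.ID, "error-banner").is_displayed()
    finally:
        driver.quit()


# Reuse the same positive and negative input sets from the data tables.
run_login_case("standard_user", "Std!pass1", expect_success=True)
run_login_case("standard_user", "wrong_pass", expect_success=False)
```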
As each new function is tested manually, fast-paced projects need automated regression to ensure previously working functions are not broken by new code. Automation is typically data-driven: the test software iterates through combinations of inputs, queries the back-end server to make sure the data was stored correctly, and validates the inputs through other views in the interface. Since the data sets used in automation have already been verified during previous test cycles, a foundation is in place that gives the best chance of building automated regression with a meaningful return on the technological investment.
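A hedged sketch of what that data-driven loop could look like with pytest, reusing the vetted input sets; the endpoint, status codes, and sessions table are hypothetical:

```python
# Data-driven regression sketch: pytest iterates the vetted input sets
# and checks the back end after each one. Names here are illustrative.
import pytest
import requests
import sqlite3

CASES = [
    # (username, password, expected_status)
    ("standard_user", "Std!pass1",  201),
    ("standard_user", "wrong_pass", 401),
    ("",              "Std!pass1",  400),
]


@pytest.mark.parametrize("username,password,expected_status", CASES)
def test_login_regression(username, password, expected_status):
    resp = requests.post(
        "https://test-env.example.com/api/login",
        json={"username": username, "password": password},
        timeout=10,
    )
    assert resp.status_code == expected_status

    if expected_status == 201:
        # Confirm the session was actually stored, not just acknowledged.
        conn = sqlite3.connect("sessions_test.db")  # stand-in data store
        count = conn.execute(
            "SELECT COUNT(*) FROM sessions WHERE username = ?", (username,)
        ).fetchone()[0]
        conn.close()
        assert count >= 1
```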
What platform?
When the project nears release, it’s time for performance testing. Apply load via a performance test platform such as JMeter, Visual Studio Webtest, or LoadRunner, and measure the results to predict how the system will behave when that many “real” people are using the software. Like functional automation, performance test platforms are typically data-driven: thousands of simulated users fill out forms, perform searches, and view results. By this point the data has been defined, tweaked, and vetted many times, and it is ready for reuse in performance testing.
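Tools differ, but the shape of a data-driven load test is the same everywhere. As a sketch, here is what it might look like in Locust, a Python-based load tool (not one of the platforms named above, but the same idea); the CSV file, endpoint, and host are assumptions:

```python
# Load-test sketch: simulated users drive the same vetted data through
# a hypothetical search endpoint. Run with:
#   locust -f loadtest.py --host https://test-env.example.com
import csv
import random

from locust import HttpUser, task, between

# Reuse the data sets already defined and vetted in earlier test cycles.
with open("search_terms.csv", newline="") as f:
    SEARCH_TERMS = [row["term"] for row in csv.DictReader(f)]


class SearchingUser(HttpUser):
    wait_time = between(1, 3)  # think time between user actions

    @task
    def search(self):
        term = random.choice(SEARCH_TERMS)
        # "name" groups all parameterized requests into one report line.
        self.client.get("/search", params={"q": term}, name="/search")
```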
One of the greatest challenges a software tester faces in the current landscape is knowing how to start on day one of a project, when the slate is cleanest and the least information is known. Before the user interface and experience are a glimmer in anyone’s eye, the tester must answer, “What data will I have?” Starting with the data is the first and most important step toward not only providing value to the team, but also keeping up with the demands of the team’s velocity.