In Part 1 of our blog series on Test Assessment, we looked at how SPR has developed a model based on the traditional Six Sigma model of process improvement. In the SPR model for improving software testing, the four phases are:
- Scoping – Setting the course for the assessment
- Investigation – Gathering the necessary information to make the right decisions
- Reporting – Communicating the findings and recommendations
- Transformation – Implementing the findings and recommendations
Assessments can be focused on one specific software project team, a wide-scale application (such as an e-commerce portal that impacts many other applications), or the entire IT enterprise. The examples given in this blog series tend to discuss a specific application, but scale outward easily.
Defining the New Square Boundaries
In our Scoping phase, we look at the four corners of a square that contains a “quality factor” for a software project: Speed, Cost, Coverage and Risk. Each one of those corners represents an aspect of a project which can be modified to meet a client’s needs. Some companies are risk-averse and will make sacrifices in other areas to mitigate that factor. Others need to be able to put releases out to market as quickly as possible and are willing to absorb risk to do that.
The area of the shape between the four corners must always stay the same. Sometimes, these decision points are easy. In the case of a project where test coverage needs to be expanded, it may be as simple as adding extra testers to the project and absorbing the cost.
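To make the constant-area constraint concrete, here is a minimal sketch of one way it could be scored. The scoring scheme is our own illustration, not part of the SPR model: we place the four corners on perpendicular axes and treat each score as how favorable that dimension currently is (a higher Cost score means cheaper, a higher Risk score means safer), so improving one corner forces others inward.

```python
def square_area(speed, cost, coverage, risk):
    """Area of the quadrilateral whose corners lie at the given
    distances along four perpendicular axes (shoelace formula)."""
    # Vertices: (speed, 0), (0, cost), (-coverage, 0), (0, -risk)
    return 0.5 * (speed + coverage) * (cost + risk)

def speed_for_constant_area(area, cost, coverage, risk):
    """Solve for the Speed score that keeps the area unchanged
    after the other three corners have moved."""
    return 2 * area / (cost + risk) - coverage

# Hypothetical baseline: every corner scored 3 on a 0-10 scale.
baseline = square_area(speed=3, cost=3, coverage=3, risk=3)   # 18.0

# The client keeps Coverage as-is, spends more (Cost 3 -> 2) and
# accepts more risk (Risk 3 -> 2); the constant-area constraint
# tells us how far the Speed corner can move out in exchange.
new_speed = speed_for_constant_area(baseline, cost=2, coverage=3, risk=2)  # 6.0
```

The toy numbers show the shape of the trade-off: shrinking the Cost and Risk corners while holding Coverage lets the Speed corner move from 3 to 6 without changing the area.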
Most of the time, this process is not so simple and requires a set of investigative techniques to determine where the new points along each line go. Let’s imagine a client for whom speed is of the utmost importance. They like the test coverage they have and want to keep it as-is, but they are willing to take on some risk and spend some extra money on people or technologies to meet their end goals. How do we determine how far to move each of those two points? And how do we know which underlying changes to their working methods will deliver enough improvement?
Analyzing and assessing the existing testing approach is a three-pronged cycle: examining testing artifacts, interviewing the people who create and use them, and adjusting the processes the project follows.
In our example, when looking at ways to potentially add Cost and absorb Risk in order to improve Speed, we start by examining testing artifacts, most notably any existing test repositories and results dashboards. Is there an existing tool that can be purchased and integrated to turn manual test cases into usable, maintainable automation? Would a new reporting tool assist with project status and communication? Do the appropriate people have the right level of access, and the guidance, to use the information contained in each artifact?
The second activity is to interview the testers, and everyone who works with them, to identify gaps and areas for improvement. Do the testers have the necessary skills to incorporate automation or other new technical approaches? What challenges does the project team face as it relates to quality? Are there work tasks we could stop doing with minimal impact to the project?
Finally, we identify adjustments to make to the processes followed by the project. These aren’t only big changes. We live in a technical world full of Agile & DevOps transformations, and yet the little things often provide the greatest return. If removing some of the quality gates in an old-school release process, along with the underlying burden of preparing the documentation to get through each gate, is an acceptable project Risk, then it should be considered. When the states in a defect management model do not accurately communicate when and how bugs are being fixed, change them so they do. Does the Scrum team need one more person with a specialty in build processes, service virtualization, test data management, or the like, in order to greatly improve efficiency and speed? Then try it!
Why the Feedback Loop?
Even before suggested changes are implemented, information gathered in each of the three evaluations can affect the others in unexpected ways. We may look at the test artifacts and say, “That’s a great bug report!” only to interview project team members who say they’ve never seen it, while analysis of the defect triage process shows the report comes out a week too late to be of any value to those who need the information. Or the automation suites implemented years ago work fine for regression testing, but a new DevOps-based approach requires an investment of time and resources to rewrite them so they run fast enough for the updated build process. It may take two or three cycles through the three categories before the set of changes to make crystallizes.
Re-Balancing the Square
Looking back to our example, during the Scoping phase the client said they were willing to take on some risk and additional cost in order to improve their speed. After the first go-around of activities, SPR determined that a new approach to automation was required and that investment in a service virtualization tool would allow testing to be done faster with each build, with only slightly increased Risk from not testing on physical servers similar to those in Production. The new “square” would look like this:
Reviewing this documented approach with a second iteration of investigation activities may bring further changes. Especially when it comes to Cost, many clients and project teams are more sensitive to the weight on that axis than on the other three. A second round of artifact analysis and process definition could reveal that re-using some existing automation, rather than a complete reboot, would bring extra Risk but reduce the Cost enough to make it palatable to the project sponsors and their budget. Updating the square looks like this:
Next time, in our final blog in this series, we’ll discuss how these findings are documented and approved to conclude an Assessment, and give our clients the roadmap to implement the changes necessary to achieve their end goals.