In many large system deployments, especially in waterfall SDLC environments, the key business users perform a true User Acceptance Test cycle. The release process mandates that the system be tested by those who best understand what the application needs to do, and their pass/fail approval is a required gate before release.
This is not always the case.
On Agile teams, a Product Owner represents the business side of the project, even if they are not from the business organization and are part of IT. At the end of every sprint, the Product Owner is the key approver of the demonstrated software. They ask questions, evaluate the software against the acceptance criteria, and provide their pass/fail approval. With their blessing, the software can, per the Agile process, go to production and be available to the end user.
Where does that leave the “key business user” community? And what about another common scenario, where other user groups (tech support, training, etc.) are asked to take part in UAT?
What does User Acceptance Test mean when everything has been “accepted” by the product owner, and the target audience isn’t in a position to “test”?
By reframing the UAT cycle as a “readiness evaluation”, new possibilities emerge. The users gain the freedom to use the software as they see fit in their day-to-day activities, rather than feeling they need to hunt for defects and try to “break it”. The pressure of serving as gatekeepers to product quality, deciding whether the release can pass into production, is lifted.
There is still the need to define readiness criteria, execution procedures, and a rigorous process for the evaluation. This isn’t play time. Each company is paying people to spend time on this exercise, and expects a return of valuable information on that investment. At SPR Consulting, we’ve helped our clients run UAT cycles in a wide variety of styles and formats. In each one, some amount of scripting and feedback collection was necessary to ensure that the evaluation exercise generated valuable input into the software lifecycle prior to a production release. This process may include:
• Definition of the screens/pages to be evaluated, and in what order
• User roles and permission settings, matching the day-to-day responsibility of each person participating in the evaluation
• A clear feedback form to be filled out with each step in the process
• A precise, maintained execution timeline for each evaluation session
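As a sketch, the checklist above could be captured in a lightweight structure so that screens, roles, feedback, and timeline status stay connected per session. The names here (`URESession`, `EvaluationStep`) are illustrative only, not part of any SPR tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationStep:
    screen: str             # screen/page under evaluation, listed in planned order
    role: str               # user role performing the step, matching day-to-day duties
    feedback: str = ""      # free-text feedback captured at this step
    completed: bool = False

@dataclass
class URESession:
    evaluator: str
    scheduled_minutes: int  # keeps the execution timeline explicit
    steps: List[EvaluationStep] = field(default_factory=list)

    def record(self, screen: str, feedback: str) -> None:
        """Mark the first pending step for a screen complete and attach feedback."""
        for step in self.steps:
            if step.screen == screen and not step.completed:
                step.feedback = feedback
                step.completed = True
                return
        raise ValueError(f"No pending step for screen: {screen}")

    def open_items(self) -> List[str]:
        """Screens still awaiting feedback -- useful for tracking the session timeline."""
        return [s.screen for s in self.steps if not s.completed]

# Example session: two screens, evaluated in order by role.
session = URESession("evaluator-1", 60, [
    EvaluationStep("Order Entry", "Sales"),
    EvaluationStep("Invoice Review", "Billing"),
])
session.record("Order Entry", "Clear layout, but the Save button is hard to find")
print(session.open_items())  # -> ['Invoice Review']
```

Even a simple structure like this makes it obvious when a session ran out of time before every screen was covered, which is exactly the kind of signal a readiness evaluation should surface.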
The User Readiness Evaluation (to invent a new acronym, URE) determines whether true end users can work with the new software, feature, or function and are ready to adopt it. If the evaluation group cannot easily figure out a feature, the odds are very good the end users will not either. In this regard, a URE cycle can highlight functions that require additional focus on training or documentation. And if testing has been inadequate up to this point, the readiness evaluation can surface quality issues and delay the launch.
Involving the users, and/or their business representatives, is a key component in releasing software that not only works and is relatively free of bugs, but can also be absorbed into daily work streams with minimal disruption to business activities. Those two concepts have long been known in the testing profession as “verification” and “validation”, and they are the ever-present goals of the Testing Practice at SPR Consulting. If you’re doing too much of one, and not enough of the other, the balance needed for a successful software product, built to last, may be missing.