Apologies for the late post – I’ve been in the middle of moving, and the dust is finally starting to settle.
Now that the system is built, integrated, and optimized, we need to ensure that it behaves as expected. This is known formally as verification, and its intent is to answer whether the system was built ‘right’. In this case, ‘right’ means that the system operates as intended, and that it looks, ‘feels’, and behaves as the designers wanted. We can assume that, at this point, basic testing of the system design has been done to ensure compliance with specifications, but our verification testing is intended to go beyond basic, component-level functionality.
In order to ensure that the system operates as intended, we first need to define the conditions under which it is intended to operate. These should be defined in the requirements document, either as a set of assumptions or as explicit design requirements for the system. Operating conditions are often referred to as the ‘intended use’ of the system. Intended use provides context for the extent to which the system’s operation must be verified, but it should not restrict verification unnecessarily. I’ll cover this in more detail in another post, but it’s important to remember that system usage should drive system verification.
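One way to make intended use concrete is to record the operating conditions as explicit, checkable assumptions. Here is a minimal sketch; the condition names and ranges are illustrative assumptions, not taken from any particular requirements document.

```python
# Illustrative operating conditions for a hypothetical system; each entry
# maps a condition name to its assumed (low, high) range from intended use.
OPERATING_CONDITIONS = {
    "ambient_temp_c": (0, 40),        # assumed indoor temperature range
    "supply_voltage_v": (11.0, 13.0),
    "humidity_pct": (10, 90),
}

def outside_intended_use(measured):
    """Return the names of any conditions the measured environment violates."""
    return [name for name, (lo, hi) in OPERATING_CONDITIONS.items()
            if not (lo <= measured.get(name, lo) <= hi)]

# An environment within every range yields no violations; one outside a
# range (e.g. 55 °C ambient) is flagged, telling us we are beyond the
# conditions verification was scoped against.
```

The point of a structure like this is that verification coverage can be checked against the same list of conditions the requirements document states, rather than against an implicit, undocumented notion of ‘normal use’.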
With the system’s usage well-defined, requirements should be prioritized for verification. The highest-priority requirements should be verified first; that is, the requirements whose failure would prevent the system from operating at its most valuable. Requirements that are considered ‘nonessential’ or ‘nice-to-have’ can be verified after the essential ones, but value to the various customers (as in the inaugural post for this blog) should be considered when ordering verification testing activities.
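That ordering rule can be sketched in a few lines. The `Requirement` fields, the requirement IDs, and the 1–5 value scale below are illustrative assumptions; the only idea taken from the text is ‘essential first, then by customer value’.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    description: str
    essential: bool      # would failure prevent the system's core operation?
    customer_value: int  # assumed 1 (low) to 5 (high), per stakeholder input

def verification_order(requirements):
    """Essential requirements first; within each group, highest value first."""
    return sorted(requirements, key=lambda r: (not r.essential, -r.customer_value))

reqs = [
    Requirement("REQ-3", "Status display readable in sunlight", False, 2),
    Requirement("REQ-1", "Unit powers on within 5 seconds", True, 5),
    Requirement("REQ-2", "Logs exportable for offline review", False, 4),
]

ordered = verification_order(reqs)
# The essential requirement (REQ-1) comes first; the nice-to-haves
# follow in order of customer value (REQ-2, then REQ-3).
```

In practice the scoring would come from the customer analysis, not a single integer, but even a crude ranking like this forces the team to decide what gets verified first when the schedule slips.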
Verification can be executed in many ways. Typically, one verifies system function by testing against the system’s requirements and analyzing the data from these tests to show that the system fulfills them. However, systems can also be verified through documentation, inspection, or modeling of behavior.
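A simple way to keep track of which method covers which requirement is a traceability record. This sketch assumes the four methods named above; the requirement IDs and evidence names are hypothetical.

```python
# The agreed set of verification methods from the paragraph above.
VERIFICATION_METHODS = {"test", "documentation", "inspection", "modeling"}

# Hypothetical plan: each requirement gets a method and a pointer to evidence.
verification_plan = {
    "REQ-1": {"method": "test", "evidence": "test report TR-001"},
    "REQ-2": {"method": "inspection", "evidence": "inspection checklist IC-004"},
    "REQ-3": {"method": "modeling", "evidence": "thermal simulation results"},
}

def invalid_methods(plan):
    """Flag requirements assigned a method outside the agreed set."""
    return [rid for rid, entry in plan.items()
            if entry["method"] not in VERIFICATION_METHODS]

# An empty result means every requirement is covered by a recognized method.
```

The value of the check is mostly procedural: it catches requirements that were quietly assigned ‘vendor says so’ as a method, which connects to the certificate-of-analysis caveat below.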
When testing systems, we often need to design and build verification test units. This implies a new system to be designed, with new design requirements and possibly a new set of verification tests of its own. Ideally, such test systems are smaller and more easily defined than the system under test, in order to keep the design manageable. Often, ‘off-the-shelf’ solutions can be purchased from outside companies, with certificates of verification or analysis. In many cases, these certificates of analysis are sufficient to ‘prove’ verification of these test systems, though the team verifying the system needs to understand the possible consequences of relying solely on the vendor’s analysis or verification data.
Naturally, verification can’t cover every scenario in which a system will be used, and the tradeoff any testing team must make is between completeness and a reasonable timescale for testing. So, the next time you experience a temporary glitch in a system you’re using, think about how likely it is that the manufacturer and seller ever needed to test the scenario in which you’re using the system.