SOFTWARE TESTING STRATEGY – A further explanation of strategy in software testing.
A software testing strategy makes it possible for designers to judge the success of the system that has been built. The important point is that the planning and execution steps must be laid out well in advance, along with how much time, effort, and resources will be needed.
The testing strategy has the following characteristics:
- Testing begins at the lowest module level, proceeds to the modules above it, and the results are then integrated.
- Different testing techniques may be appropriate at different points in time.
- Testing is carried out by the software developers and, for large projects, by an independent testing group.
- Testing and debugging are different activities, but debugging is accommodated within the testing strategy.
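The bottom-up order described above can be sketched with Python's standard `unittest` module. This is a minimal illustration, not from the text: the `discount` and `invoice_total` functions are hypothetical stand-ins for a low-level module and the module above it.

```python
import unittest

# Hypothetical lowest-level module: a discount calculation.
def discount(price, rate):
    return price * (1 - rate)

# Hypothetical higher-level module that integrates the unit above.
def invoice_total(prices, rate):
    return sum(discount(p, rate) for p in prices)

class TestDiscountUnit(unittest.TestCase):
    # Step 1: test the lowest module in isolation.
    def test_discount(self):
        self.assertEqual(discount(100, 0.25), 75.0)

class TestInvoiceIntegration(unittest.TestCase):
    # Step 2: integrate the modules and test the combined result.
    def test_invoice_total(self):
        self.assertEqual(invoice_total([100, 60], 0.25), 120.0)

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```

Only after `TestDiscountUnit` passes is the integration test above it meaningful; a failure there would otherwise be ambiguous between the two modules.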
Software testing is one element of a broader topic that is often referred to as verification and validation (V&V).
- Verification: the collection of activities that ensure the software correctly implements a specific function.
- Validation: a different collection of activities that ensure the software that has been built meets customer requirements. In other words:
- Verification: "Are we building the product right?"
- Validation: "Are we building the right product?"
- The definition of V&V encompasses many of the activities that we refer to as Software Quality Assurance (SQA).
- Testing is one of the tasks in the system development cycle, and its progression can be depicted as a spiral:
- Unit testing focuses verification effort on the smallest unit of software design: the module.
- Unit testing is always white-box oriented, and the step can be conducted for multiple modules in parallel or in sequence.
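As a white-box illustration of this, here is a small sketch (the `grade` function is a hypothetical module, not from the text) in which the test cases are derived from the module's internal paths — one case per branch — rather than from the requirements alone:

```python
import unittest

# Hypothetical module under test: classify an exam score.
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

class TestGradeWhiteBox(unittest.TestCase):
    # White-box testing: exercise every internal path of the module.
    def test_error_path(self):
        with self.assertRaises(ValueError):
            grade(-1)

    def test_fail_branch(self):
        self.assertEqual(grade(59), "fail")

    def test_pass_branch(self):
        self.assertEqual(grade(60), "pass")

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```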
Unit Testing Considerations
- The module interface is tested to ensure that information flows properly into and out of the program unit under test.
- The interface is tested first, because data must be able to flow correctly between modules before any other test is meaningful.
Checklist for testing the interface:
- Is the number of input parameters equal to the number of arguments?
- Do the attributes of parameters and arguments match?
- Do the units systems of parameters and arguments match?
- Is the number of arguments transmitted to called modules equal to the number of parameters?
- Are the attributes of arguments transmitted to called modules equal to the attributes of the parameters?
- Is the units system of arguments transmitted to called modules equal to the units system of the parameters?
- Are the number of attributes and the order of arguments passed to built-in functions correct?
- Are there any references to parameters that are not associated with the current point of entry?
- Have input-only arguments been altered?
- Are global variable definitions consistent across modules?
- Are constraints passed as arguments?
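The first items of this checklist — argument counts and matching attributes — can even be probed mechanically. Below is a minimal sketch using Python's standard `inspect` module; the `transfer` function and the `check_call` helper are hypothetical illustrations, not part of any library:

```python
import inspect

# Hypothetical module interface under test, with annotated parameter types.
def transfer(amount: float, currency: str) -> float:
    return amount

def check_call(func, *args):
    """Compare an intended call against the function's declared interface:
    the number of arguments and, where annotated, their types."""
    sig = inspect.signature(func)
    params = list(sig.parameters.values())
    # Checklist: is the number of arguments equal to the number of parameters?
    if len(args) != len(params):
        return False
    # Checklist: do the attributes of arguments and parameters match?
    for arg, p in zip(args, params):
        if p.annotation is not inspect.Parameter.empty and not isinstance(arg, p.annotation):
            return False
    return True
```

For example, `check_call(transfer, 10.0, "USD")` passes, while a missing argument or a string where a float is expected fails the check.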
If a module performs external I/O, additional interface tests must be conducted:
- Are file attributes correct?
- Are OPEN/CLOSE statements correct?
- Does the format specification match the I/O statement?
- Does the buffer size match the record size?
- Are files opened before use?
- Are end-of-file conditions handled?
- Are I/O errors handled?
- Are there any textual errors in output information?
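Several of these I/O checks translate directly into test cases. A minimal sketch, assuming a hypothetical `read_totals` function as the module under test: the first test covers open-before-use and end-of-file handling, the second covers I/O error handling.

```python
import os
import tempfile
import unittest

# Hypothetical module function that performs external I/O.
def read_totals(path):
    """Sum one integer per line of a text file."""
    total = 0
    with open(path) as f:   # file opened before use, closed automatically
        for line in f:      # end-of-file terminates the loop cleanly
            total += int(line)
    return total

class TestReadTotals(unittest.TestCase):
    def test_reads_to_end_of_file(self):
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write("1\n2\n3\n")
            path = f.name
        try:
            self.assertEqual(read_totals(path), 6)
        finally:
            os.remove(path)

    def test_missing_file_is_an_error(self):
        # I/O errors must surface to the caller, not be silently swallowed.
        with self.assertRaises(FileNotFoundError):
            read_totals("no-such-file.txt")

if __name__ == "__main__":
    unittest.main(argv=["unit"], exit=False)
```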
Among the more common errors in computation are:
- Misunderstood or incorrect arithmetic precedence
- Mixed-mode operations
- Incorrect initialization
- Precision inaccuracy
- Incorrect symbolic representation of an expression
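Three of these error classes can be demonstrated in a few lines of Python. This sketch is illustrative only; the `bad_sum` function is a hypothetical buggy module, not from the text.

```python
# Precision inaccuracy: binary floating point cannot represent 0.1 exactly,
# so an "obvious" arithmetic identity fails.
assert 0.1 + 0.2 != 0.3

# Mixed-mode operations: integer and true division behave differently.
assert 7 // 2 == 3     # floor (integer) division
assert 7 / 2 == 3.5    # true division

# Incorrect initialization: the accumulator starts at 1 instead of 0,
# so every result is off by one -- a unit test makes the bias visible.
def bad_sum(values):
    total = 1  # bug: should be initialized to 0
    for v in values:
        total += v
    return total

assert bad_sum([2, 3]) == 6  # 2 + 3 is 5; the off-by-one exposes the bad initialization
```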
Test cases must uncover errors such as:
- Comparison of different data types
- Incorrect logical operators or precedence
- Expectation of equality when precision error makes exact equality unlikely
- Incorrect comparison of variables
- Improper or nonexistent loop termination
- Failure to exit when divergent iteration is encountered
- Improperly modified loop variables
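Two of these error classes — equality despite precision error, and failure to exit a divergent iteration — suggest concrete coding defenses. A small sketch (the `newton_sqrt` function is a hypothetical example, not from the text):

```python
import math

# Expectation of equality despite precision error: compare accumulated
# floats with a tolerance, never with strict ==.
total = sum(0.1 for _ in range(10))
assert total != 1.0              # strict equality fails
assert math.isclose(total, 1.0)  # tolerance-based comparison succeeds

# Loop termination: cap the iterations so a divergent iteration cannot hang.
def newton_sqrt(a, max_iter=50):
    """Newton's method for sqrt(a), a > 0, with an explicit exit
    when the iteration fails to converge."""
    x = a
    for _ in range(max_iter):    # the loop variable is never modified in the body
        nxt = 0.5 * (x + a / x)
        if math.isclose(nxt, x):
            return nxt
        x = nxt
    raise ArithmeticError("iteration did not converge")
```

The iteration cap turns a potential infinite loop into a detectable error, which is exactly the kind of defect these test cases are meant to reveal.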