Software Testing Tasks and Types

Software testing is the process of analyzing or operating software to identify defects. Despite the simplicity of this definition, it contains points that require further explanation. The word "process" emphasizes that testing is a planned, orderly activity. This matters even when development speed is the priority: a well-thought-out, systematic approach finds software errors faster than testing that is poorly planned or carried out in a hurry.

According to this definition, testing involves "analysis" or "operation" of a software product. Testing activities that analyze development artifacts without executing the program are called static testing. Static testing includes code inspections, walkthroughs, and desk checks, i.e. verification of the program without running it on a machine. In contrast, testing activity that involves operating the software product is called dynamic testing. Static and dynamic testing complement each other, and each implements its own approach to detecting errors.
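The distinction can be illustrated with a minimal sketch (the `average` function is a hypothetical example, not from the original text): a desk check finds the defect by reading the code, while a dynamic test finds it only by executing the code.

```python
# A function with a latent defect: average([]) divides by zero.
def average(xs):
    """Return the arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Static testing: a desk check (reading the code, no execution) reveals
# that an empty list makes len(xs) zero before the program is ever run.

# Dynamic testing: the code is actually executed against sample inputs,
# so the defect surfaces only when the failing input is exercised.
def test_average():
    assert average([2, 4, 6]) == 4
    try:
        average([])  # the defect shows up only at run time
        raise AssertionError("expected ZeroDivisionError")
    except ZeroDivisionError:
        pass

test_average()
```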

Testing Objectives

When using QA services, the following goals are pursued:

  • Increase the likelihood that the application under test will work correctly under all circumstances.
  • Increase the likelihood that the application under test will meet all the stated requirements.
  • Complete application testing within a short time frame.

Testing Tasks

  • Verify that the system operates within the specified client and server response times.
  • Verify that the most critical end-user workflows are performed correctly.
  • Test the user interfaces.
  • Verify that changes to the databases do not adversely affect existing software modules.
  • When designing tests, minimize the rework needed to accommodate possible changes to the application.
  • Use automated testing tools where appropriate.
  • Conduct testing so as not only to detect defects but also to prevent them.
  • When designing automated tests, follow development standards so that the scripts are reusable and maintainable.
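The last task above can be sketched as a data-driven test script (the `apply_discount` function and its cases are illustrative assumptions): keeping the test data in one table makes the script reusable and easy to maintain when the application changes.

```python
# Hypothetical application function under test (name is illustrative).
def apply_discount(price, percent):
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Reusable test data: when the application changes, only this table
# needs editing, not every individual test script.
DISCOUNT_CASES = [
    (100.0, 0, 100.0),
    (100.0, 25, 75.0),
    (80.0, 50, 40.0),
]

def run_discount_tests():
    """A maintainable test script: data-driven, one assertion loop."""
    for price, percent, expected in DISCOUNT_CASES:
        actual = apply_discount(price, percent)
        assert actual == expected, (price, percent, actual)

run_discount_tests()
```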

Comprehensive Software Testing

The purpose of comprehensive testing is to verify that each module of the software product interoperates correctly with the other modules. Comprehensive testing can proceed top-down or bottom-up: each module that is a leaf in the system tree is integrated with the next module at a lower or higher level until the full software product tree is assembled. This testing checks not only the parameters passed between two components but also global parameters and, in the case of an object-oriented application, all top-level classes.

Each comprehensive testing procedure consists of top-level test scripts that simulate a user performing a specific task, using lower-level unit tests with the necessary parameters to verify the interfaces. Most leading software testing companies use this type of testing. Once all unit-testing problem reports have been resolved, the modules are combined incrementally and tested together based on the control logic. Since modules can consist of other modules, part of the comprehensive testing work may already be done during unit testing. If the unit-test scripts were created with automated testing tools, they can be combined, and new scripts added, to test intermodule communication.
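A minimal sketch of incremental integration, assuming two hypothetical modules (`parse_order` and `order_total`) that have each passed unit testing: the integration test checks the parameters actually passed across their interface, not just each module alone.

```python
# Two hypothetical modules, each already unit-tested in isolation.
def parse_order(line):
    """Lower-level module: turn 'sku,qty' into a (sku, qty) pair."""
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

def order_total(pairs, prices):
    """Higher-level module: price a list of (sku, qty) pairs."""
    return sum(prices[sku] * qty for sku, qty in pairs)

# Integration test: combine the modules incrementally and verify the
# data crossing their interface as well as the combined behaviour.
def test_parse_then_total():
    prices = {"A1": 2.0, "B2": 5.0}
    pairs = [parse_order("A1, 3"), parse_order("B2, 1")]
    assert pairs == [("A1", 3), ("B2", 1)]      # interface parameters
    assert order_total(pairs, prices) == 11.0   # combined behaviour

test_parse_then_total()
```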

Comprehensive testing procedures are implemented and kept up to date, and problem reports are documented and tracked. Problem reports are typically classified by severity on a scale from 1 to 4 (1 being the most critical, 4 the least). After these reports are processed, the tester conducts regression testing to verify that the problems have been completely resolved.
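The tracking described above can be sketched as follows (the `ProblemReport` record and field names are illustrative assumptions): resolved reports are queued for regression retesting, most critical first.

```python
from dataclasses import dataclass

# A minimal problem-report record; severity 1 is most critical, 4 least.
@dataclass
class ProblemReport:
    ident: str
    severity: int        # 1..4
    resolved: bool = False

def regression_queue(reports):
    """Resolved reports ordered most-critical-first: the retest order
    for regression testing after fixes have been processed."""
    fixed = [r for r in reports if r.resolved]
    return sorted(fixed, key=lambda r: r.severity)

reports = [
    ProblemReport("PR-7", severity=3, resolved=True),
    ProblemReport("PR-9", severity=1, resolved=True),
    ProblemReport("PR-4", severity=2, resolved=False),  # not yet fixed
]
assert [r.ident for r in regression_queue(reports)] == ["PR-9", "PR-7"]
```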

Upstream Testing

Upstream (bottom-up) testing is a great way to isolate bugs. If an error is detected when testing a single module, the defect is obviously contained in that module, and there is no need to analyze the code of the entire system to find its source. If an error occurs when two previously tested modules work together, the problem lies in their interface. Another advantage of upstream testing is that the programmer performing it concentrates on a very narrow area (a single module, the data transfer between a pair of modules, and so on). As a result, testing is more thorough and more likely to uncover errors.

The main drawback of upstream testing is the need to write special wrapper (driver) code that calls the module under test. If that module, in turn, calls another module, a "stub" must be written for it. A stub is an imitation of a called function: it returns the same data but does nothing else. Writing wrappers and stubs clearly slows development, and they are useless in the final product. But once written, these elements can be reused every time the program changes. A good set of wrappers and stubs is a very effective testing tool.
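A minimal sketch of the wrapper-and-stub idea (the `shipping_cost` module and its dependency are hypothetical examples): the stub imitates the called function with canned data, and the driver exercises the module under test in isolation.

```python
# Module under test: depends on a lower-level lookup passed in to it.
def shipping_cost(order_id, fetch_weight):
    """Compute cost from a weight obtained via the injected dependency."""
    weight_kg = fetch_weight(order_id)
    return 5.0 + 2.0 * weight_kg

# Stub: imitates the called function, returning the same kind of data
# but doing nothing else (no database, no network).
def fetch_weight_stub(order_id):
    return {"A": 1.0, "B": 10.0}[order_id]

# Driver (wrapper code): calls the module under test with the stub.
def drive_shipping_cost_tests():
    assert shipping_cost("A", fetch_weight_stub) == 7.0
    assert shipping_cost("B", fetch_weight_stub) == 25.0

drive_shipping_cost_tests()
```

Once written, the same stub and driver can be rerun unchanged after every modification to `shipping_cost`, which is exactly the reuse the text describes.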

In contrast to upstream testing, a holistic testing strategy assumes that individual modules undergo no particularly rigorous testing until the system is fully integrated. The advantage of this strategy is that there is no need to write additional code. Therefore, many managers choose this method to save time: they believe it is better to develop one extensive set of tests and use it to check the entire system in a single pass. But this view is completely mistaken, and here is why.