Over the last 50 years, a number of principles have been proposed that establish general guidelines common to all testing.

Principle 1 – Tests show the presence of defects, not their absence

Principle 2 – Exhaustive tests do not exist (or are impossible)

Principle 3 – Early Testing saves time and money

Principle 4 – Grouping defects

Principle 5 – Beware of the pesticide paradox

Principle 6 – Testing depends on context

Principle 7 – Fallacy of absence of errors

Principle 1 – Tests show the presence of defects, not their absence

We can never say that our product is free of defects.

We have a living example in Samsung, one of the largest technology companies, a fixture of the Forbes list handling billions of dollars every year.

Yet one of its flagship products, the Samsung Galaxy Note 7, had to be withdrawn from the market in October 2016, just two months after its launch, because the device could catch fire on its own, both at rest and in use.

Do you think a company with that level of resources did not test the device thoroughly enough to be sure of its success in the market and to protect its reputation?

Tests can show that defects are present, but they cannot prove that there are none.

Testing reduces the likelihood of hidden defects remaining in the software, but even if no defects are found, that is not proof of correctness.

Principle 2 – Exhaustive tests do not exist (or are impossible)

Testing everything with all combinations of inputs and preconditions is not feasible, except in trivial cases.

Let’s take an example: we have to test a part of the system that shows the temperature by city; you enter the city’s zip code and it displays the current temperature there.

There are currently 195 countries in the world, and the United States alone has 19,522 cities. Do the math and it becomes clear that verifying the correct temperature for every city is a test far too expensive to perform.

Or imagine you need to test a feature consisting of 10 text fields, each with about 6 possible values: the number of combinations to try would be 6 raised to the power of 10, that is, more than 60 million combinations.
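The arithmetic above can be checked in a couple of lines (the field and value counts are the hypothetical numbers from the text):

```python
# Sketch of the combinatorial explosion: a form with 10 text
# fields, each allowing 6 possible values.
fields = 10
values_per_field = 6

# Every field choice multiplies the total, so the count is 6^10.
combinations = values_per_field ** fields
print(combinations)  # → 60466176
```

At one second per test case, running all of them would take almost two years, which is why risk-based selection is used instead.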

For this reason, instead of attempting exhaustive testing, we must analyse risks, apply test techniques, and set priorities so as to focus the testing effort.

Principle 3 – Early Testing saves time and money

To detect defects early, both static and dynamic test activities should be initiated as early as possible in the software development life cycle.

Early testing is sometimes called shift left, because the test activities are moved to the left on the project timeline; that is, they begin early in the life cycle.

During early testing, we try to find defects before they pass on to the next stage of software development.

According to research conducted by IBM, the cost of removing software defects increases over time: a defect found in post-production costs 30 times more than it would have cost if it had been found in the design stage.

Conducting tests early in the software development life cycle helps reduce or eliminate the cost of the changes.

Principle 4 – Grouping defects

Usually, most of the defects discovered during pre-release testing, or the defects responsible for most operational failures, are found in a small number of modules.

Defects may cluster in a module because it is highly complex, because it has undergone more changes than the rest (which may have introduced new defects), or for other causes.

This phenomenon is closely related to the Pareto principle, also called the 80/20 rule: applied to this problem, roughly 80% of the problems are found in 20% of the modules.

So if you want to discover as many defects as possible, it is useful to apply this principle: focus the tests on those areas or modules where more defects have been found, and use this information as input for the risk analysis mentioned in Principle 2.

Of course, this does not mean neglecting the areas with the lowest defect density, but rather testing them in the proper proportion.
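As an illustration of defect clustering, here is a small sketch with entirely made-up module names and defect counts; the point is how ranking modules by defect density tells you where to focus:

```python
# Hypothetical defect counts per module (illustrative data only).
defects = {
    "payments": 42, "auth": 31, "search": 9, "profile": 6,
    "settings": 4, "help": 3, "about": 2, "admin": 2,
    "reports": 1, "feedback": 0,
}

total = sum(defects.values())  # 100 defects overall
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

# The top 20% of modules (2 out of 10) hold most of the defects:
top = ranked[:2]
share = sum(count for _, count in top) / total
print(top, round(share * 100))  # → [('payments', 42), ('auth', 31)] 73
```

In this invented data set, 20% of the modules account for 73% of the defects, close to the 80/20 rule; real projects will vary, but the ranking step is the same.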

Principle 5 – Beware of the pesticide paradox

If we repeat the same tests over and over, eventually they will no longer detect any new defects.

To detect new defects, we must update existing tests and test data, and also write new tests. (Tests stop being effective at finding defects, just as pesticides stop being effective at killing insects after a while.)
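A minimal sketch of the idea, using a hypothetical `discount` function (10% off orders strictly above 100) as the code under test:

```python
# Hypothetical function under test: 10% off strictly above 100.
def discount(total):
    return total * 0.9 if total > 100 else total

# The original suite, rerun unchanged for months, keeps passing
# and can no longer reveal anything new:
old_cases = [(50, 50), (200, 180.0)]

# Adding a boundary value (exactly 100) exercises behaviour the
# old suite never touched. If the requirement later changed to
# "100 or more", only this fresh case would expose the defect.
new_cases = old_cases + [(100, 100)]

for total, expected in new_cases:
    assert discount(total) == expected
```

The refreshed suite still passes here, but it now covers the boundary, which is exactly the kind of renewal that keeps tests from losing their "pesticide" effect.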

In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome: a relatively low number of regression defects.

Principle 6 – Testing depends on context

Testing is done differently in different contexts.

For example, safety-critical industrial control software is tested differently from a mobile e-commerce application.

Testing in agile projects is carried out differently from testing in projects that follow a sequential life cycle.

Another example: the software of a passenger aircraft is tested differently from a website with static information.

Here we see that the level of risk is a fundamental factor when defining the types of tests needed. The more critical the system, and the greater the potential for loss of human life or economic loss, the more we need to invest in software testing.

And the last principle,

Principle 7 – Fallacy of absence of errors

Some organizations expect testers to be able to run all possible tests and find all possible defects, but principles 2 and 1, respectively, tell us that this is impossible.

Furthermore, it is a fallacy (that is, a mistaken belief) to expect that merely finding and fixing a large number of defects will assure the success of the system.

[Figure: what the customer needed vs. what was installed in production]

For example, even thoroughly testing all the specified requirements and fixing all the defects found could still produce a system that is difficult to use, that does not meet users’ needs and expectations, or that is inferior to competing systems.