Tuesday, November 11, 2008

Software Testing Activities

Introduction: Testing is an essential activity in software engineering. In the simplest terms, it amounts to observing the execution of a software system to validate whether it behaves as intended and to identify potential malfunctions. Testing can consume fifty percent, or even more, of development costs [1], and a detailed survey in the United States [2] quantifies the high economic impact of an inadequate software testing infrastructure.

Software testing is a broad term encompassing a wide spectrum of different activities, from the testing of a small piece of code by the developer (unit testing), to the customer validation of a large information system (acceptance testing), to the run-time monitoring of a network-centric, service-oriented application. At the various stages, test cases can be devised with different objectives in mind, such as exposing deviations from the user's requirements, assessing conformance to a standard specification, evaluating robustness to stressful load conditions or to malicious inputs, measuring given attributes such as performance or usability, or estimating operational reliability. Besides, the testing activity can be carried out according to a controlled formal procedure, requiring rigorous planning and documentation, or rather informally and ad hoc (exploratory testing).
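To make the first of these activities concrete, here is a minimal unit test sketch in Python using the standard unittest module; the function apply_discount and its expected values are invented for the example, not taken from any real system:

    import unittest

    def apply_discount(price, rate):
        """Hypothetical function under test: apply a percentage discount."""
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(price * (1 - rate), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            # Check the expected behaviour on an ordinary input.
            self.assertEqual(apply_discount(100.0, 0.25), 75.0)

        def test_invalid_rate_is_rejected(self):
            # Robustness check: malformed input should raise an error.
            with self.assertRaises(ValueError):
                apply_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()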
Six Common Questions:
WHY: why is it that we make the observations? This question concerns the test objective: are we looking for faults? Do we need to decide whether the product can be released? Or do we need to evaluate the usability of the user interface?
HOW: which sample do we observe, and how do we choose it? This is the problem of test selection, which can be done ad hoc, at random, or in a systematic way by applying some algorithmic or statistical technique. It has inspired much research, which is understandable not only because it is intellectually attractive, but also because how the test cases are selected (the test criterion) greatly influences test efficacy.
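As a toy illustration of these selection options, the Python sketch below contrasts ad hoc, random, and systematic (here, boundary-value) selection over an assumed integer input domain; the domain bounds and values are invented:

    import random

    # Hypothetical input domain for a function taking one integer in [0, 100].
    DOMAIN_MIN, DOMAIN_MAX = 0, 100

    # Ad hoc selection: whatever values the tester happens to think of.
    ad_hoc_tests = [7, 42, 99]

    # Random selection: sample the input domain uniformly.
    random.seed(0)  # fixed seed so the sample is reproducible
    random_tests = [random.randint(DOMAIN_MIN, DOMAIN_MAX) for _ in range(5)]

    # Systematic selection: a classic boundary-value criterion.
    boundary_tests = [DOMAIN_MIN, DOMAIN_MIN + 1, DOMAIN_MAX - 1, DOMAIN_MAX]

    print("ad hoc:  ", ad_hoc_tests)
    print("random:  ", random_tests)
    print("boundary:", boundary_tests)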
HOW MUCH: how big a sample? Dual to the question of how we pick the sample observations (test selection) is the question of how many of them we take (test adequacy, or the stopping rule). Coverage analysis and reliability measures are two "classical" approaches to answering this question.
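The sketch below illustrates the idea of a coverage-based adequacy criterion combined with a stopping rule; the coverage measurement is a stand-in (a hypothetical system with ten branches), not the output of a real coverage tool:

    import random

    def run_and_measure_coverage(test_input, covered):
        # Stand-in for executing the system under test and recording
        # which of 10 hypothetical branches the input exercises.
        covered.add(test_input % 10)

    covered_branches = set()
    TOTAL_BRANCHES = 10
    executed = 0

    random.seed(1)
    # Keep generating tests until 90% branch coverage (the adequacy
    # criterion) is reached, or a test budget runs out (the stopping rule).
    while len(covered_branches) / TOTAL_BRANCHES < 0.9 and executed < 1000:
        run_and_measure_coverage(random.randint(0, 10000), covered_branches)
        executed += 1

    print(f"{executed} tests reached "
          f"{len(covered_branches) / TOTAL_BRANCHES:.0%} branch coverage")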
WHAT: what is it that we execute? Given the (possibly composite) system under test, we can observe its execution either by taking it as a whole or by focusing only on a part of it, which can be larger or smaller (unit test, component/subsystem test, integration test) and more or less precisely defined. This aspect gives rise to the various levels of testing, and to the scaffolding needed to permit test execution of a part of a larger system.
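For instance, to execute one unit in isolation from the rest of a larger system, the scaffolding typically includes a stub replacing a missing dependency and a driver exercising the unit; a minimal Python sketch, with all names hypothetical:

    class PaymentGateway:
        """Real dependency: would call an external service."""
        def charge(self, amount):
            raise NotImplementedError("not available in the test environment")

    class PaymentGatewayStub:
        """Test stub: replaces the real gateway so the unit can run alone."""
        def __init__(self):
            self.charged = []
        def charge(self, amount):
            self.charged.append(amount)  # record the call instead of charging
            return True

    def checkout(cart_total, gateway):
        # Unit under test: depends on a gateway, injected as a parameter.
        return "paid" if gateway.charge(cart_total) else "failed"

    # Driver code exercising the unit against the stub.
    stub = PaymentGatewayStub()
    assert checkout(30.0, stub) == "paid"
    assert stub.charged == [30.0]
    print("unit test with stub passed")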
WHERE: where do we perform the observation? Closely related to what we execute is the question of whether testing is done in-house, in a simulated environment, or in the target final context. This question assumes the highest relevance for the testing of embedded systems.
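To illustrate, an embedded control routine might first be observed in-house against a simulated sensor before running in the target context; the following sketch assumes invented names and deliberately trivial logic:

    class SimulatedThermometer:
        """In-house simulation of the target hardware sensor."""
        def __init__(self, readings):
            self.readings = iter(readings)
        def read_celsius(self):
            return next(self.readings)

    def fan_controller(thermometer, threshold=30.0):
        # Embedded logic under test: decide whether to switch the fan on.
        return thermometer.read_celsius() > threshold

    # In the lab, the controller runs against the simulation...
    sim = SimulatedThermometer([25.0, 35.0])
    assert fan_controller(sim) is False
    assert fan_controller(sim) is True
    # ...while on the target device the same code would receive a driver
    # object that reads the real sensor registers.
    print("controller behaves as expected in the simulated environment")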
WHEN: when in the product lifecycle do we perform the observations? The conventional argument is: the earlier, the better, since the cost of fault removal increases as the lifecycle proceeds. But some observations, in particular those that depend on the surrounding context, cannot always be anticipated in the laboratory, and no meaningful observation can be made until the system is deployed and in operation.
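One way such in-operation observation can be realized is as a run-time monitor that checks an invariant on a deployed function and logs violations; the sketch below is a generic illustration, with a hypothetical invariant and function:

    import functools

    def monitor(invariant, log):
        """Run-time monitor: checks an invariant on every call in operation."""
        def decorate(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)
                if not invariant(result):
                    log.append((func.__name__, args, result))  # field observation
                return result
            return wrapper
        return decorate

    violations = []

    @monitor(lambda r: r >= 0, violations)
    def remaining_credit(balance, charge):
        # Hypothetical deployed function: should never return a negative value.
        return balance - charge

    remaining_credit(10, 3)
    remaining_credit(10, 15)  # violates the invariant at run time
    print("violations observed in operation:", violations)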
These questions provide a very simple and intuitive characterization schema of software testing activities, which can help in organizing the roadmap for future research challenges.

References:
[1] B. Beizer. Software Testing Techniques (2nd ed.). Van Nostrand Reinhold Co., New York, NY, USA, 1990.
[2] NIST. The Economic Impacts of Inadequate Infrastructure for Software Testing. http://www.nist.gov/director/prog-ofc/report02-3.pdf, 2002.
[3] A. Bertolino. "Software Testing Research: Achievements, Challenges, Dreams". ACM, 2007.
