Title: Software Testing
1 Software Testing
- The Basics of Software Testing
2 Motivation
- Test parts or the final product to check whether the given requirements are fulfilled
- Determine whether the product solves the required task
- However, there may be differences between the requirements and the implemented product
- If the product exhibits problems, the necessary corrections must be made
- Software is immaterial
  - Testing is more difficult because a software product is not a physical product
  - A direct examination is not possible
  - The only direct examination is reading the development documents
  - The dynamic behavior of the software cannot be checked by reading
  - It must be checked by testing (executing the software on a computer)
  - Its behavior must be compared to the given requirements
- Testing reduces the risk of using the software
- Bugs can be found by testing
3 Terminology
- A situation can be classified as incorrect only after we know what the expected correct situation is supposed to look like
- Failure
  - Non-fulfillment of a given requirement (an actual deviation of the component or system from its expected delivery, service, or result)
  - A discrepancy between the actual result or behavior and the expected result or behavior
    - The actual result is identified while executing the test
    - The expected result is defined in the specifications or requirements
  - A failure is present if a user expectation is not fulfilled adequately
  - Examples
    - A product that is too hard to use or too slow
    - A wrong output
    - An application crash
4 Terminology
- Failures occur because of faults in the software
- Fault (defect or bug)
  - A flaw in a component or system that can cause the component or system to fail to perform its required function
  - Examples
    - Wrongly programmed or forgotten code
    - An incorrect statement or data definition
- Every fault has been present since the software was developed or changed
- A fault materializes only during execution, becoming visible as a failure
5 Terminology
- Defect masking
  - An occurrence in which one defect prevents the detection of another
  - A fault is hidden by one or more other faults in different parts of the application
  - The failure only occurs after the masking defects have been corrected
  - So be careful: corrections can have side effects
- A fault can cause none, one, or many failures, for any number of users
- A fault and its corresponding failure can be arbitrarily far away from each other
  - E.g. a small corruption of stored data may only be found a long time after it first occurred
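As a minimal sketch (not from the slides; all names and defects are invented for illustration), the following code shows how one defect can mask another: the first defect aborts execution before the second one is ever reached.

```python
# Hypothetical sketch of defect masking: the fault in parse_discount() hides
# the fault in apply_discount(), so only one failure is observable at first.

def parse_discount(code: str) -> float:
    # Defect 1: the lookup table is missing the "GOLD" entry, so every GOLD
    # request raises KeyError before apply_discount() is ever reached.
    table = {"SILVER": 0.05}
    return table[code]

def apply_discount(price: float, rate: float) -> float:
    # Defect 2: the discount is added instead of subtracted. This fault is
    # masked: it only becomes visible once defect 1 has been corrected.
    return price + price * rate

def checkout(price: float, code: str) -> float:
    return apply_discount(price, parse_discount(code))

if __name__ == "__main__":
    try:
        print(checkout(100.0, "GOLD"))          # expected: 95.0
    except KeyError as exc:
        print("failure caused by defect 1:", exc)
    # After fixing defect 1, the test fails again: 105.0 != 95.0 (defect 2).
```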
6 Terminology
- The cause of a fault is an error or mistake made by a person
- Error
  - A human action that produces an incorrect result
  - Examples
    - Wrong programming by the developer
    - Misunderstanding of a command in a programming language
- Causal chain: error -> fault -> failure
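A minimal sketch of this chain, assuming a hypothetical requirement ("free shipping for orders of 100 or more"; the function name and rule are invented): the developer's mistake (error) leaves a wrong comparison in the code (fault), which only becomes visible as a failure when the program is executed with the boundary value.

```python
# Hypothetical sketch of the error -> fault -> failure chain.
# Error: the developer misreads the (assumed) requirement
# "free shipping for orders of 100 or more" and types ">" instead of ">=".

def free_shipping(order_total: float) -> bool:
    return order_total > 100          # fault: should be `order_total >= 100`

# The fault only becomes visible as a failure when the code is executed
# with the boundary value:
print(free_shipping(150))   # True  -> fault not triggered
print(free_shipping(100))   # False -> observable failure (expected: True)
```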
7 Terminology
- Testing is not debugging
  - To be able to correct a defect, it must be localized
  - Testing shows the effect of a defect, but not its location
- Debugging
  - The localization and the correction of defects
- Repairing a defect generally increases the quality of the product
8 Testing Purposes
- Executing a program in order to find failures
- Executing a program in order to measure quality
- Executing a program in order to provide confidence
- Analyzing a program or its documentation in order to prevent defects
- Test
  - The whole process of systematically executing programs to demonstrate the correct implementation of the requirements, to increase confidence, and to detect failures
9 Terminology
- Test object
  - The component or system to be tested
- Test data
  - Data that exists before a test is executed
- Test management
  - The planning, estimating, monitoring, and control of testing activities
- Test process
  - The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluation of test exit criteria and reporting, and test closure activities
10 Terminology
- Test run
  - Execution of a set of test cases on a specific version of the test object
- Test suite
  - A set of several test cases for a component or system under test, where the postcondition of one test case is often used as the precondition for the next one
- Test case
  - A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement
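Purely as an illustration (the field names are assumptions, not part of the slides), the four parts of a test case can be written down as a simple data structure:

```python
# A minimal sketch of capturing a test case's parts as a data structure.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str
    preconditions: list[str]        # state required before execution
    inputs: dict[str, object]       # input values for the test object
    expected_result: object         # derived from the specification (oracle)
    postconditions: list[str] = field(default_factory=list)

tc1 = TestCase(
    test_id="TC-1",
    preconditions=["bonus table configured"],
    inputs={"company_affiliation_years": 2},
    expected_result=0,              # bonus in %
)
print(tc1)
```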
11 Terminology
- Several test cases can often be combined to create test scenarios
  - The result of one test case is used as the starting point for the next test case
  - For example, a test scenario for a database application can contain
    - One test case which writes a data record into the database
    - Another test case which manipulates that record
    - A third test case which reads the manipulated record out of the database and deletes it
- Test scenario or test procedure specification
  - A document specifying a sequence of actions for the execution of a test; also known as a test script
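A minimal sketch of such a scenario, assuming an in-memory SQLite table (the schema and names are illustrative): the postcondition of each test case is the precondition of the next one.

```python
# Sketch of three chained test cases forming a test scenario.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, value TEXT)")

# Test case 1: write a data record into the database
conn.execute("INSERT INTO records (id, value) VALUES (1, 'initial')")

# Test case 2: manipulate that record
conn.execute("UPDATE records SET value = 'changed' WHERE id = 1")

# Test case 3: read the manipulated record back, then delete it
(value,) = conn.execute("SELECT value FROM records WHERE id = 1").fetchone()
assert value == "changed"      # expected result from the specification
conn.execute("DELETE FROM records WHERE id = 1")
conn.close()
```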
12 Test Effort
- Complete (exhaustive) testing is not possible
- Test effort typically amounts to between 25% and 50% of the total development effort
13 Fundamental Test Process
- [Flow diagram] Begin -> Planning and control -> Analysis and design -> Implementation and execution -> Evaluation of the test exit criteria -> Post-testing activities -> End
14 Test Planning and Control
- Planning of the test process starts at the beginning of the software development
- What should be done:
  - Define and agree on the mission and objectives of testing
  - Estimate the necessary resources
    - Employees needed
    - Tasks to be executed
    - Time of execution
    - How much time is needed
    - Equipment and utilities
  - Document the results in the test plan
  - Provide the necessary training
15 Test Planning and Control
- Test control is
  - Monitoring of the test activities
  - Comparing what actually happens during the project with the plan
  - Reporting the status of deviations from the plan
  - Taking any actions necessary to meet the mission and objectives in the new situation
- The test plan must be continuously updated, taking into account the feedback from monitoring and control
- Determine the test strategy; priorities are set based on risk assessment
- The goal is the optimal distribution of the test effort to the right parts of the software system
16 Test Analysis and Design
- Review the test basis (the specification of what should be tested)
- Detail the test strategy
- Develop the test cases
  - Logical test cases have to be defined first
  - The logical test cases are then translated into concrete test cases (actual inputs are chosen)
- The test basis guides the selection of logical test cases with each test technique
- Test cases are determined
  - From the specification of the test object (black box test design techniques)
  - By analyzing the source code (white box test design techniques)
17 Test Analysis and Design
- For each test case
  - Describe the precondition
  - Define the expected result and behavior
- The tester must obtain this information from an adequate source
  - Test oracle: a mechanism for predicting the expected result
  - The specification can serve as the test oracle
  - The expected data is derived from the input data by calculation or analysis, based on the specification of the test object
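A minimal sketch of a specification-based oracle (the square-root example and names are invented for illustration): the expected behavior is derived from the specification, independently of the implementation under test.

```python
# Sketch of a test oracle derived from a specification.
import math

def sqrt_under_test(x: float) -> float:      # test object (assumed)
    return math.sqrt(x)

def oracle_holds(x: float, actual: float) -> bool:
    # Expected behavior derived from the specification: the result must be
    # non-negative and its square must equal the input (within a tolerance).
    return actual >= 0 and abs(actual * actual - x) < 1e-9

for x in (0.0, 2.0, 144.0):                  # test data
    assert oracle_holds(x, sqrt_under_test(x))
print("all oracle checks passed")
```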
18 Test Analysis and Design
- Test cases can be differentiated into
  - Test cases for examining the specified behavior, output, and reaction; this includes test cases that examine the handling of exception and error cases
  - Test cases for examining the reaction of the test object to invalid and unexpected inputs or conditions, for which no specific exception handling exists
- Prepare the test infrastructure and environment needed to execute the test object
20 Example test cases
- On analyzing the text, the following cases for the bonus, depending on company affiliation, can be identified:
  - Company affiliation < 3 results in a bonus of 0%
  - 3 < company affiliation < 5 results in a bonus of 50%
  - 5 < company affiliation < 8 results in a bonus of 75%
  - Company affiliation > 8 results in a bonus of 100%
- Test cases need to be created for these cases
21 Logical test cases

Test case number | Input X (company affiliation) | Expected result (bonus in %)
1                | X < 3                         | 0
2                | 3 < X < 5                     | 50
3                | 5 < X < 8                     | 75
4                | X > 8                         | 100
22 Concrete test cases

Test case number | Input X (company affiliation) | Expected result (bonus in %)
1                | 2                             | 0
2                | 4                             | 50
3                | 7                             | 75
4                | 13                            | 100
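A minimal sketch turning the concrete test cases into executable tests. calculate_bonus() is an assumed implementation of the bonus rule; the slides leave the boundary values (3, 5, 8) open, so the sketch uses >= for the lower bounds.

```python
# Concrete test cases from the table above, expressed as parametrized tests.
import pytest

def calculate_bonus(years: int) -> int:      # assumed test object
    if years >= 8:
        return 100
    if years >= 5:
        return 75
    if years >= 3:
        return 50
    return 0

@pytest.mark.parametrize(
    "test_case, years, expected_bonus",
    [(1, 2, 0), (2, 4, 50), (3, 7, 75), (4, 13, 100)],
)
def test_bonus(test_case, years, expected_bonus):
    assert calculate_bonus(years) == expected_bonus
```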
23 Test Implementation and Execution
- These are the activities in which test conditions and logical test cases are transformed into concrete test cases, all the details of the environment are set up to support the test execution activity, and the tests are executed and logged
- Test cases are executed according to the test plan
  - Group test cases into test suites for efficient execution and an easier overview
- Test harnesses, test drivers, and test simulators must be programmed, built, acquired, or set up
- Test harness
  - A test environment comprising the stubs and drivers needed to conduct a test
24 Test Implementation and Execution
- Test driver
  - A software component or test tool that replaces a program that takes care of the control and/or the calling of a component or system
- Test stub
  - A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls it or is otherwise dependent on it; it replaces a called component (see the sketch after this list)
- It is recommended to start testing with the main functionality
  - If failures or deviations show up at this point, there is no point in continuing until they are corrected
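A minimal sketch (not from the slides; all names are invented) of a handwritten driver and stub: the component under test depends on a payment service that is not available yet, so a stub replaces the called component, and a driver controls and calls the component under test.

```python
class PaymentServiceStub:
    """Test stub: skeletal replacement for the real, called payment service."""
    def charge(self, amount: float) -> bool:
        return True          # always succeeds; just enough for the test

def process_order(amount: float, payment_service) -> str:
    # Component under test: depends on the (stubbed) payment service.
    return "confirmed" if payment_service.charge(amount) else "rejected"

def test_driver() -> None:
    """Test driver: controls and calls the component under test."""
    result = process_order(49.90, PaymentServiceStub())
    print("PASS" if result == "confirmed" else "FAIL")

if __name__ == "__main__":
    test_driver()
```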
25 Test Implementation and Execution
- Test execution must be exactly and completely logged
  - Logging every test case run
  - Logging its result (success or failure) for later analysis
- Test log
  - A chronological record of relevant details about the execution of tests
    - Who tested
    - Which part
    - When it was tested
    - How intensively
    - With which result
- Reproducibility is important
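A minimal sketch (the field names and file format are illustrative assumptions) of writing such a chronological test log: who tested which part, when, and with which result.

```python
# Sketch of appending one test-run record to a CSV test log.
import csv
from datetime import datetime, timezone

LOG_FIELDS = ["timestamp", "tester", "test_object", "test_case", "result"]

def log_test_run(path, tester, test_object, test_case, result):
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:                      # new log file: write the header
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tester": tester,
            "test_object": test_object,
            "test_case": test_case,
            "result": result,
        })

log_test_run("test_log.csv", "tester_a", "calculate_bonus", "TC-1", "pass")
```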
26 Test Implementation and Execution
- If a difference shows up between the expected and the actual result
  - Is it really a failure?
  - If yes, document the failure and make a rough analysis of the possible cause; this may require additional test cases
  - The cause can also be a test problem rather than a defect
    - An erroneous or inexact test specification
    - Problems with the test infrastructure or the test cases
    - An inaccurate test execution
- Test coverage should be measured
  - Appropriate tools should be used
27 Test Implementation and Execution
- Incident management is invoked; based on the severity of a failure, the priority of the fault correction must be decided
- After the correction, it must be examined whether the fault has really been corrected and no new faults have been introduced
  - Re-execution of a test that previously failed in order to confirm a defect fix, execution of a corrected test, and/or regression tests
- If necessary, new test cases must be specified to examine the modified or new source code
28 Test Implementation and Execution
- When there is not enough time to execute all specified test cases
  - Go for risk-based testing
  - Select reasonable test cases to detect as many critical failures as possible
  - Thus, prioritize test cases (a sketch follows below)
    - This has the advantage that the most important test cases are executed first
    - Important problems are found and corrected early
29 Evaluation of the test exit criteria and reporting
- Is it the end of the test?
  - It may result in normal termination if all exit criteria are met, or
  - It may be decided that additional test cases should be run, or
  - That the criteria were set at an unreasonably high level
- Examples of exit criteria
  - Test coverage criteria
    - E.g. 80% of the statements executed (statement coverage)
  - Failure rate or defect detection percentage
- It must be decided whether the exit criteria are fulfilled
  - Further tests must be executed if at least one criterion is not fulfilled
- Write a test summary report for the stakeholders
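A minimal sketch (the thresholds and counts are invented) of evaluating two such exit criteria: statement coverage and defect detection percentage (DDP), here taken as the share of all known defects that were found by testing.

```python
# Sketch of checking exit criteria against agreed thresholds.

def statement_coverage(executed: int, total: int) -> float:
    return 100.0 * executed / total

def defect_detection_percentage(found_by_testing: int, found_later: int) -> float:
    # DDP = defects found by testing / all known defects (testing + afterwards)
    return 100.0 * found_by_testing / (found_by_testing + found_later)

coverage_ok = statement_coverage(executed=412, total=500) >= 80.0              # 82.4%
ddp_ok = defect_detection_percentage(found_by_testing=45, found_later=5) >= 85.0  # 90.0%

print("exit criteria fulfilled" if coverage_ok and ddp_ok else "continue testing")
```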
30 Test Closure Activities
- Gathering experiences to analyze and use for further projects
  - When was the software system released?
  - When was the test finished?
  - When was a milestone reached?
- Conservation of the testware for the future
31 General Principles of Testing
- Testing shows the presence of defects, not their absence
- Exhaustive testing is not possible
- Testing activities should start as early as possible
- Defects tend to cluster together
- The pesticide paradox: tests repeated over and over eventually stop finding new defects
- Testing is context dependent
- The fallacy of assuming that no failures means a useful system