Software Testing

1
Software Testing
  • Testing Strategies

2
Software Testing Strategies
  • Testing was the first software quality assurance tool applied to
    control the software product's quality before its shipment
  • November 1994 survey by Perry
  • 24% of the project development budget was for testing
  • 32% of the project management budget was for testing
  • 27% of project time was scheduled for testing
  • (actually they allocated 45% of their schedule
    time for testing)

3
Outlines
  • 2.1 Introduction
  • Testing Objectives
  • 2.2 Software Testing Strategies
  • 2.3 Software Test Classifications
  • 2.4 White Box Testing
  • 2.5 Black Box Testing

4
Objective
  • Explain testing objectives
  • Discuss the difference between the various
    testing strategies
  • Describe the concept of black box and white box
    testing

5
Outlines
  • 2.1 Introduction
  • Testing Objectives
  • 2.2 Software Testing Strategies
  • 2.3 Software Test Classifications
  • 2.4 White Box Testing
  • 2.5 Black Box Testing

6
Definition Revisited
  • Myers
  • Testing is the process of executing a program
    with the intent of finding errors
  • Paul Jorgensen
  • Testing is obviously concerned with errors,
    faults, failures and incidents. A test is the act
    of exercising software with test cases, with the
    objectives of finding failures and demonstrating
    correct execution
  • ISO
  • Technical operation that consists of the
    determination of one or more characteristics of a
    given product, process or service according to a
    specified procedure

7
  • What are the objectives of Software Testing?

8
Direct Objectives
  • To identify and reveal as many errors as possible
    in the tested software
  • To bring the tested software, after correction of
    the identified errors and retesting, to an
    acceptable level of quality
  • To perform the required tests efficiently and
    effectively, within budgetary and scheduling
    limitations

9
Indirect Objective
  • To compile a record of software errors for use in
    error prevention (by corrective and preventive
    actions)

10
Outlines
  • 2.1 Introduction
  • Testing Objectives
  • 2.2 Software Testing Strategies
  • 2.3 Software Test Classifications
  • 2.4 White Box Testing
  • 2.5 Black Box Testing

11
Stages of Testing
  • Module or unit testing
  • Integration testing
  • Function testing
  • Performance testing
  • Acceptance testing
  • Installation testing

12
Testing Strategies
  • We begin by 'testing-in-the-small' and move
    toward 'testing-in-the-large'
  • For conventional software
  • The module (component) is our initial focus
  • Integration of modules follows
  • For OO software
  • The OO class, which includes attributes and operations
    and implies communication and collaboration

13
Unit Testing
  • Program reviews.
  • Formal verification.
  • Testing the program itself.
  • black box and white box testing.

14
Black Box or White Box?
  • Number of logic paths - determines whether white box
    testing is feasible.
  • Nature of input data.
  • Amount of computation involved.
  • Complexity of algorithms.

15
Unit Testing Details
  • Interfaces tested for proper information flow.
  • Local data are examined to ensure that integrity
    is maintained.
  • Boundary conditions are tested.
  • Basis path testing should be used.
  • All error handling paths should be tested.
  • Drivers and/or stubs need to be developed to test
    incomplete software.
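A minimal sketch of the last point, with illustrative names not taken from the slides: `total_price` is the unit under test, its real dependency is unfinished, and a stub stands in for it so the unit can still be exercised.

```python
# Illustrative sketch: `total_price` is the unit under test; its real
# dependency `fetch_rate` is not implemented yet.
def fetch_rate():
    raise NotImplementedError("real rate module not written yet")

def total_price(amount, rate_source=fetch_rate):
    # Unit under test: applies a tax rate obtained from a dependency.
    return round(amount * (1 + rate_source()), 2)

def stub_rate():
    # Stub: emulates the called function with a fixed, known value.
    return 0.10

# Driver: exercises the unit with the stub in place of the real dependency.
assert total_price(100.0, rate_source=stub_rate) == 110.0
```

When the real rate module is finished, the same test can be re-run against it without changing the unit under test.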

16
Generating Test Data
  • Ideally want to test every permutation of valid
    and invalid inputs
  • Equivalence partitioning it often required to
    reduce to infinite test case sets
  • Every possible input belongs to one of the
    equivalence classes.
  • No input belongs to more than one class.
  • Each point is representative of class.
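The three properties above can be illustrated with a small, hypothetical partition of an age field; the 0-120 valid range is an assumed specification, not from the slides.

```python
# Assumed spec for illustration: an age field is valid in the range 0-120.
# Every input falls into exactly one class, and one representative per
# class is enough for an equivalence-partitioning test set.
def age_class(age):
    if age < 0:
        return "invalid: negative"
    if age <= 120:
        return "valid"
    return "invalid: too large"

# One representative point per equivalence class.
representatives = {-1: "invalid: negative", 35: "valid", 200: "invalid: too large"}
```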

17
Integration Testing Strategies
  • Test methodologies may vary but two basic testing
    strategies applied
  • Test the software in its entirety
  • Big Bang Testing
  • Test the software in modules - Unit Tests,
    Integration Tests, Systems Tests
  • Incremental testing
  • Two strategies for incremental testing
  • Bottom - up testing (test harness).
  • Top - down testing (stubs).
  • Sandwich Testing

18
Top-Down Integration Testing
  • The main program is used as a test driver, and stubs are
    substitutes for components directly subordinate
    to it.
  • Subordinate stubs are replaced one at a time with
    real components (following a depth-first or
    breadth-first approach).
  • Tests are conducted as each component is
    integrated.
  • On completion of each set of tests, another stub
    is replaced with a real component.
  • Regression testing may be used to ensure that new
    errors are not introduced.

19
Top Down Integration
[Diagram: module hierarchy with A at the top, subordinates B, F, G, and lower-level modules C, D, E]
  • The top module is tested with stubs
  • Stubs are replaced one at a time, "depth first"
  • As new modules are integrated, some subset of tests is re-run
  • A stub is a piece of code emulating a called
    function
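A minimal sketch of this process, using hypothetical modules A and B (the letters echo the diagram; the logic is invented): A is first tested against a stub for B, then the stub is swapped for the real component and the tests are re-run.

```python
def stub_b(x):
    # Stub standing in for the not-yet-integrated component B.
    return 0

def real_b(x):
    # The real component that later replaces the stub.
    return x * 2

def module_a(x, b=stub_b):
    # Top module; `b` is the directly subordinate component.
    return b(x) + 1

# Phase 1: test A against the stub.
assert module_a(3) == 1
# Phase 2: replace the stub with the real B and re-run (regression).
assert module_a(3, b=real_b) == 7
```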

20
Top-Down Advantages and Disadvantages
  • Top-down integration supports fault isolation
  • Major design flaws show up early. Modules of a
    product can be divided into two groups:
  • the logic modules and the operational modules.
  • By coding and testing the logic modules before
    the operational modules, top-down integration
    will expose any major design faults early in the
    development process
  • The main disadvantage of top-down integration is
    that potentially reusable modules may not be
    adequately tested.

21
Bottom-Up Integration Testing
  • Low level components are combined in clusters
    that perform a specific software function.
  • A driver (control program) is written to
    coordinate test case input and output.
  • The cluster is tested.
  • Drivers are removed and clusters are combined
    moving upward in the program structure.

22
Bottom-Up Integration
[Diagram: module hierarchy with A at the top, subordinates B, F, G; worker modules C, D, E form a cluster at the bottom]
  • Worker modules are grouped into builds and integrated
  • Drivers are replaced one at a time, "depth first"
  • A driver is a piece of code emulating a calling
    function
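A minimal sketch of a driver, assuming a hypothetical `worker` module as the low-level cluster under test; names and test data are illustrative.

```python
def worker(values):
    # Illustrative low-level worker module: computes an average.
    return sum(values) / len(values)

def driver():
    # Driver: emulates the calling function, coordinating test case
    # input and checking output for the cluster.
    cases = [([2, 4], 3.0), ([1, 1, 1], 1.0)]
    return all(worker(inp) == expected for inp, expected in cases)
```

Once the cluster passes, the driver is removed and the cluster is combined with the modules above it.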

23
Bottom-Up Advantages and Disadvantages
  • The operational modules are thoroughly tested
    when using a bottom-up strategy. It also
    provides fault isolation, as does top-down
    integration.
  • Major design faults will be left undetected until
    late in the development, since the logic modules
    are integrated last.
  • This may result in large costs in redesigning and
    recoding substantial portions of the project.

24
Sandwich Testing
  • Combines the two so as to capitalize on their
    strengths and minimize their weaknesses
  • Since neither top-down nor bottom-up implementation/
    integration is suitable for all the modules, the
    solution is to partition them

25
Sandwich Testing
[Diagram: top modules A, B, F, G integrated top-down with stubs; worker modules C, D, E grouped into a cluster and integrated bottom-up]
  • Top modules are tested with stubs
  • Worker modules are grouped into builds and integrated
26
Outlines
  • 2.1 Introduction
  • Testing Objectives
  • 2.2 Software Testing Strategies
  • 2.3 Software Test Classifications
  • 2.4 White Box Testing
  • 2.5 Black Box Testing

27
Software Test Classification
  • Software tests may be classified
  • according to the testing concept, or
  • according to the requirement classification

28
According to Testing Concept
  • What concept to test?
  • Output
  • Output is used to achieve an acceptable level of
    quality
  • Structure of the software
  • The internal structure and the calculations involved
    are included for satisfactory testing
  • Two classes have been developed
  • Black box testing
  • Identifies bugs according to software malfunctioning
  • Functionality testing
  • White box testing
  • Examines internal calculation paths in order to
    identify bugs
  • Structural testing

29
According to Requirement
  • The test is carried out to ensure full coverage
    of the respective requirements
  • Operation
  • Correctness, Reliability, Efficiency, Integrity,
    Usability
  • Revision
  • Maintainability, Flexibility, Testability
  • Transition
  • Portability, Reusability, Interoperability
  • From the requirements we can define the test
    classification
  • White box and black box testing can then be applied
    as appropriate

30
Test Classification according to Requirement
  • Correctness: 1.1 Output correctness tests, 1.2 Documentation tests,
    1.3 Availability tests, 1.4 Data processing and calculations
    correctness tests, 1.5 Software qualification tests
  • Reliability: 2. Reliability tests
  • Efficiency: 3. Stress tests (load tests and durability tests)
  • Integrity: 4. Software system security tests
  • Usability: 5.1 Training usability tests, 5.2 Operational usability tests
  • Maintainability: 6. Maintainability tests
  • Flexibility: 7. Flexibility tests
  • Testability: 8. Testability tests
  • Portability: 9. Portability tests
  • Reusability: 10. Reusability tests
  • Interoperability: 11.1 Software interoperability tests,
    11.2 Equipment interoperability tests

31
IEEE definitions
  • Black box testing
  • Testing that ignores the internal mechanism of
    the system or component and focuses solely on
    the outputs in response to selected inputs and
    execution conditions
  • Testing conducted to evaluate the compliance of a
    system or component with specified functional
    requirements
  • White box testing
  • Testing that takes into account the
    internal mechanism of a system or component

32
Outlines
  • 2.1 Introduction
  • Testing Objectives
  • 2.2 Software Testing Strategies
  • 2.3 Software Test Classifications
  • 2.4 White Box Testing
  • 2.5 Black Box Testing

33
White Box Testing
  • White box testing requires verification of every
    program statement and comment
  • White box testing enables performance of
  • Data processing and calculation correctness tests
  • Every computation operation must be examined
  • Software qualification tests
  • Software code (including comments)
  • Maintainability tests
  • Failure cause detection, adaptation support,
    software improvement
  • Reusability tests
  • Reuse in future software packages

34
Data processing and calculation correctness tests
  • Checking the operations performed by each test case
  • Different paths
  • Which paths are going to be tested?
  • Two approaches
  • Path coverage
  • Path coverage of a test is measured by the
    percentage of all possible program paths included
    in planned testing.
  • Line coverage
  • Line coverage of a test is measured by the
    percentage of program code lines included in
    planned testing.

35
Path coverage
  • Different paths are created by condition statements
  • IF-THEN-ELSE
  • DO-WHILE
  • DO-UNTIL
  • Consider a simple module with 10 conditional
    statements allowing two options each
  • This creates 2^10 = 1024 different paths
  • Example
  • Taximeter

36
Taxi meter
  • 1. Minimal fare: $2. This fare covers the distance
    traveled up to 1000 yards and waiting time
    (stopping for traffic lights or traffic jams,
    etc.) of up to 3 minutes.
  • 2. For every additional 250 yards or part of it:
    25 cents.
  • 3. For every additional 2 minutes of stopping or
    waiting or part thereof: 20 cents.
  • 4. One suitcase: no charge; each additional
    suitcase: $1.
  • 5. Night supplement: 25%, effective for
    journeys between 21.00 and 06.00.
  • 6. Regular clients are entitled to a 10% discount
    and are not charged the night supplement.
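The rules above can be transcribed into a sketch of the fare calculation. The rounding behavior and the order in which the discount or supplement is applied are my assumptions where the slide is silent.

```python
import math

def taxi_fare(distance_yd, wait_min, suitcases, night, regular):
    fare = 2.0  # minimal fare: first 1000 yards and 3 minutes of waiting
    if distance_yd > 1000:
        # 25 cents per additional 250 yards or part thereof
        fare += 0.25 * math.ceil((distance_yd - 1000) / 250)
    if wait_min > 3:
        # 20 cents per additional 2 minutes or part thereof
        fare += 0.20 * math.ceil((wait_min - 3) / 2)
    if suitcases > 1:
        fare += 1.0 * (suitcases - 1)  # first suitcase free
    if regular:
        fare *= 0.90   # 10% discount; regulars pay no night supplement
    elif night:
        fare *= 1.25   # 25% night supplement (21.00-06.00)
    return round(fare, 2)
```

Each `if` above is one of the flow chart's decision points, which is what makes the module a natural path-coverage example.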

37
Flow Chart
[Flow chart of the taximeter fare calculation, nodes 1-17:]
  • 1 Charge the minimal fare
  • 2 Distance: D ≤ 1000 or D > 1000 (branches 3, 4)
  • 5 Waiting time: WT ≤ 3 or WT > 3 (branches 6, 7)
  • 8 No. of suitcases: S ≤ 1 or S > 1 (branches 9, 10)
  • 11 Regular client? No / Yes (branches 12, 13)
  • 14 Night journey? No / Yes (branches 15, 16)
  • 17 Print receipt
38
Examples
  • 24 different paths may be indicated
  • In order to achieve full path coverage of the
    software module, we have to prepare at least 24
    test cases
  • We need to find a minimum number of paths that
    cover all lines of code
  • Line coverage

39
Flow Graph
[Flow graph of the taximeter module: nodes 1-17 with branch regions R1-R6]
40
The Minimum Number of Paths
  • A minimum of 3 paths is required
[Flow graph of the taximeter module repeated: nodes 1-17 with regions R1-R6, showing the three covering paths]
41
Advantages and Disadvantages of White Box Testing
  • Advantages
  • Direct determination of software correctness as
    expressed in the processing paths, including
    algorithms.
  • Allows performance of line coverage follow-up.
  • Ascertains the quality of coding work and its
    adherence to coding standards.
  • Disadvantages
  • The vast resources utilized, much above those
    required for black box testing of the same
    software package.
  • The inability to test software performance
    in terms of availability (response time),
    reliability, load durability, etc.

42
Outlines
  • 2.1 Introduction
  • Testing Objectives
  • 2.2 Software Testing Strategies
  • 2.3 Software Test Classifications
  • 2.4 White Box Testing
  • 2.5 Black Box Testing

43
Test Classification according to Requirement
  • Correctness: 1.1 Output correctness tests, 1.2 Documentation tests,
    1.3 Availability tests, 1.4 Data processing and calculations
    correctness tests, 1.5 Software qualification tests
  • Reliability: 2. Reliability tests
  • Efficiency: 3. Stress tests (load tests and durability tests)
  • Integrity: 4. Software system security tests
  • Usability: 5.1 Training usability tests, 5.2 Operational usability tests
  • Maintainability: 6. Maintainability tests
  • Flexibility: 7. Flexibility tests
  • Testability: 8. Testability tests
  • Portability: 9. Portability tests
  • Reusability: 10. Reusability tests
  • Interoperability: 11.1 Software interoperability tests,
    11.2 Equipment interoperability tests

44
Black Box Testing
  • Apart from correctness tests (those two),
    maintainability and reusability, most other
    testing classes are unique to black box testing
  • This explains the importance of black box testing
  • However, due to the special characteristics of
    each testing strategy, black box testing cannot
    automatically substitute for white box testing

45
Black-Box Testing
  • Black-box testing is testing from a functional or
    behavioral perspective to ensure a program meets
    its specification
  • Testing usually conducted without knowledge of
    software implementation - system treated as a
    black box
  • Black-box test design techniques include
  • equivalence partitioning,
  • boundary value analysis,
  • cause-effect graphing,
  • random testing

46
How much testing is adequate?
  • Completely validating IEEE 754 floating-point
    division requires 2^64 test cases!
  • float divide(float x, float y)
  • From practical and economic perspectives,
    exhaustive testing is usually not possible
  • Which software pieces should we test?
  • Which test cases should we choose?
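Since exhaustive testing is infeasible, one practical alternative listed later (random testing) can be sketched for the `divide` example; the sampled ranges, trial count, and tolerance here are arbitrary choices, not from the slides.

```python
import random

def divide(x, y):
    return x / y

def random_test(trials=1000, seed=0):
    # Sample the input space instead of enumerating all 2^64 pairs, and
    # check an algebraic property: (x / y) * y should recover x (within
    # floating-point tolerance).
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.uniform(-1e6, 1e6)
        y = rng.uniform(1e-3, 1e6)  # keep y away from zero
        assert abs(divide(x, y) * y - x) <= 1e-6 * max(1.0, abs(x))
    return trials
```

A property check like this answers "which test cases should we choose?" statistically rather than exhaustively; it complements, but does not replace, the systematic selection methods that follow.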

47
Equivalence class partitioning (EC)
  • A black box method aimed at increasing the
    efficiency of testing and, at the same time,
    improving coverage of potential error conditions.

48
Boundary Value Analysis
  • Based on experience / heuristics
  • Testing boundary conditions of equivalence
    classes is more effective
  • Choose input boundary values as equivalence
    classes representatives
  • Choose inputs that invoke output boundary values
  • Examples
  • (0, 10] → validate using 0, 1, 2, 9, 10, 11
  • Read up to 5 elements → validate reading 0, 1, 4,
    5, 6 elements
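The pattern in these examples can be generalized; the helper below (my own generalization, not the slide's method) emits boundary candidates for a closed integer range [low, high]: each boundary, one step inside, and one step outside.

```python
def boundary_values(low, high, step=1):
    # For a closed valid range [low, high]: the boundary values, the
    # values one step inside, and the invalid values one step outside.
    return sorted({low - step, low, low + step, high - step, high, high + step})

# boundary_values(0, 10) -> [-1, 0, 1, 9, 10, 11]
```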

49
BVA as an equivalence partitioning extension
  • Choose one (or more) arbitrary value(s) in each
    equivalence class
  • Choose valid values exactly on lower and upper
    boundaries of equivalence class
  • Choose invalid values immediately below and above
    each boundary (if applicable)

50
Equivalence Class Partitioning (EC)
  • An equivalence class (EC) is a set of input
    variable values that produce the same output
    results or that are processed identically.
  • EC boundaries are defined by a single numeric or
    alphabetic value, a group of numeric or
    alphabetic values, a range of values, and so on.
  • An EC that contains only valid states is defined
    as a "valid EC," whereas an EC that contains only
    invalid states is defined as an "invalid EC."
  • In cases where a program's input is provided by
    several variables, valid and invalid ECs should
    be defined for each variable.

51
Equivalence Class Partitioning (EC)
  • According to the equivalence class partitioning
    method
  • Each valid EC and each invalid EC are included in
    at least one test case.
  • Definition of test cases is done separately for
    the valid and invalid ECs.

52
Equivalence Class Partitioning (EC)
  • In defining a test case for the valid ECs, we try
    to cover as many new ECs as possible in the
    same test case.
  • In defining invalid ECs, we must assign one test
    case to each new invalid EC, as a test case
    that includes more than one invalid EC may not
    allow the tester to distinguish between the
    program's separate reactions to each of the
    invalid ECs.
  • Test cases are added as long as there are
    uncovered ECs.
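The rules above can be sketched as a test-case builder; the greedy packing of valid ECs and the dictionary shapes are illustrative assumptions, not a prescribed algorithm from the slides.

```python
def build_test_cases(valid_ecs, invalid_ecs, defaults):
    # valid_ecs / invalid_ecs: {variable: [representative values]}
    # defaults: a known-valid value for every variable.
    cases = []
    # Valid ECs: each case covers a new valid EC of every variable,
    # so few cases cover many ECs.
    n = max(len(v) for v in valid_ecs.values())
    for i in range(n):
        cases.append({var: vals[min(i, len(vals) - 1)]
                      for var, vals in valid_ecs.items()})
    # Invalid ECs: one test case each, all other variables kept valid,
    # so the program's reaction to that EC is unambiguous.
    for var, vals in invalid_ecs.items():
        for bad in vals:
            case = dict(defaults)
            case[var] = bad
            cases.append(case)
    return cases
```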

53
Example: Ticket Price
  • Ticket price depends on four variables
  • Day (weekday, weekend)
  • Visitor's status (OT = one-time, M = member)
  • Entry hour (6-19, 19.01-24)
  • Visitor's age (up to 16, 16.01-60, 60.01-120)
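With the class counts above, the valid ECs combine into 2 x 2 x 2 x 3 = 24 cases; a quick enumeration (the labels are shorthand for the classes, not the fare table's exact wording):

```python
from itertools import product

days = ["weekday", "weekend"]
statuses = ["OT", "M"]          # one-time, member
hours = ["6-19", "19.01-24"]
ages = ["<=16", "16.01-60", "60.01-120"]

# Every combination of one valid EC per variable.
combinations = list(product(days, statuses, hours, ages))
# len(combinations) == 24
```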

54
Entrance Ticket Price table
55
Test Cases: The Ticket Price
56
Advantage of Black Box Testing
  • Advantages
  • Allows us to carry out the majority of testing
    classes, most of which can be implemented solely
    by black box tests, e.g. load tests and
    availability tests.
  • For testing classes that can be carried out by
    both white and black box tests, black box testing
    requires fewer resources.

57
Disadvantages of Black Box Testing
  • Disadvantages
  • Possibility that coincidental aggregation of
    several errors will produce the correct response
    for a test case, and prevent error detection.
  • Absence of control of line coverage. There is no
    easy way to specify the parameters of the test
    cases required to improve coverage.
  • Impossibility of testing the quality of coding
    and its strict adherence to the coding standards.