Dimensions of Testing
1
Dimensions of Testing Part 3
  • EE599 Software V&V
  • Winter 2006
  • Diane Kelly, Terry Shepard

2
Dimensions of Testing Part 3
3
Definitions: Strategy and Technique
  • Strategy
  • outlines in broad terms how to use testing to
    assess the extent to which one or more goals for
    a product have been met
  • Techniques
  • implement strategies, provide the data needed to
    do the assessment

4
Strategies and Techniques: Details
  • Strategies
  • How is the choice of a strategy affected by the
    Dimensions of Testing?
  • consider each of the Dimensions in turn
  • Examples of Strategies
  • Techniques
  • lists taken from a variety of sources
  • mostly Marick [4], Beizer [5]

5
All strategies are based on the fact that testing
works!
  • Why does testing work at all? Hoare [1]
  • defects are bimodal
  • means that nearly all defects are either
  • found quickly because they are on frequently used
    paths
  • not found for a long time because they are
    obscure/complex
  • most software is reasonable and predictable in
    most circumstances
  • this idea has not been expressed in quantitative
    terms
  • it is not the same as measures of reliability
  • testing has a chance to be feasible because the
    number of paths of interest is much less than the
    size of the state space
  • most of the state space is unreachable and only a
    small fraction of the reachable part is of
    interest

6
Examples of Strategies
  • Exploratory testing
  • Requirements based test design
  • Design based testing
  • Testing in the small
  • Testing in the large
  • integration
  • beta
  • installation

details later
7
Effect of Dimensions on Choosing Strategies and
Techniques
  • Dimensions define a large space
  • position of a given development/maintenance
    activity in that space determines what testing
    strategies are available
  • strategy or strategies chosen in turn may limit
    which techniques can be used

8
Choice of Strategy: Effect of Process (1)
  • There are four levels of testing process,
    according to Kit [3]
  • Full testing
  • consideration of testing and allocation of
    resources starts with requirements or before
    (200% payback)
  • Partial testing
  • starts with functional design (150% payback)
  • Endgame testing
  • validation oriented
  • Audit-level testing
  • audit of plans, procedures, products; checking
    after the fact

9
Choice of Strategy: Effect of Process (2)
  • If development is incremental and iterative, it
    is natural that testing will be too
  • this may be essential to the development process
  • If development is based on a contract and
    acceptance testing, that influences the focus of
    testing
  • Management decisions affect process in other ways
    as well
  • set priorities on availability of resources for
    testing
  • money, staff with appropriate skill set, time
  • release with high defect rates because of time to
    market pressures

10
Choice of Strategy: Effect of Process (3)
  • If the development process is risk driven, then
    the testing process is likely to be risk driven
    as well
  • test whatever represents the highest risk if it
    fails
  • Three kinds of risk [14]
  • Project
  • e.g. not enough of a particular kind of skill
  • may need to test weak parts of code
  • may need more testers if there are too few
    experienced testers
  • Business
  • e.g. test flexibility and modifiability for a
    high volatility market
  • e.g. test functionality that is most important to
    customers
  • Technical
  • new language, new compiler: test the development
    tools

11
Choice of Strategy: Effect of Ilities
  • Purpose of testing may be to focus on particular
    ilities
  • Testability is special: it affects design and
    coding
  • high testability means good control and
    observation: being able to get to a particular
    state in the code, and then having some
    observable effects in that state

12
Choice of Strategy: Effect of C/E (1)
  • safety/mission/business critical context
  • test everything to some stopping criteria
  • executable design
  • allows testing of design
  • hw/os/sw tool environment
  • determines level of tool support for testing
  • nature of market
  • e.g. beta test of shrink-wrapped software
  • fewer testing strategies for single user software

13
Choice of Strategy: Effect of Purpose
  • Strategy is affected by the focus of purpose
  • Focus on ilities of the system
  • safety, performance, functionality, reliability,
    usability, etc.
  • Focus on users' expectations
  • e.g. users' tolerance of failures
  • test frequently used paths until error rate
    acceptable
  • test scenarios that are most important to
    customer/user
  • Focus on maintenance issues
  • portability, versioning, installability, user
    support, etc.
  • Focus on management issues
  • business risk, economics, testing policies

14
Choice of Strategy: Effect of Automation
  • Some automation is always present
  • A decision to go to a high degree of automation
    is a strategic decision
  • increases costs in return for more thorough
    testing
  • can be high risk
  • Success depends on
  • management support
  • money
  • staff with the right skill set
  • tools
  • shift to an automatable testing process

15
Choice of Strategy: Effect of Adequacy (1)
  • Are all testing strategies driven by coverage
    considerations?
  • Coverage can be applied to any type of document
  • e.g. all requirements have been tested
  • meaning touched at least once?
  • many different forms of traversal can be used
  • e.g. exploratory testing can be breadth first or
    depth first
  • It is common wisdom among testers that choosing
    tests to satisfy some code coverage measure is a
    very poor strategy
  • It may be good strategy to cover checklists
  • Strategy based on reaching reliability targets
    may lead to infeasible amounts of testing

16
Examples of Strategies
  • Exploratory testing
  • Requirements based test design
  • Design based testing
  • Testing in the small
  • Testing in the large
  • The balance between these is a strategic decision

17
Exploratory Testing [9] (1)
  • Defined as test design and test execution at the
    same time
  • test while you explore
  • Different from scripted testing
  • not defined in advance
  • not carried out precisely according to a plan
  • Output is a set of notes
  • about the product
  • what failures were found
  • how product was tested

18
Exploratory Testing [9] (2)
  • Elements of exploratory testing
  • discover and record
  • purposes and functions of the product
  • behaviour of the product
  • types of data it processes
  • areas of potential instability
  • strategies to operate and evaluate the product
  • heuristics to help decide how to test the product
  • produce deliverables that meet specified
    requirements
  • may be given broad tasks to do, questions to
    answer

19
Requirements Based Test Design (1)
  • Tests derived solely from external specifications
    [10]
  • use test cases to validate requirements
  • use test cases to focus on missing information
  • e.g. Unspecified behaviour or response to some
    input
  • use test cases to explore imprecise requirements
  • e.g. Is this the result I should have in this
    situation?

20
Requirements Based Test Design (2)
  • Associated techniques [10]
  • requirements validation matrix
  • matrix of requirements versus test cases
  • prototypes and models
  • confirm understanding of requirements
  • gain some experience with evolving application
  • requirements will change with experience

21
Requirements Based Test Design (3)
  • Requirements and testing: seven missing-link
    myths [34]
  • Requirements at the beginning, testing at the end
  • Testing isn't possible until the system exists
  • Requirements are used in testing but not vice
    versa
  • If writing tests is difficult, it's solely a test
    problem
  • Minor changes in the requirements won't affect
    the project (much)
  • Testers don't really need requirements
  • Testers can't test without requirements

22
Design Based Testing [10]
  • Objectives
  • Is the design solution the right choice?
  • Does the design solution fulfill the
    requirements?
  • Checking design for key ilities
  • e.g. performance: response time in database
    transactions
  • Associated techniques
  • focus on data and process paths within software
    structures
  • ease of testing dependent on testability built
    into design
  • testing of models
  • e.g. Rose RealTime

23
Testing in the Small [10]
  • Testing logical pieces of work done by one person
  • functions, subroutines, classes, methods, modules
    or other logically distinct parts of programs
  • e.g. class testing, module testing, unit testing
  • Objectives
  • Does the logic work properly?
  • Does the code do what was intended?
  • Can the code fail?
  • Is all the necessary logic present?
  • Are any functions missing?
  • Does the module do everything specified?

24
Techniques for Testing in the Small [10]
  • Code based coverage to select test cases
  • e.g. path analysis, structured basis testing
  • Test extremes and abnormalities
  • e.g. exception conditions, boundary conditions
    (see the sketch after this slide)
  • Random testing
  • e.g. test data generators or test data extractors
    for databases
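One way to picture "test extremes and abnormalities": a minimal sketch of
boundary-condition tests around a small clamp() helper. The function and its
limits are assumptions made purely for illustration, not course material.

/* Minimal sketch: boundary-condition tests for a hypothetical clamp()
   helper. The function and limits are assumptions for illustration. */
#include <assert.h>
#include <stdio.h>

static int clamp(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

int main(void)
{
    /* Test at, just inside, and just outside each boundary. */
    assert(clamp(-1, 0, 10) == 0);   /* below lower bound */
    assert(clamp( 0, 0, 10) == 0);   /* on lower bound    */
    assert(clamp( 1, 0, 10) == 1);   /* just inside       */
    assert(clamp( 9, 0, 10) == 9);   /* just inside       */
    assert(clamp(10, 0, 10) == 10);  /* on upper bound    */
    assert(clamp(11, 0, 10) == 10);  /* above upper bound */
    puts("boundary tests passed");
    return 0;
}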

25
Testing in the Large [10]
  • Combine units that have been tested in the small
  • Many different levels and names!
  • e.g. systems testing, component testing,
    functional testing, performance testing,
    integration testing, delivery testing, stress
    testing, parallel testing, field testing,
    acceptance testing, configuration testing,
    security testing, operational readiness,
    installation testing, compatibility/conversion
    testing, benchmarking, usability, alpha/beta
    testing, backup and recovery, replication
    testing, conformance testing

26
Integration Testing
  • Obtain a working skeleton as quickly as possible
  • establish confidence in interfaces between
    skeleton parts
  • demonstrate simple test cases and transactions
    are handled properly
  • Integrate to the skeleton
  • Choose number of modules to integrate at each step
  • Choose order of integration
  • Top level first
  • need stubs (see the sketch after this slide)
  • Critical modules first
  • Bottom levels first
  • need drivers
  • Functional groups
  • As-available modules
  • Complete skeleton
  • Increments planned to make testing easier
  • How much testing should be done on each skeleton?
  • What tests?
  • e.g. smoke tests, usability, functional, ...
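A minimal sketch of the stub/driver idea used during integration: a stub
stands in for a module that is not yet integrated, and a driver exercises
the partial skeleton. All names (fetch_rate_stub, convert) are hypothetical.

/* Minimal sketch of top-down integration: a stub replaces a lower-level
   module that is not yet integrated; a driver exercises the skeleton. */
#include <assert.h>
#include <stdio.h>

/* Stub for the real rate-lookup module: returns a fixed, known value
   so the calling module can be tested before the real one exists. */
static double fetch_rate_stub(const char *currency)
{
    (void)currency;
    return 2.0;
}

/* Module under integration: depends on the rate lookup. */
static double convert(double amount, const char *currency,
                      double (*fetch_rate)(const char *))
{
    return amount * fetch_rate(currency);
}

/* Driver: pushes a simple transaction through the skeleton. */
int main(void)
{
    assert(convert(10.0, "CAD", fetch_rate_stub) == 20.0);
    puts("skeleton smoke test passed");
    return 0;
}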

27
Beta Testing
  • Objectives [11]
  • marketing
  • testimonial/magazine reviews
  • profile customer uses
  • polish the design
  • confirm bugs
  • check performance or compatibility with specific
    equipment
  • feature feedback for next release

28
Installation Testing
  • Examples of purposes [13]
  • transformation of old data to a new format
  • integrity of production data files
  • creation of new data
  • changeover from old programs to new programs
  • deletion of old programs from production system
  • may involve running the two programs in parallel
  • may involve fail-safe back to old program
  • may involve compromises to security
  • updated system instructions
  • usability issues

29
Techniques
  • A technique is a structured method to guide
    creation of tests
  • Generate data to allow assessment of the product
    under test
  • Provide supporting information about the product
    under test
  • Outline of following slides
  • Black box/white box classification of techniques
  • Beizer's list of testing techniques
  • Two other testing techniques

30
Black box/White box
  • Two classes of techniques to derive test cases
  • black box
  • derive test cases without reference to the
    construction of the program
  • based on examining what the system is supposed to
    do
  • white box
  • derive test cases by examining the construction
    of the program

31
Black Box Functional Testing
  • Ideal: controlled, automated generation of test
    cases and expected results from specifications,
    and automated execution
  • In practice: specifications are poor
  • must therefore select test cases manually
  • in the worst case, from the code (white box bias)
  • if possible, calculate expected results by hand
  • can still use automated execution (see the
    table-driven sketch after this slide)
  • Black box testing won't tell you
  • how much of the code has been tested or how
    thoroughly
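One way to picture "automated execution" with manually selected cases: a
table of inputs and expected results taken from the specification, replayed
by a small harness. The function under test is a hypothetical stand-in.

/* Minimal sketch: black-box test cases derived from a specification
   and replayed automatically. The function under test is hypothetical. */
#include <stdio.h>

static int is_equilateral(int a, int b, int c)
{
    return a == b && b == c && a > 0;
}

struct test_case { int a, b, c; int expected; };

int main(void)
{
    /* Expected results come from the spec, not from the code. */
    static const struct test_case cases[] = {
        { 2, 2, 2, 1 },   /* all sides equal            */
        { 3, 4, 5, 0 },   /* scalene                    */
        { 0, 0, 0, 0 },   /* degenerate, not a triangle */
    };
    int failures = 0;
    for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int got = is_equilateral(cases[i].a, cases[i].b, cases[i].c);
        if (got != cases[i].expected) {
            printf("case %u failed: got %d expected %d\n",
                   i, got, cases[i].expected);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}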

32
White Box Structural Testing
  • Complementary to black box testing
  • Fine-grained testing to check operation of
    specific parts of the code
  • useful for assessing code coverage
  • White box testing may not find certain flavours
    of the following problems [11]
  • timing related problems
  • unanticipated error conditions
  • unexpected data combinations
  • invalid output
  • volume, load, hardware problems
  • interaction problems with other software or
    hardware

33
Usually want mixed white/black box testing
  • Unit testing
  • dual black box/white box approach
  • level of smallest granularity: want to look at
    the code
  • Above unit testing level: gray box
  • awareness of structure, but not all details
  • System testing: primarily black box
  • Ideally, test at all levels with an understanding
    of both the code structure and the application
    domain

34
Beizer's Classification of Testing Techniques
  • Transaction-flow testing (bb)
  • Data-flow testing (wb)
  • Domain testing (bb)
  • also called Partition or Equivalence testing
  • includes boundary value analysis
  • Syntax testing (bb)
  • Logic-based testing (wb/bb)
  • States, state graphs and Transition testing
    (wb/bb)
  • Mutation Testing (wb)
  • Cause effect graphing (bb)
  • Flowgraphs and Path testing (wb)
  • Statement, branch, path coverage (wb)
  • Basis path coverage (wb)

35
Two Other Testing Techniques
  • Error guessing
  • list possible errors or error-prone situations
  • develop tests based on these lists
  • Category partition method [12]
  • identifies parameters and constraints for each
    function
  • choose test cases from allowable combinations of
    parameters
  • and from unallowable combinations! (see the
    sketch after this slide)
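A minimal sketch of the category-partition idea for a hypothetical
copy(file, count) operation: enumerate choices for each parameter and
combine them, keeping the unallowable combinations as error-expecting
tests. Categories and choices here are assumptions for illustration.

/* Minimal sketch: choices per parameter combined into test frames,
   including disallowed combinations. Everything here is hypothetical. */
#include <stdio.h>

int main(void)
{
    const char *file_choices[]  = { "exists", "missing", "no-permission" };
    const char *count_choices[] = { "zero", "one", "many", "negative" };

    for (unsigned f = 0; f < 3; f++)
        for (unsigned c = 0; c < 4; c++) {
            /* "negative" count and unreadable files are unallowable
               combinations: keep them, but expect an error result. */
            int expect_error = (c == 3) || (f != 0);
            printf("test: file=%-13s count=%-8s expect %s\n",
                   file_choices[f], count_choices[c],
                   expect_error ? "error" : "success");
        }
    return 0;
}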

36
Transaction-flow testing
  • Transaction
  • unit of work seen from the user's viewpoint
  • equivalent to path testing, but at the level of
    interfaces among software components rather than
    interfaces among individual statements
  • might also be called scenario based or use-case
    based testing
  • if scenarios or use cases are part of the
    requirements, they can be used to generate tests

37
Data-flow testing
  • Concentrates on the way data is used in the
    program [5]
  • requires tester to identify variables in the
    program as
  • k (killed or undefined),
  • d (defined but not yet used),
  • u (used: c-use for use in a computation, p-use
    for use in a predicate), or
  • a (anomalous)
  • technique defines various levels of stringency
  • most demanding is testing all possible d-u
    combinations for every variable in the program
    (see the sketch after this slide)
  • [12] uses undefined (u), defined (d), and
    referenced (r)
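A minimal sketch of d-u thinking on a small hypothetical function: each
definition and use of the variable total is noted in comments, and the two
tests together exercise every d-u pair for it.

/* Minimal sketch: define/use occurrences in a hypothetical function,
   with tests chosen so each d-u pair for 'total' is exercised. */
#include <assert.h>

static int total_price(int qty, int unit, int discounted)
{
    int total = qty * unit;      /* d: total defined (qty, unit c-used) */
    if (discounted)              /* discounted p-used in the predicate  */
        total = total - unit;    /* c-use of total, then redefinition   */
    return total;                /* c-use: value flows to the caller    */
}

int main(void)
{
    assert(total_price(3, 10, 0) == 30);  /* d-u pair that skips the branch   */
    assert(total_price(3, 10, 1) == 20);  /* covers redefinition and its use  */
    return 0;
}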

38
Domain testing
  • partition the input space, and test using
    representative values from each partition (see
    the sketch after this slide)
  • simple strategy, but can be tedious
  • use points at
  • interior, epsilon neighbourhood, boundary point,
    extreme point
  • Controversy: whether random values drawn from the
    whole input space are as effective in estimating
    reliability or at finding remaining defects as
    random values chosen to be typical of subdomains
    of the input space
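A minimal domain-testing sketch for a hypothetical pass/fail function with
an assumed pass mark of 50: the input space splits into two partitions, and
points are taken at the interior, boundary, epsilon neighbourhood, and
extremes of each.

/* Minimal sketch: partitions of a 0..100 score range for a hypothetical
   function, with representative points from each partition. */
#include <assert.h>

static int passes(int score)          /* pass mark assumed to be 50 */
{
    return score >= 50;
}

int main(void)
{
    /* Partition 1: fail (0..49). */
    assert(passes(25) == 0);    /* interior point          */
    assert(passes(0)  == 0);    /* extreme point           */
    assert(passes(49) == 0);    /* epsilon neighbourhood   */
    /* Partition 2: pass (50..100). */
    assert(passes(50) == 1);    /* boundary point          */
    assert(passes(51) == 1);    /* epsilon neighbourhood   */
    assert(passes(100) == 1);   /* extreme point           */
    return 0;
}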

39
Syntax testing
  • Recognizing syntax (patterns, assumed formats,
    etc.) in input data
  • expressing it in a standard form, e.g. Backus-Naur
    form
  • a_vowel ::= A | E | I | O | U
  • use rules to mechanically generate input-data
    validation tests
  • look for inputs that don't follow the syntax (see
    the sketch after this slide)
  • less useful if software designers have already
    recognized and used syntax of inputs
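A minimal sketch driven by the a_vowel rule above: a validator plus test
inputs taken from the grammar, including inputs that deliberately violate
the syntax. The validator itself is a hypothetical stand-in.

/* Minimal sketch of syntax testing against the rule
   a_vowel ::= A | E | I | O | U
   valid inputs follow the grammar; the rest deliberately break it. */
#include <assert.h>
#include <string.h>

static int is_vowel_token(const char *s)
{
    return strlen(s) == 1 && strchr("AEIOU", s[0]) != NULL;
}

int main(void)
{
    const char *valid[]   = { "A", "E", "I", "O", "U" };
    const char *invalid[] = { "a", "B", "", "AE", "1" };  /* syntax violations */

    for (unsigned i = 0; i < 5; i++)
        assert(is_vowel_token(valid[i]));
    for (unsigned i = 0; i < 5; i++)
        assert(!is_vowel_token(invalid[i]));
    return 0;
}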

40
Logic-based testing
  • Addresses the issue of how to combine a variety
    of different conditions in an orderly fashion to
    create multiple test cases
  • examples
  • Decision trees
  • Decision tables (see the sketch after this slide)
  • Function lists
  • Input/output tables
  • Matrices
  • e.g. printer compatibility matrix, operations
    versus modes matrix
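A minimal sketch of turning a decision table into tests: a hypothetical
discount rule with two conditions, and one test case per column of the
table.

/* Minimal sketch: decision table for a hypothetical discount rule,
   one test case per column (each combination of conditions). */
#include <assert.h>

/* Assumed rule: member AND order >= 100 -> 10% discount, else 0. */
static int discount_percent(int is_member, int order_total)
{
    return (is_member && order_total >= 100) ? 10 : 0;
}

int main(void)
{
    assert(discount_percent(1, 150) == 10);  /* member,     large order */
    assert(discount_percent(1,  50) ==  0);  /* member,     small order */
    assert(discount_percent(0, 150) ==  0);  /* non-member, large order */
    assert(discount_percent(0,  50) ==  0);  /* non-member, small order */
    return 0;
}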

41
States, state graphs and Transition testing
  • Some aspects of a design may be represented as
    relatively small finite state machine diagrams
  • Testing these parts of a design can be done by
    undertaking a transition tour of the state
    diagram (state graph), as in the sketch after
    this slide
  • This is not useful for state represented in the
    form of integers, floats, arrays, records, etc.
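A minimal sketch of a transition tour: a two-state turnstile machine
(assumed purely for illustration) and a test that drives every edge of its
state graph once.

/* Minimal sketch: a small finite state machine and a transition tour
   that exercises all four edges of its state graph. */
#include <assert.h>

enum state { LOCKED, UNLOCKED };
enum event { COIN, PUSH };

static enum state next(enum state s, enum event e)
{
    if (s == LOCKED)
        return e == COIN ? UNLOCKED : LOCKED;
    /* s == UNLOCKED */
    return e == PUSH ? LOCKED : UNLOCKED;
}

int main(void)
{
    enum state s = LOCKED;
    s = next(s, PUSH); assert(s == LOCKED);    /* LOCKED   --PUSH--> LOCKED   */
    s = next(s, COIN); assert(s == UNLOCKED);  /* LOCKED   --COIN--> UNLOCKED */
    s = next(s, COIN); assert(s == UNLOCKED);  /* UNLOCKED --COIN--> UNLOCKED */
    s = next(s, PUSH); assert(s == LOCKED);    /* UNLOCKED --PUSH--> LOCKED   */
    return 0;
}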

42
Mutation Testing
  • Attempt to reduce the amount of data required to
    exercise the program
  • concentrate on data that reveals likely faults
  • generate mutations of program under test
  • (preferably automatically)
  • run tests on all mutations (computationally
    expensive)
  • examine all cases where mutants and original
    program give the same result
  • either add tests so they give different results
    (see the sketch after this slide), or conclude
    that the original program is in error
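A minimal sketch of the mutation idea on a hypothetical function: a
hand-made mutant, a test on which mutant and original agree, and an added
test that makes them differ (kills the mutant).

/* Minimal sketch: a hand-made mutant of a hypothetical function, and a
   test chosen so original and mutant give different results. */
#include <assert.h>

static int total_orig(int price, int qty)   { return price * qty; }
static int total_mutant(int price, int qty) { return price + qty; }  /* '*' mutated to '+' */

int main(void)
{
    /* Existing test: original and mutant agree, so the mutant survives. */
    assert(total_orig(2, 2) == 4);
    assert(total_mutant(2, 2) == 4);

    /* Added test chosen to kill the mutant: results now differ, showing
       the strengthened test suite can detect this kind of fault. */
    assert(total_orig(2, 3) == 6);
    assert(total_mutant(2, 3) != 6);
    return 0;
}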

43
Cause-effect graphing
  • Idea from the '70s (Myers)
  • A Boolean graph linking causes and effects. The
    graph is actually a digital-logic circuit (a
    combinatorial logic network) using a simpler
    notation than standard electronics notation.
  • Explores combinations of input conditions
    (causes) against output conditions (effects)
    taken from the requirements
  • Not practical
  • By transforming a written specification into a
    set of cause-effect graphs, the tester is
    replacing one complex representation with
    another [6]

44
Flowgraphs and Path testing
  • Flowgraph
  • graphical representation of the program's control
    structure
  • Path
  • follows the edges of a flowgraph
  • Path Selection Criteria - examples
  • every path from entry to exit (path coverage)
  • impractical for most programs
  • selected paths
  • e.g. each loop 0 times, once, max-count times
    (see the sketch after this slide)
  • all paths up to some length
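A minimal sketch of the loop criterion above: a small summing function
(hypothetical) whose loop is executed zero times, once, and the maximum
number of times.

/* Minimal sketch: path selection for a loop, executed 0, 1, and max
   times. The summing function is hypothetical. */
#include <assert.h>

static int sum(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)   /* the loop whose paths are selected */
        s += a[i];
    return s;
}

int main(void)
{
    int data[] = { 1, 2, 3, 4, 5 };
    assert(sum(data, 0) == 0);    /* loop body executed 0 times   */
    assert(sum(data, 1) == 1);    /* loop body executed once      */
    assert(sum(data, 5) == 15);   /* loop body executed max times */
    return 0;
}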

45
Basis path coverage
  • Select test cases that follow the control flow
    paths defined for the McCabe complexity measure
  • note: McCabe cyclomatic complexity is often seen
    as a single measure of program quality; this is
    only one of many attributes of quality
  • McCabe's conjecture was that such test cases
    would provide adequate coverage
  • Myers' triangle example follows [8]

46
Example Code with two Branches (part of Myers'
Triangle)
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int sideA, sideB, sideC;
    double s, Area;

    sideA = atoi(argv[1]);
    sideB = atoi(argv[2]);
    sideC = atoi(argv[2]);
    if ( (sideA == sideB) && (sideA == sideC) ) {
        s = 0.5 * (sideA + sideB + sideC);
        Area = sqrt( s / (s - sideA) * (s - sideB) * (s - sideC) );
        printf("area = %g\n", Area);
    }
    else
        puts("not an equilateral triangle");
    return 0;
}
47
Branch Coverage: 2 test cases needed
  • To exercise the TRUE branch
  • use inputs
  • Side A = 2
  • Side B = 2
  • Side C = 2
  • output
  • area = 1.73205
  • To exercise the FALSE branch
  • use inputs
  • Side A = 3
  • Side B = 4
  • Side C = 5
  • output
  • not an equilateral triangle

48
Two Errors not Caught by Test Cases
  • wrong operand to read in sideC
  • should be argv[3]
  • Heron's formula for the area of a triangle
  • Area = sqrt( s * (s - sideA) * (s - sideB) * (s - sideC) )
  • code has division operator instead of
    multiplication
  • 2 is the only value for which the wrong equation
    gives the right answer