Fault Detection Techniques - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Fault Detection Techniques


1
Fault Detection Techniques: Testing and other
approaches
  • Chapter 8

2
Why testing?
  • Let's look at some well-known software defects
  • A widely reported system failure caused by the lack of stress testing
  • A series of costly and sometimes fatal failures between 1995 and 2005 (the Chinese descriptions of the individual incidents did not survive transcription)

3
Statistics
  • A 2002 US study (NIST) estimated that software defects cost the US economy about US$59.5 billion a year (roughly 0.6% of GDP)

4
Quality Control and Quality Assurance
  • Software is a product.
  • A good-quality product requires both quality control (QC) and quality assurance (QA)

5
Manufacturing in other disciplines
  • On a traditional manufacturing line, quality is controlled on the product itself: if 100 units are produced, all 100 can be inspected
  • QC (Quality Control) inspects the manufactured products
  • Defects found during inspection feed back into process improvement

6
How to improve quality?
  • (The body of this slide, including an example and a question for discussion, was in Chinese and did not survive transcription.)

7
Assembly line in manufacturing
8
(Slide in Chinese; the content did not survive transcription.)

9
Again, let's review the engineering process of
other engineering fields.
  • Idea
  • Marketing analysis (requirement analysis)
  • Analysis and design
  • Manufacturing (QC)
  • Testing (QA)
  • Release product

10
Software engineering
  1. Idea
  2. Requirement analysis and specification (100% design)
  3. Design and analysis (100% design; QC here?)
  4. Implementation (95% design?; QC here?)
  5. Manufacturing (compilation: essentially zero-cost manufacturing)
  6. Testing

11
Question?
  • Is coding a design process or a manufacturing process?
  • Is code a design artifact or a product?
  • Again, the analogy breaks down
  • More questions
  • If coding is not a design process, we should be able to hire many inspectors to control our quality
  • If coding is a design process, bad news: errors are inevitable, because programmers are always doing complicated design work, not simple and repetitive jobs.

12
Facts
  • Now you should know why software has so many bugs and is notorious for being flawed

13
The QC story
(Diagram comparing QC on a traditional product with QC for software; the Chinese labels, apart from "software" and "QC", did not survive transcription.)
14
A Joke
  • (A joke told in Chinese on the original slide; the text did not survive transcription.)
15
  • (Continuation of the joke; not recoverable from this transcript.)
16
Software Quality
  • (The opening bullets, in Chinese, did not survive transcription; they concerned the programmers who build the software.)
  • This is why testing is so important in a mature software industry.

17
(Slide content in Chinese; not recoverable from this transcript.)
18
How do you test a DVD player?
19
How do you test a piece of software?
20
How do you test an Anti-Virus engine?
21
How do you test an online game that could have
10,000 players online at the same time?
22
The complexity of software testing
  • Software has many kinds of interfaces (including user interfaces, network interfaces, file interfaces, etc.)
  • The space of possible inputs through these interfaces is enormous
  • Correct execution paths must be exercised
  • Incorrect execution paths must be exercised as well
  • Software must also behave correctly across many different environments (hardware, operating systems, networks, and so on)

23
SDLC (Software Development Life Cycle)
  • Requirement analysis (marketing research)
  • Software function/performance specification
  • Analysis and design (QA involvement can begin here)
  • Choose the programming language, platform, database, and network protocols
  • Partition the system into modules and define their interfaces
  • Coding and unit testing (by programmers)
  • Integration testing (by programmers/QAs)
  • Alpha testing (by QAs)
  • Most software functions and features are basically complete
  • All functions are tested; no functions will be added beyond this point
  • Serious flaws (high severity, "show stoppers") are resolved
  • Beta testing (by beta users)
  • Less serious bugs are all fixed
  • The test plan has been completely executed
  • The bug discovery rate is lower than the bug fixing rate
  • Release
  • The bug discovery rate stays lower than the bug fixing rate for a long period of time
  • The build produced after fixing bugs has been regression tested

24
Common software defects
  • 70% occur in design and are hard to correct
  • Software specifications are not precise
  • Software is complicated
  • Coding errors (20%)
  • Function changes that affect other components
  • Flawed 3rd-party software

25
Early detection of errors
(Chart: an error occurs on Jan 13 while work proceeds at, say, 10 person-days per month on the assumption that the error is not there. If the error is found on Feb 13, 10 person-days of work must be redone; if it is found on Mar 13, 20 person-days must be redone.)
26
What should a QA test?
  • Completeness
  • Correctness
  • Reliability
  • Compatibility
  • Efficiency
  • Usability
  • Portability
  • Scalability
  • Testability

27
(Slide in Chinese; the content did not survive transcription.)

28
Who does the testing?
  • Big software companies have their own testing teams
  • Small software companies that do not have the resources to test can outsource to a 3rd-party testing company
  • Big software companies such as Trend Micro have dedicated QA teams
  • Many software companies use programmers as testers, a sad fact!

29
Company Organization
Developers
Marketing
QAs (testing teams)
30
Management Hierarchy
(Organization chart: a Director of Development oversees developer managers, who manage programmers; a Director of QA oversees a test manager, a test automation team manager, and a test facilities manager, whose teams consist of testers, test automation programmers, and test database programmers.)
31
Important reasons to make QA team and developer
team equal and separated
  • Programmers tend (consciously or not) to feed only the test cases their code handles well
  • For a programmer, every bug found means more work and makes the code look worse, so there is little incentive to look hard for bugs
  • For QA, finding bugs is the job: the more valid bugs a QA engineer finds, the better the performance record
  • If QA reports to the development manager, bug reports can be suppressed or deprioritized; keeping the QA team separate from and equal to the developer team avoids this conflict of interest

32
(Diagram showing the relationship among Marketing, Developers, and QA (testers); the Chinese annotation did not survive transcription.)
33
The V-model of software testing
(Diagram: the V-model pairs each development phase with a corresponding test level, for example requirements analysis with acceptance testing, high-level design with system testing, detailed design with integration testing, and coding with unit testing; the original Chinese labels did not survive transcription.)
34
Software Defects
  • Software defects represent the undesirable aspects of a software product's quality
  • What is undesirable?
  • A software defect is a software behavior that does not conform to the specification
  • The software specs define the correctness of the program

35
Software defect classification
  • Specs error
  • A spec that is wrong from the beginning
  • e.g., the spec says a teacher can teach only one course; in practice, two teachers can co-teach one course and share the credit/fee
  • Design error
  • An error caused by a wrong design
  • e.g., the design uses UDP to transmit the ownership of a treasure; in practice UDP can lose packets, so the program behaves incorrectly
  • Coding error
  • An error introduced by the programmer; the error type you are most familiar with
  • Performance inadequacy: fails to meet the nonfunctional requirements
  • Scalability inadequacy: fails to serve a large number of users and transactions
  • Memory management error: fails to meet the memory resource requirements

36
Severity of Software Defects
  • Show stopper (severity 1)
  • Unusable software (severity 1)
  • Microsoft ranks bug severity from 1-30
  • Defects that kill people or waste a lot of money

37
How to find software defects
  • Dynamic techniques
  • Testing
  • (An ACM survey lists testing and teamwork among the three disciplines that should be trained in CS)
  • Rely on dynamic execution of the program
  • A program must be built before it can be tested
  • The major way to test performance or scalability
  • Static techniques
  • Code review (surveys show that about 70% of severe bugs can be discovered in this process)
  • Static analyzers: detect software defects without running the program, by analyzing the program or models of it

38
Software defect detection techniques:
classification
  • Optimistic
  • Testing
  • An optimistic approach can only show how good your code may be; it cannot show the absence of faults
  • Pessimistic
  • Static analyzers, code review
  • Mostly aim to show the absence of faults
  • May report spurious results
  • e.g., a compiler complaining about the use of an uninitialized variable

39
Optimistic vs. Pessimistic
  • (Comparison slide; the table contrasting the two approaches did not survive transcription.)

40
Comments
  • In principle, testing cannot show the absence of faults, so it is impossible to prove a program correct by testing
  • E.g., you cannot prove a sorting algorithm correct purely by testing: to test a sorting program exhaustively you would need to try all N! input orderings, and for N = 100 that is about 9.3 x 10^157 cases, which is intractable
  • Using exhaustive testing to establish the correctness of a program is usually infeasible for common problems
  • Question: how are the sorting algorithms in your algorithms course proved correct?

41
Comments
  • Proof is difficult: you need to show that your solution works for all cases, even when there are N! of them and N can be arbitrarily large
  • Testing only covers a comparatively small subset of the input domain under reasonable cost and resources

42
Testing in Software industry
  • Testing is still the most effective and most widely used fault detection technique, even though in theory it cannot show the absence of faults
  • Testing only shows the presence of faults, not their absence
  • Most errors can still be found by extensive testing if you are willing to invest the resources

43
Testing in software development
  • A test plan is written at the beginning of the project
  • Unit testing (programmers): when programmers complete their functions, procedures, modules, or components, they should perform unit testing, usually as white-box testing
  • Integration testing (programmers): the process in which the development team integrates its modules, components, classes, and libraries into a complete executable system (a complete build is produced for the later stages)
  • Alpha testing (QA testers): the process in which internal testers test a complete build, usually as black-box testing. During this stage, assertions are often turned on.
  • Beta testing (power users): a build is given to power users to test. During this stage, assertions are often turned off.
  • Release
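
(A minimal C sketch of the assertion point above, not taken from the slides: standard C assertions from assert.h are active in debug/alpha-style builds and compiled away when NDEBUG is defined, which is how the "assertions on in alpha, off in beta" policy is commonly realized. The helper function is hypothetical.)

  #include <assert.h>
  #include <stdlib.h>

  /* Hypothetical helper used only to illustrate assertions. */
  static void *alloc_record(size_t n)
  {
      void *p = malloc(n);
      assert(p != NULL);   /* checked in alpha builds; compiling      */
      return p;            /* with -DNDEBUG (beta/release) removes it */
  }

  int main(void)
  {
      char *buf = alloc_record(128);
      free(buf);
      return 0;
  }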

44
Unit Testing (component testing in the textbook)
  • Testing applied to software units. Typical kinds of software units are
  • classes
  • components
  • modules
  • libraries
  • functions
  • subsystems
  • Unit testing is the programmer's responsibility

45
How to determine a unit's correctness?
  • Software units are not complete systems, so they do not have a specification describing their correctness. How can we determine whether the behavior of the unit under test is correct?
  • Answer: the correctness of units is specified in the design, not in the specs. For example, it is specified in module interfaces, class interfaces, class descriptions, and so on.

46
Unit testing
  • Just like the module testing described before.
  • The purpose is to produce test cases that feed the interfaces of the software units and exercise them, as in the sketch below.
  • In principle, the unit's observed behavior is judged against the interface contract given in the design.
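
(A minimal sketch of such a unit test in C, assuming a hypothetical unit clamp(x, lo, hi) whose design-level contract says it returns x limited to the range [lo, hi]; the function, the expected values, and the check helper are illustrative, not from the slides.)

  #include <stdio.h>

  /* Hypothetical unit under test: clamp x into [lo, hi]. */
  static int clamp(int x, int lo, int hi)
  {
      if (x < lo) return lo;
      if (x > hi) return hi;
      return x;
  }

  /* Feed the unit's interface and compare the observed
     behavior against the contract given in the design. */
  static int check(int got, int expected, const char *name)
  {
      if (got != expected) {
          printf("FAIL %s: got %d, expected %d\n", name, got, expected);
          return 1;
      }
      return 0;
  }

  int main(void)
  {
      int failures = 0;
      failures += check(clamp(5, 0, 10), 5, "inside range");
      failures += check(clamp(-3, 0, 10), 0, "below range");
      failures += check(clamp(42, 0, 10), 10, "above range");
      printf("%d failure(s)\n", failures);
      return failures ? 1 : 0;
  }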

47
The testing criteria for unit testing
  • In principle, exhaustive testing of a unit's interface is also impossible.
  • e.g., void p(short int x, short int y)
  • You would need to test 65536 x 65536 (about 4.3 billion) cases to make sure the unit behaves correctly, and each test run has to be judged by a human (with eyes)
  • So how much do we need to test to guarantee at least some degree of confidence?

48
Criteria of confidence
  • Test coverage criteria (from weak to strong), illustrated by the sketch below:
  • Statement coverage: each statement is executed at least once
  • Branch coverage: each branch outcome is exercised at least once
  • Path coverage: each path is exercised at least once
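
(A small C illustration of why these criteria differ in strength; the function is hypothetical and not from the slides.)

  /* Hypothetical function used only to illustrate coverage. */
  int safe_div(int a, int b)
  {
      int r = 0;
      if (b != 0)
          r = a / b;
      return r;
  }

The single test case safe_div(6, 3) executes every statement, so it already achieves 100% statement coverage, yet the false outcome of the branch (b == 0) is never exercised. Adding safe_div(6, 0) satisfies branch coverage. With only one decision and no loop, path coverage here coincides with branch coverage; once loops are involved, the number of paths explodes, as the following slides show.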

49
Binary search flow graph
50
The weakest criteria
  • Statement coverage: 1 test case to run 1 2 3 8 9
  • 1 test case to run 1 2 4 6 7 2
  • 1 test case to run 1 2 4 5 7 2 8
  • You only need 3 test cases to have each statement executed at least once.

51
Branch coverage
  • Each branch has two outcomes, and every outcome should be exercised
  • There are 3 branches, so at most 2 x 2 x 2 = 8 combinations of branching choices need to be exercised if the loop is not considered.
  • So you need to find test cases that meet at least this criterion:
  • 1 2 8 9
  • 1 2 3 8 9
  • 1 2 3 4 5 7 2 8 9
  • 1 2 3 4 5 7 2 3 8 9
  • 1 2 3 4 6 7 2 8 9
  • 1 2 3 4 6 7 2 3 8 9
  • In this example, there are fewer than 8 branching combinations because the decision structure is not a complete tree

52
Path Coverage
  • The strongest coverage
  • Try to find test cases that cover all the paths
  • Covering them all is equal to exhaustive testing, which is impossible
  • You would need infinitely many test cases, each of finite length (if the program must stop) or infinite length (if the program may run forever)
  • 1 2 8 9 (finite length)
  • 1 2 3 8 9 (finite length)
  • 1 2 3 4 5 7 2 8 9
  • 1 2 3 4 5 7 2 3 4 5 7 2 8 9
  • 1 2 3 4 5 7 2 3 4 6 7 2 3 8 9
  • 1 2 (3 4 5 7 2)* 8 9 (finite length but a very long sequence)
  • 1 2 (3 4 5 7 2 3 4 6 7 2)* 8 9 (finite length, with permutations)
  • 1 2 (3 4 5 7 2)ω 8 9
  • where * is finite iteration in automata theory
  • where ω is infinite iteration in automata theory

53
Path coverage
  • In a program that can run forever, full path testing is equal to exhaustive testing
  • It is infeasible in practice
  • So we must find more practical path coverage criteria.

54
Independent paths (a weaker criterion)
  • 1, 2, 3, 8, 9
  • 1, 2, 3, 4, 6, 7, 2
  • 1, 2, 3, 4, 5, 7, 2
  • 1, 2, 3, 4, 6, 7, 2, 8, 9
  • each test case should explore statements that are
    not explored by the previous test cases to reduce
    redundant test cases
  • A dynamic program analyser may be used to check
    that paths have been executed
  • In practice, this is often difficult too

55
Test coverage in real world
  • Different coverage criteria involve different degrees of cost and time
  • Statement coverage is often treated as the lower bound
  • Branch coverage or path coverage can be applied to important components
  • Coverage-based testing is widely applied in the software industry. Although the coverage rate by itself says nothing about program correctness, it is an index of whether testing has been done adequately, and most people accept that the index is closely related to software quality because the test cases have actually been executed.

56
Test coverage
  • IMPORTANT
  • 100% statement coverage does not mean your program is correct, as the sketch below illustrates
  • A sorting algorithm is an example
  • You can easily derive just a few test cases that reach 100% coverage of your program
  • But proving your program correct is a problem of a totally different scale
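
(A tiny C illustration of that point; the function is hypothetical and not from the slides.)

  #include <limits.h>

  /* Hypothetical absolute-value function with a latent defect. */
  int abs_val(int x)
  {
      if (x < 0)
          return -x;   /* undefined behaviour when x == INT_MIN */
      return x;
  }

  /* These two calls execute every statement (100% statement
     coverage) and both pass, yet the INT_MIN defect is missed. */
  int main(void)
  {
      return (abs_val(5) == 5 && abs_val(-5) == 5) ? 0 : 1;
  }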

57
How to carry out coverage-based testing?
  • You need the source code to know the coverage.
  • You need a tool to display the coverage rate
  • For each test case, you need someone (or, in rare cases, a tool) to judge the correctness of the program's behavior.

58
White box testing
(Diagram: test cases are fed in and outputs are observed together with the internal behavior of the system.)
You test a piece of equipment while observing its
internal behavior; this is also known as
structural testing (in the textbook).
59
Test coverage
  • Test coverage must be measured with white-box testing: the tester must have the source code
  • Effective test cases must be derived from the source code, which requires programming skills.
  • So test coverage can be handled by programmers or by a 3rd-party testing company.
  • In the USA and Europe there are many 3rd-party testing companies.
  • In Taiwan, I have never heard of one; maybe you should start one.

60
Test coverage tools
  • Aonix Validator/Req
  • BullseyeCoverage
  • Clover
  • CodeTEST
  • CTC
  • Dynamic Code Coverage
  • GCT
  • Glass JAR Toolkit
  • Hindsight/TCA
  • Hindsight/TPA
  • Insure++
  • Java Test Coverage
  • JavaCov
  • Koalog Code Coverage
  • LDRA Testbed
  • Logiscope
  • Microsoft Visual Test
  • Rational PureCoverage
  • Rational Test RealTime Coverage
  • TCMON
  • TCA
  • TCAT C/C++
  • TCAT for Java
  • TCAT-PATH
  • TestCenter

61
(No Transcript)
62
Black Box Testing
Test Cases
Outputs
63
Black Box Testing
  • A testing process that proceeds without knowledge of the internal behavior of the system.
  • Often done by QA (testers)
  • QA interacts with the system only by
  • feeding test cases through user interfaces
  • feeding test cases through files
  • feeding test cases through network traffic
  • feeding test cases through consoles

64
BlackBox testing and Coverage
  • In principle, black-box testing can still be combined with test coverage tools if the source code is available.
  • Test coverage tools can then serve as an index of how good the test cases derived by the QAs are.

65
BlackBox Testing
  • In practice, QAs use black-box testing to assure software quality after unit testing and integration testing; they take over from the developers
  • They focus on checking
  • whether the program conforms to the specs.

66
A simple management principle of testing
  • It is common knowledge that testing after alpha testing should not be done by the developers.
  • Developers tend to feed good test cases only
  • Developers are not happy to find more bugs (more bugs found means more work)
  • Finding more bugs may not get a developer promoted; on the contrary, more bugs suggest that his or her coding quality is low.
  • So, to obtain high-quality software, a simple management rule:
  • have testers, not programmers, test the program
  • for testers, finding more bugs means promotion

67
(Diagram, repeated from slide 32: the relationship among Marketing, Developers, and QA (testers).)
68
BlackBox (alpha testing) Testing
  • QAs test the program against the specs.
  • In common cases, the specs may describe
  • correct behaviors
  • incorrect behaviors that should be avoided.
  • The QA's goals:
  • to expose as many software errors as possible
  • to test the software from a user's perspective
  • to test all positive, negative, and boundary cases

69
Black-box testing
70
Equivalence partitioning(partition testing)
  • Input data and output results often fall into
    different classes where all members of a class
    are related
  • Each of these classes is an equivalence partition
    where the program behaves in an equivalent way
    for each class member
  • Test cases should be chosen from each partition

71
Equivalence partitioning
72
Equivalence partitioning with boundary cases
  • Partition the system inputs and outputs into equivalence sets
  • If the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are < 10,000, 10,000-99,999, and > 99,999
  • Choose test cases at the boundaries of these sets
  • 00000, 09999, 10000, 99999, 100000
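
(A minimal C sketch of these partitions and boundaries, assuming a hypothetical validator is_valid() that should accept exactly the 5-digit values 10,000-99,999; the boundary inputs come from the slide above, plus one representative in-range value.)

  #include <stdio.h>

  /* Hypothetical unit under test: accept exactly 10,000..99,999. */
  static int is_valid(long v)
  {
      return v >= 10000 && v <= 99999;
  }

  int main(void)
  {
      /* One value from each partition plus the boundary cases
         listed on the slide; expected = 1 means "should accept". */
      struct { long input; int expected; } cases[] = {
          {      0, 0 },  /* below-range partition      */
          {   9999, 0 },  /* just below the boundary    */
          {  10000, 1 },  /* lower boundary             */
          {  55555, 1 },  /* representative valid value */
          {  99999, 1 },  /* upper boundary             */
          { 100000, 0 },  /* just above the boundary    */
      };
      int i, failures = 0;
      for (i = 0; i < (int)(sizeof cases / sizeof cases[0]); i++)
          if (is_valid(cases[i].input) != cases[i].expected) {
              printf("FAIL for input %ld\n", cases[i].input);
              failures++;
          }
      printf("%d failure(s)\n", failures);
      return failures ? 1 : 0;
  }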

73
Equivalence partitions
74
Automation of testing (including black-box and
white-box)
  • If software quality is important to you, you naturally want a stronger criterion to be met.
  • Unfortunately, a stronger criterion means more test cases, more time, and more cost.
  • If the outcome of each test case must be judged by a human, a stronger criterion means much higher cost and time

75
Testing automation
  • In many applications or functionalities, testing can be automated to some degree. In contrast, many functionalities and applications are not amenable to automatic testing, and humans are needed
  • An example that is difficult to automate: testing Word.
  • An example that is easy to automate: testing a sorting routine; you can easily write a program that checks whether the output is listed in ascending or descending order, as in the sketch below
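
(A minimal C sketch of such an automated check, an oracle for sortedness; the routine under test is stood in for here by qsort from the standard library, and a full oracle would also check that the output is a permutation of the input.)

  #include <stdio.h>
  #include <stdlib.h>

  /* Oracle: returns 1 if the array is in non-decreasing order. */
  static int is_sorted(const int *a, size_t n)
  {
      size_t i;
      for (i = 1; i < n; i++)
          if (a[i - 1] > a[i])
              return 0;
      return 1;
  }

  static int cmp_int(const void *x, const void *y)
  {
      int a = *(const int *)x, b = *(const int *)y;
      return (a > b) - (a < b);
  }

  int main(void)
  {
      int data[] = { 42, 7, 19, 7, -3, 100, 0 };
      size_t n = sizeof data / sizeof data[0];

      qsort(data, n, sizeof data[0], cmp_int);   /* "unit under test" */
      printf(is_sorted(data, n) ? "PASS\n" : "FAIL\n");
      return 0;
  }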

76
Test oracles
  • If test cases could be derived automatically, executed automatically, and the errors reported automatically, what a perfect world it would be: we could fire all the testers.
  • In practice, this is only a dream
  • In many applications, effective test cases can only be derived by humans
  • In many applications, execution requires a human to monitor the program and validate whether its behavior conforms to the specs.

77
Test Oracles
  • Oracle: in ancient Greece, an oracle was a priest or priestess who made statements about future events or about hidden truths
  • A test oracle is a program that can determine whether the program's behavior, or its input/output, conforms to the specs.

78
Testing automation
  • For testing automation, a test oracle must be present to determine the conformity between the specs and the program.
  • In some applications, test oracles are easy to derive
  • In most applications, test oracles are difficult to implement or, in theory, impossible to derive.

79
Testing automation
  • In a networked game with a big map, a test setup can be:
  • the characters' movements are controlled by a test program that keeps feeding random mouse and keyboard events
  • A simple test oracle is to check whether the program crashes; this oracle is easy but checks almost nothing.
  • A more complicated test oracle is to check whether the virtual money given to the characters is kept under a fixed amount C. How would you design this test oracle? (A sketch follows below.)
  • Complicated test oracles often require program instrumentation: code inserted into the program purely for monitoring purposes.
  • The difficulty of a test oracle depends on what kind of system properties you want to check.
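
(A minimal sketch of the instrumentation idea for the virtual-money oracle above; the cap, the helper, and the event loop are all hypothetical. The point is that every code path that grants money is routed through one instrumented function, so the oracle can check the invariant after each grant.)

  #include <assert.h>
  #include <stdio.h>

  #define MONEY_CAP 1000000L      /* the fixed amount C from the slide */

  static long total_issued = 0;   /* monitoring state, not game logic  */

  /* Instrumented helper: the only way the game grants virtual money. */
  static void grant_money(long amount)
  {
      total_issued += amount;
      assert(total_issued <= MONEY_CAP);   /* the test oracle */
  }

  int main(void)
  {
      /* Stand-in for randomly generated game events. */
      int i;
      for (i = 0; i < 100; i++)
          grant_money(1000);
      printf("total issued: %ld\n", total_issued);
      return 0;
  }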

80
Integration testing
  • Tests complete systems or subsystems composed of
    integrated components
  • Integration testing should be black-box testing
    with tests derived from the specification
  • Main difficulty is localising errors
  • Incremental integration testing reduces this
    problem

81
Incremental integration testing
82
Approaches to integration testing
  • Top-down testing
  • Start with high-level system and integrate from
    the top-down replacing individual components by
    stubs where appropriate
  • Bottom-up testing
  • Integrate individual components in levels until
    the complete system is created
  • In practice, most integration involves a
    combination of these strategies

83
The key to incremental testing
  • emulate the functions you will call (stubs)
  • emulate the functions that will call you (drivers)

84
Top-down testing
85
Top-down testing
  • Top-down testing is used mostly in personal (solo) programming.
  • Test cases are full test cases derived one by one from the specs.
  • This resembles a currently popular programming paradigm called test-driven development.

86
  main()
  {
      loadfile(myarray);            // load an array from a file
      sorting(myarray);             // sort the array
      input(a);                     // get input from the user
      ret = binsearch(myarray, a);
      if (ret) dosomethingA();
      else     dosomethingB();
  }

87
  • A loadfile stub:
  loadfile(int array[])
  {
      array[1] = 10;
      array[2] = 1;
      array[3] = 55;
  }

88
Bottom-up testing
89
Bottom up Testing
  • Bottom-up testing is suitable for team projects that proceed concurrently and in parallel.
  • Test drivers are derived from module interfaces, parameters, etc.; they are typically not full test cases derived from the specs (see the sketch below).
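
(A minimal sketch of a bottom-up test driver in C for a binsearch unit like the one on slide 86; the driver, the data, and the expected results are illustrative. The driver exercises the module through its interface alone, without the rest of the system.)

  #include <stdio.h>

  /* Unit under test (normally in its own module): returns 1 if
     key occurs in the sorted array a[0..n-1], 0 otherwise. */
  static int binsearch(const int a[], int n, int key)
  {
      int lo = 0, hi = n - 1;
      while (lo <= hi) {
          int mid = lo + (hi - lo) / 2;
          if (a[mid] == key)     return 1;
          else if (a[mid] < key) lo = mid + 1;
          else                   hi = mid - 1;
      }
      return 0;
  }

  /* Test driver: calls the unit through its interface only. */
  int main(void)
  {
      int a[] = { 1, 10, 55 };
      int failures = 0;

      failures += binsearch(a, 3, 10) != 1;   /* present      */
      failures += binsearch(a, 3, 55) != 1;   /* last element */
      failures += binsearch(a, 3,  7) != 0;   /* absent       */

      printf("%d failure(s)\n", failures);
      return failures ? 1 : 0;
  }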

90
Comparisons of top-down and bottom up testing
  • Top-down: test cases are derived from the specs, so many errors can be discovered earlier.
  • Top-down testing is often less efficient than bottom-up testing with respect to teamwork and parallel cooperation.
  • Drawback of bottom-up testing: test cases are typically not derived from the specs, so a wrong design cannot be discovered early.

91
Testing approaches
  • Architectural validation
  • Top-down integration testing is better at
    discovering errors in the system architecture
  • System demonstration
  • Top-down integration testing allows a limited
    demonstration at an early stage in the
    development
  • Test implementation
  • Often easier with bottom-up integration testing
  • Test observation
  • Problems with both approaches. Extra code may be
    required to observe tests

92
Interface testing
  • Takes place when modules or sub-systems are
    integrated to create larger systems
  • Objectives are to detect faults due to interface
    errors or invalid assumptions about interfaces
  • Particularly important for object-oriented
    development as objects are defined by their
    interfaces

93
Interface testing
94
Interface types
  • Parameter interfaces
  • Data passed from one procedure to another
  • Shared memory interfaces
  • Block of memory is shared between procedures
  • Procedural interfaces
  • Sub-system encapsulates a set of procedures to be
    called by other sub-systems
  • Message passing interfaces
  • Sub-systems request services from other
    sub-systems

95
Interface errors
  • Interface misuse
  • A calling component calls another component and
    makes an error in its use of its interface e.g.
    parameters in the wrong order
  • Interface misunderstanding
  • A calling component embeds assumptions about the
    behaviour of the called component which are
    incorrect
  • Timing errors
  • The called and the calling component operate at
    different speeds and out-of-date information is
    accessed

96
Interface testing guidelines
  • Design tests so that parameters to a called
    procedure are at the extreme ends of their ranges
  • Always test pointer parameters with null pointers (see the sketch below)
  • Design tests which cause the component to fail
  • Use stress testing in message passing systems
  • In shared memory systems, vary the order in which
    components are activated
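
(A minimal sketch of the first two guidelines, using a hypothetical interface copy_prefix(dst, src, n): the tests pass null pointers and the extreme ends of the size range across the interface. Both the function and the expected return values are assumptions for illustration.)

  #include <stddef.h>
  #include <stdio.h>

  /* Hypothetical called component: copies at most n-1 characters of
     src into dst; returns 0 on success, -1 on invalid arguments. */
  static int copy_prefix(char *dst, const char *src, size_t n)
  {
      size_t i;
      if (dst == NULL || src == NULL || n == 0)
          return -1;
      for (i = 0; i + 1 < n && src[i] != '\0'; i++)
          dst[i] = src[i];
      dst[i] = '\0';
      return 0;
  }

  int main(void)
  {
      char buf[8];
      int failures = 0;

      /* Null-pointer parameters should be rejected, not crash. */
      failures += copy_prefix(NULL, "abc", sizeof buf) != -1;
      failures += copy_prefix(buf, NULL, sizeof buf) != -1;

      /* Extreme ends of the length parameter's range. */
      failures += copy_prefix(buf, "abc", 0) != -1;              /* smallest n       */
      failures += copy_prefix(buf, "abc", 1) != 0;               /* only room for "" */
      failures += copy_prefix(buf, "abcdefghij", sizeof buf) != 0;

      printf("%d failure(s)\n", failures);
      return failures ? 1 : 0;
  }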

97
Stress testing
  • Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light
  • Stressing the system also tests its failure behaviour: systems should not fail catastrophically, and stress testing checks for unacceptable loss of service or data
  • Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded

98
Stress testing
  • Requires considerable cost and effort.
  • Often requires you to implement a system just to test the system.

99
Object-oriented testing
  • The components to be tested are object classes
    that are instantiated as objects
  • Larger grain than individual functions so
    approaches to white-box testing have to be
    extended
  • No obvious top to the system for top-down
    integration and testing

100
Testing levels
  • Testing operations associated with objects
  • Testing object classes
  • Testing clusters of cooperating objects
  • Testing the complete OO system

101
Object class testing
  • Complete test coverage of a class involves
  • Testing all operations associated with an object
  • Setting and interrogating all object attributes
  • Exercising the object in all possible states
  • Inheritance makes it more difficult to design
    object class tests as the information to be
    tested is not localised

102
Weather station object interface
  • Test cases are needed for all operations
  • Use a state model to identify state transitions
    for testing
  • Examples of testing sequences
  • Shutdown -> Waiting -> Shutdown
  • Waiting -> Calibrating -> Testing -> Transmitting -> Waiting
  • Waiting -> Collecting -> Waiting -> Summarising -> Transmitting -> Waiting

103
Object integration
  • Levels of integration are less distinct in
    object-oriented systems
  • Cluster testing is concerned with integrating and
    testing clusters of cooperating objects
  • Identify clusters using knowledge of the
    operation of objects and the system features that
    are implemented by these clusters

104
Approaches to cluster testing
  • Use-case or scenario testing
  • Testing is based on a user's interactions with the
    system
  • Has the advantage that it tests system features
    as experienced by users
  • Thread testing
  • Tests the system's response to events as
    processing threads through the system
  • Object interaction testing
  • Tests sequences of object interactions that stop
    when an object operation does not call on
    services from another object

105
Scenario-based testing
  • Identify scenarios from use-cases and supplement
    these with interaction diagrams that show the
    objects involved in the scenario
  • Consider the scenario in the weather station
    system where a report is generated

106
Collect weather data
107
Weather station testing
  • Thread of methods executed
  • CommsController:request -> WeatherStation:report -> WeatherData:summarise
  • Inputs and outputs
  • Input of report request with associated
    acknowledge and a final output of a report
  • Can be tested by creating raw data and ensuring
    that it is summarised properly
  • Use the same raw data to test the WeatherData
    object

108
Testing workbenches
  • Testing is an expensive process phase. Testing
    workbenches provide a range of tools to reduce
    the time required and total testing costs
  • Most testing workbenches are open systems because
    testing needs are organisation-specific
  • Difficult to integrate with closed design and
    analysis workbenches

109
A testing workbench
110
Testing workbench adaptation
  • Scripts may be developed for user interface
    simulators and patterns for test data generators
  • Test outputs may have to be prepared manually for
    comparison
  • Special-purpose file comparators may be developed

111
Three Well Known Approaches
  • Testing: execution of the product or program (the implementation) to detect faults (optimistic)
  • Simulation: execution of artifacts (models) that are a simpler representation of the original complexity (optimistic)
  • Verification: exploration of the entire state space of models, which are a simpler representation of the original complexity (pessimistic)

112
Models
  • So we want models that can be
  • simulated!
  • verified!
  • They can provide more clues about where our design is wrong
  • By the way, what is a design?

113
The foundation behind model simulation and
verification
(Diagram: a program Prog and a model are related by a preorder, the "implements" relation; for example X related to Y with (X > -1) implying (Y > -1), and "a deadlock exists" in one implying "a deadlock exists" in the other.)
114
Fault Detection Tools in Industry