1
System Testing
  • There are several steps in testing the system:
  • Function testing
  • Performance testing
  • Acceptance testing
  • Installation testing

2
Function Testing
  • Tests functions performed by the system
  • Checks that the integrated system performs its
    functions as specified in the requirements
  • e.g. a function test for a bank account package
    verifies that the package can correctly (see the
    sketch after this list)
  • Credit a deposit
  • Enter a withdrawal
  • Calculate interest
  • Print the balance
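
A minimal sketch of such a function test in Python's unittest style. The BankAccount class, its method names, and the 5% interest rate are hypothetical stand-ins invented for illustration, not part of any package named in the slides:

import unittest


class BankAccount:
    """Toy implementation so the sketch is self-contained."""

    def __init__(self, balance=0.0, rate=0.05):
        self.balance = balance
        self.rate = rate

    def deposit(self, amount):
        self.balance += amount

    def withdraw(self, amount):
        self.balance -= amount

    def add_interest(self):
        self.balance *= 1 + self.rate


class TestBankAccountFunctions(unittest.TestCase):
    def setUp(self):
        self.account = BankAccount(balance=100.0, rate=0.05)

    def test_credit_a_deposit(self):
        self.account.deposit(50.0)
        self.assertAlmostEqual(self.account.balance, 150.0)

    def test_enter_a_withdrawal(self):
        self.account.withdraw(40.0)
        self.assertAlmostEqual(self.account.balance, 60.0)

    def test_calculate_interest(self):
        self.account.add_interest()
        self.assertAlmostEqual(self.account.balance, 105.0)

    def test_print_the_balance(self):
        self.assertEqual(f"Balance: {self.account.balance:.2f}",
                         "Balance: 100.00")


if __name__ == "__main__":
    unittest.main()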

3
Performance Testing
  • Once convinced that the functions work as
    specified, compare the integrated components with
    the non-functional system requirements
  • e.g. a performance test on the bank account
    package evaluates (a timing sketch follows this
    list)
  • Speed with which calculations are made
  • Precision of the computation
  • Security precautions required
  • Response time to user inquiry
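
A sketch of how response time to a user inquiry might be measured. The sample_inquiry stand-in and the 50 ms budget are illustrative assumptions, not figures from the slides:

import time


def measure_response_time(operation, repetitions=1000):
    """Return the average wall-clock time of one call, in seconds."""
    start = time.perf_counter()
    for _ in range(repetitions):
        operation()
    return (time.perf_counter() - start) / repetitions


def sample_inquiry():
    # Stand-in for a real balance inquiry against the package.
    return sum(range(100))


if __name__ == "__main__":
    avg = measure_response_time(sample_inquiry)
    print(f"average response time: {avg * 1e6:.1f} microseconds")
    assert avg < 0.050, "exceeds the (illustrative) 50 ms budget"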

4
Performance Testing
  • At this point, the system operates as the
    designers intended.
  • This is a verified system.
  • Next, the system is compared with the customer's
    expectations by reviewing the requirements
    definition.
  • If we are satisfied that the system built meets
    the requirements, then we have a validated system.

5
Acceptance Testing
  • The customers test the system, making sure that
    it meets their understanding of the requirements,
    which may be different from the developers'.
  • This assures the customers that the system they
    requested is the system that was built for them.
  • Sometimes it is run in its actual environment but
    often is run at a test facility different from
    the target location.

6
Installation Testing
  • This allows users to exercise system functions
    and document additional problems that result from
    being at the actual site.
  • e.g. a naval system may be designed, built and
    tested at the developer's site, which is
    configured as a ship might be, but is not on an
    actual ship.
  • Once development-site tests are complete, an
    additional set of installation tests may be run
    with the system on board each type of ship that
    will eventually use the system.

7
Function Testing
  • Previous tests concentrated on components and
    their interactions.
  • This first step in system testing ignores the
    system structure and focuses on functionality.
  • The approach is more closed box than open box,
    since it need not be known which component is
    being executed, only what the system is supposed
    to do.
  • Each function can be associated with those system
    components that accomplish it.
  • For some functions, the parts may comprise the
    entire system.

8
Performance Testing
  • Addresses the non-functional requirements.
  • System performance is measured against
    performance objectives set by the customer as
    expressed in the non-functional requirements.
  • Performance testing is designed and administered
    by a test team, and the results are provided to
    the customer.

9
Performance Testing
  • Function testing
  • May demonstrate that a test system can calculate
    the trajectory of a rocket, based on the rocket's
    thrust, weather conditions, and related sensor
    and system information.
  • Performance testing examines how well the
    calculation is done
  • the speed of the response to user commands
  • accuracy of the result
  • accessibility of the data

10
Reliability, Availability and Maintainability
  • Reliability
  • Explores whether the software functions
    consistently and correctly over long periods of
    time
  • Availability
  • Explores whether the software is available when
    we need it
  • Maintainability
  • Examines how quickly and easily repairs can be
    performed when the software product fails

11
Reliability
  • Reliability is the probability that a system will
    operate without failure under given conditions
    for a given time interval.
  • It is expressed on a scale from 0 to 1.
  • Highly reliable → close to 1
  • Unreliable → close to 0

12
Availability
  • Availability is the probability that a system is
    functioning completely at a given instant in
    time, assuming that the required external
    resources are also available.
  • It is expressed on a scale from 0 to 1.
  • Completely up and running → 1
  • Unusable → 0

13
Maintainability
  • Maintainability is the probability, for a given
    condition, that any maintenance activity can be
    carried out within a stated time interval and
    using stated procedures and resources.
  • It is expressed on a scale from 0 to 1.
  • Software maintenance can still be done when the
    system is up, which is different from hardware
    maintenance where the system is unavailable.

14
Measures
  • To derive these measures, attributes of the
    failure data are examined (a computation sketch
    follows this list).
  • Capture failure data → (i − 1) failures
  • Record inter-failure times → t1, t2, …, ti−1
  • Mean Time To Failure (MTTF)
  • MTTF = (t1 + t2 + … + ti−1) / (i − 1)
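
A minimal sketch of the MTTF computation; the inter-failure times below are made-up illustrative values in hours:

# Recorded inter-failure times t1 .. t(i-1), in hours (illustrative).
inter_failure_times = [12.0, 30.0, 45.0, 60.0]

# MTTF = (t1 + t2 + ... + t(i-1)) / (i - 1)
mttf = sum(inter_failure_times) / len(inter_failure_times)
print(f"MTTF = {mttf:.2f} hours")  # 36.75 hours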

15
Measures
  • Suppose each underlying fault has been fixed and
    the system is again running.
  • Ti denotes the yet-to-be observed, next time to
    failure (a random variable).
  • Mean Time To Repair (MTTR) tells us the average
    time it takes to fix a faulty software component.
  • Mean Time Between Failures (MTBF)
  • MTBF = MTTF + MTTR
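  • For example, with an average of 40 hours between
    failures (MTTF = 40) and an average of 2 hours to
    repair (MTTR = 2), MTBF = 40 + 2 = 42 hours
    (illustrative values).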

16
Measures
  • Reliability
  • As the system becomes more reliable, its mean
    time to failure should increase
  • R = MTTF / (1 + MTTF)
  • Availability
  • Can be measured so as to maximise MTBF (or MTTF?)
  • A = MTBF / (1 + MTBF)?
  • Or A = MTTF / (MTTF + MTTR)?
  • Or A = MTTF / MTBF?
  • Maintainability
  • When a system is maintainable, MTTR is minimised
  • M = 1 / (1 + MTTR) (see the sketch below)
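
A short sketch computing these measures side by side, using the illustrative values MTTF = 40 hours and MTTR = 2 hours. Note that the second and third availability candidates coincide once MTBF = MTTF + MTTR:

# Illustrative values, not data from the slides.
mttf, mttr = 40.0, 2.0
mtbf = mttf + mttr                  # MTBF = MTTF + MTTR

reliability = mttf / (1 + mttf)     # R = MTTF / (1 + MTTF)
maintainability = 1 / (1 + mttr)    # M = 1 / (1 + MTTR)

# The three candidate availability measures the slide asks about:
a1 = mtbf / (1 + mtbf)              # A = MTBF / (1 + MTBF)
a2 = mttf / (mttf + mttr)           # A = MTTF / (MTTF + MTTR)
a3 = mttf / mtbf                    # A = MTTF / MTBF (equals a2)

print(f"R = {reliability:.3f}, M = {maintainability:.3f}")
print(f"A candidates: {a1:.3f}, {a2:.3f}, {a3:.3f}")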

17
Acceptance Testing
  • Once convinced that the system meets all the
    specified requirements, the customers and users
    are asked to test it.
  • Customer leads testing and defines the test
    cases.
  • Purpose is to enable the customers and users to
    determine if the system we built really meets
    their needs and expectations.
  • Tests are written, conducted, and evaluated by
    the customers, with the developers assisting only
    by answering technical questions.

18
Types of Acceptance Tests
  • Benchmark test
  • Customer prepares a set of test cases that
    represent typical conditions under which the
    system will operate when actually installed.
  • Commonly used when the customer has special
    requirements.
  • Pilot test
  • Installs the system on an experimental basis,
    where users test all the functions of the system
    as though it were permanently installed
  • Customer often prepares a suggested list of
    functions that each user tries to incorporate in
    typical daily procedures

19
Types of Pilot Tests
  • Alpha test
  • An in-house pilot test run before the system is
    released to the customer for the real pilot test
  • Beta test
  • The customer's pilot test, run at the customer's
    site with a small subset of the customer's
    potential users

20
Parallel testing
  • May be performed if a new system is replacing an
    existing one or is part of a phased development.
  • The new system operates in parallel with the
    previous system.
  • Users gradually become accustomed to the new
    system but continue to use the old one,
    duplicating their work on both (a comparison
    sketch follows this list)
  • This gradual transition allows users to compare
    and contrast the new system with the old
  • Allows sceptical users to build their confidence
    in the new system by comparing the results
    obtained in both and verifying that the new
    system is just as effective and efficient as the
    old.
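
A sketch of the underlying comparison: run the same inputs through both systems and flag any disagreement. old_system and new_system are hypothetical stand-ins for the two implementations:

def old_system(x):
    # Hypothetical legacy calculation (e.g. 5% interest).
    return x * 1.05


def new_system(x):
    # Hypothetical replacement being validated in parallel.
    return x * 1.05


def parallel_run(inputs, tolerance=1e-9):
    """Return (input, old, new) triples where the systems disagree."""
    mismatches = []
    for x in inputs:
        old, new = old_system(x), new_system(x)
        if abs(old - new) > tolerance:
            mismatches.append((x, old, new))
    return mismatches


if __name__ == "__main__":
    diffs = parallel_run([100.0, 250.5, 0.0, 9999.99])
    print("systems agree" if not diffs else f"mismatches: {diffs}")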

21
Installation Testing
  • This test focuses on two things
  • Completeness of the installed system
  • Any functional or non-functional characteristics
    that may be affected by site conditions
  • If acceptance testing is performed on-site and
    the conditions under which the system was tested
    do not change, then the installation test may not
    be needed.
  • When the customer is satisfied with the results,
    testing is complete and the system is formally
    delivered.