1
Summary
  • Dimensions of test automation
  • Particular application / technology domain
  • Language oriented
  • Specific test function
  • Infrastructure
  • Management

2
Application / technology domain
  • Examples
  • Cactus: specific to Java Enterprise Edition
    servlets and Java application servers.
  • dbUnit: specific to database applications
  • GUI capture and replay
  • Benefits
  • Can test specific characteristics of a particular
    technology
  • Issues
  • If the technology changes for the application,
    the investment in automation is lost.

3
Language oriented
  • Examples
  • JUnit
  • TTCN
  • UML profile
  • Scripts: Python, Perl, Tcl, ... (see the sketch below)
  • Benefits
  • Often the most general purpose and flexible
  • Possible reuse of development resources (e.g.
    Java, UML)
  • Issues
  • Inherit issues from the base language
  • If it is not the development language, a parallel
    code infrastructure must be maintained.
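
As a concrete illustration of the language-oriented approach, here
is a minimal sketch using Python's built-in unittest framework (an
xUnit-family tool in the same spirit as JUnit); the add() function
under test is a hypothetical example.

# Minimal sketch of language-oriented test automation with Python's
# built-in unittest framework (xUnit-family, comparable to JUnit).
# The add() function under test is a hypothetical example.
import unittest

def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # discovers and runs the test methods above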

4
Function-oriented
  • Examples
  • state-space exploration tools
  • combinatorial tools (see the sketch below)
  • automated generation of tests
  • Benefits
  • thorough coverage of a specific aspect of
    verification or validation
  • Issues
  • focus on one methodology
  • tools are often exploratory / prototypes
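
As one illustration of a function-oriented tool, the sketch below
generates test inputs combinatorially with Python's itertools; the
parameter names and values are hypothetical.

# Minimal sketch of combinatorial test-input generation. The full
# Cartesian product is shown; pairwise tools would generate a
# smaller covering subset instead.
from itertools import product

browsers = ["firefox", "chrome"]
locales = ["en_US", "fr_CA"]
auth_modes = ["password", "token"]

for case_id, (browser, locale, auth) in enumerate(
        product(browsers, locales, auth_modes), start=1):
    print(f"T{case_id:03d}: browser={browser} locale={locale} auth={auth}")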

5
Automated Testing Infrastructure
  • Includes
  • Test execution management
  • selection
  • running of tests
  • test result reporting
  • Test suites
  • Configuration management
  • Documentation, database of tests
  • Progress tracking
  • includes result databases

6
Test Infrastructure
(Diagram: test infrastructure components - campaign information,
execution manager, test scripts, test controller, SUT, test log,
SUT log, results database, and report.)
7
Test Execution Management (1)
  • Test selection (a configuration sketch follows
    this list)
  • location of test scripts
  • selection of tests to run
  • order of test cases
  • Dynamic test selection criteria, if applicable
  • Specification of start time (now, later)
  • General setup for
  • system under test
  • connection (e.g. sockets) to remote SUT
  • turning on coverage / instrumentation tools
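
A minimal sketch of what such a selection-and-setup specification
might look like, assuming a Python-based execution manager; every
field name here is hypothetical, and a real tool would define its
own schema (often in YAML or JSON rather than code).

# Hypothetical campaign configuration covering the options listed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CampaignConfig:
    script_dir: str = "tests/scripts"      # location of test scripts
    selected_tests: List[str] = field(
        default_factory=lambda: ["T01", "T02", "T47"])  # tests to run
    order: str = "as_listed"               # or "random", "priority"
    dynamic_selection: bool = False        # result-driven selection, if used
    start_time: str = "now"                # or an ISO-8601 timestamp
    sut_host: str = "sut.example.org"      # remote SUT connection (socket)
    sut_port: int = 5000
    enable_coverage: bool = True           # coverage / instrumentation tools

config = CampaignConfig()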

8
Test Execution Management (2)
  • Should failures be retried immediately?
  • User interrupt capabilities: pause/resume
  • Additional criteria for stopping a test case
  • Example: upper bound on execution time
  • Operations between tests
  • cleanup / setup
  • delay specification
  • Criteria for stopping the test run (see the
    run-loop sketch below)
  • Threshold for number of failures / errors
  • Total execution time
  • System under test failure / shutdown
  • Failure of specific test cases
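
The sketch below shows how the retry, per-test, and run-level
stopping criteria above might fit together in a simple run loop;
run_test() and the threshold values are hypothetical placeholders.

# Hypothetical run loop with immediate retry, a per-test time bound,
# and run-level stopping criteria.
import time

MAX_FAILURES = 10          # threshold for failures / errors
MAX_RUN_SECONDS = 3600     # bound on total execution time
PER_TEST_TIMEOUT = 60      # additional per-test stopping criterion

def execute_campaign(tests, run_test, retry_failures=True):
    failures = 0
    run_start = time.monotonic()
    for test in tests:
        verdict = run_test(test, timeout=PER_TEST_TIMEOUT)
        if verdict == "fail" and retry_failures:
            verdict = run_test(test, timeout=PER_TEST_TIMEOUT)  # retry now
        if verdict in ("fail", "error"):
            failures += 1
        if failures >= MAX_FAILURES:
            return "stopped: failure threshold reached"
        if time.monotonic() - run_start > MAX_RUN_SECONDS:
            return "stopped: total execution time exceeded"
        # cleanup / setup / delay between tests would go here
    return "completed"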

9
Test Execution Management (3)
  • Test report selection
  • level of detail
  • location to store results
  • Logging specification (see the sketch below)
  • active or inactive
  • level of detail
  • location to store results
  • upper bound on log size
  • Saving of all this information
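
A minimal sketch of such a logging specification using Python's
standard logging module; the file name, size bound, and level shown
are hypothetical examples.

# Hypothetical logging specification: active/inactive, level of detail,
# storage location, and an upper bound on log size.
import logging
from logging.handlers import RotatingFileHandler

LOGGING_ACTIVE = True

logger = logging.getLogger("test_run")
if LOGGING_ACTIVE:
    handler = RotatingFileHandler(
        "test_run.log",             # location to store the log
        maxBytes=5 * 1024 * 1024,   # upper bound on log size (5 MB)
        backupCount=3)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)  # level of detail
else:
    logger.addHandler(logging.NullHandler())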

10
Dynamic Test Selection
  • During a test run, test selection is based on
    results of previous tests
  • Example
  • If test T47 passes, proceed to test T48
  • If test T47 fails, run secondary tests T47A and
    T47B.
  • May be hierarchical (see the sketch below)
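
A minimal sketch of the T47/T48 example, assuming a hypothetical
run_test() callable that returns "pass" or "fail".

# Dynamic test selection: which tests run next depends on the
# result of the previous test.
def dynamic_run(run_test):
    executed = []
    verdict = run_test("T47")
    executed.append(("T47", verdict))
    if verdict == "pass":
        executed.append(("T48", run_test("T48")))
    else:
        # secondary tests, which could in turn select further tests
        # of their own (the selection may be hierarchical)
        for secondary in ("T47A", "T47B"):
            executed.append((secondary, run_test(secondary)))
    return executed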

11
Test results reporting
  • Scoreboard: numbers of tests (see the sketch below)
  • run / retried / not run
  • in each verdict category: pass, fail, error,
    inconclusive
  • Test results log with
  • name of each test case, start time, end time
  • verdict for test
  • for test cases that do not pass: incident
    report (next slide)
  • (if applicable/available) system under test
    logs
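
A minimal sketch of a scoreboard plus a per-test results-log entry;
the field names and sample values are hypothetical.

# Scoreboard of verdict counts plus a structured results log.
from collections import Counter
from datetime import datetime

scoreboard = Counter()   # pass / fail / error / inconclusive / not_run
results_log = []

def record_result(name, verdict, start, end, incident=None, sut_log=None):
    scoreboard[verdict] += 1
    results_log.append({
        "test_case": name,
        "start_time": start.isoformat(),
        "end_time": end.isoformat(),
        "verdict": verdict,
        "incident_report": incident,   # only for non-passing tests
        "sut_log": sut_log,            # if available
    })

record_result("T47", "fail", datetime(2024, 1, 1, 9, 0),
              datetime(2024, 1, 1, 9, 2), incident="INC-0042")
print(dict(scoreboard))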

12
Incident reports
  • source: IEEE Standard 829
  • Incident identifier
  • Items involved
  • test case identifier
  • logs (for tester or SUT)
  • locale of SUT (e.g. procedure name called, etc.)
  • Description
  • input
  • expected response
  • actual response
  • time stamp
  • attempt to repeat?
  • other information: environment information,
    anomalies, etc. (a record sketch follows)
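
A minimal sketch of an incident-report record holding the IEEE
829-style fields above; the class name, field names, and sample
values are hypothetical.

# Hypothetical incident-report record.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IncidentReport:
    incident_id: str
    test_case_id: str
    logs: List[str] = field(default_factory=list)  # tester and/or SUT logs
    sut_locale: str = ""                 # e.g. procedure name called
    input_data: str = ""
    expected_response: str = ""
    actual_response: str = ""
    timestamp: str = ""
    repeat_attempted: Optional[bool] = None
    other_info: str = ""                 # environment, anomalies, etc.

report = IncidentReport(
    incident_id="INC-0042",
    test_case_id="T47",
    expected_response="HTTP 200",
    actual_response="HTTP 500",
    timestamp="2024-01-01T09:02:13",
    repeat_attempted=True)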

13
Tracking of Results
  • During a test campaign, tracking of results is
    required.
  • Most recent execution verdict for each test case
  • History of test case results during campaign
  • History of test case results during all campaigns
  • Automated updating of results tracking system
  • Example: the test execution system updates the
    database each time a test is run (see the sketch
    below).
  • Cumulative results may then be determined from
    the database, or supplied by the test execution
    system.
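
A minimal sketch of automated result tracking with SQLite (part of
Python's standard library); the table layout and column names are
hypothetical.

# Each test run appends a row; the most recent verdict per test case
# is then queried from the accumulated history.
import sqlite3

db = sqlite3.connect("results.db")
db.execute("""CREATE TABLE IF NOT EXISTS results (
                  test_case TEXT, campaign TEXT, run_time TEXT, verdict TEXT)""")

def record(test_case, campaign, run_time, verdict):
    db.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
               (test_case, campaign, run_time, verdict))
    db.commit()

def latest_verdicts():
    # most recent execution verdict for each test case
    return db.execute(
        """SELECT test_case, verdict FROM results AS r
           WHERE run_time = (SELECT MAX(run_time) FROM results
                             WHERE test_case = r.test_case)""").fetchall()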

14
Test Documentation
  • Can be stored within the test script, or
    separately in
  • document(s)
  • database
  • specialized tool / management system
  • Configuration management of documentation is
    needed.

15
Test Case Documentation
  • For each test case (a docstring sketch follows
    this list)
  • Identification
  • Name/Number (mandatory)
  • Author
  • Purpose of test
  • General description
  • Coverage of specific requirements
    (links/references)
  • Coverage of design model / code elements
  • Assumptions
  • examples: test ordering, SUT start state
  • Setup required
  • Cleanup performed after test
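
A minimal sketch of keeping this documentation inside the test
script itself (one of the storage options from the previous slide),
as a structured docstring; all identifiers and references here are
hypothetical.

# Per-test-case documentation embedded as a structured docstring.
import unittest

class TestLoginLockout(unittest.TestCase):
    def test_lockout_after_three_failures(self):
        """
        Test case:    TC-0147 (mandatory identifier)
        Author:       J. Tester
        Purpose:      Verify account lockout after repeated login failures.
        Requirements: REQ-SEC-012
        Design refs:  AuthController.lockout()
        Assumptions:  SUT starts with a clean user table; no ordering deps.
        Setup:        create test user 'alice' with a known password.
        Cleanup:      delete the test user and reset lockout counters.
        """
        ...  # test body omitted in this sketch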

16
Maintenance of Automated Tests
  • Maintaining code is expensive.
  • Automated tests have to be similarly maintained
  • When creating automated tests (and tools to
    support them), maintenance should be considered
    as a design aspect.

17
Maintenance Issues
  • Keeping changes in system functionality
    co-ordinated with the test suite
  • Modifications needed to tests as system is
    upgraded
  • Changes in data dependencies
  • Changes in data format (storage, presentation,
    internal)
  • Test suite clean-up
  • Removing tests that are no longer relevant
  • Accumulation of duplicate / redundant tests.
  • Removal or refactoring of unreliable tests.

18
Maintenance Issues
  • Documentation of tests
  • Can a future tester understand a test in order to
    maintain it?
  • Will someone other than the original test author
    understand the significance of a test failure in
    terms of assisting a designer to diagnose a
    problem?
  • Is the documentation kept up to date?
  • Test dependencies
  • If tests are dependent on each other, there is a
    risk that subsequent maintenance may not take
    this into account.

19
Maintenance Issues
  • Test suite coding / scripting style
  • Linear
  • Good: test independence
  • Bad: redundancy, hard coding
  • Shared
  • Good: reduced test case size, easier to upgrade,
    faster automation
  • Bad: dependencies, management
  • Data-driven (see the sketch below)
  • Good: adaptable
  • Bad: initial setup required
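
A minimal sketch of the data-driven style: one parameterised test
body reads its cases from a data table, so new cases are added as
data rather than code; parse_price() is a hypothetical function
under test.

# Data-driven test: the cases live in a table (could equally be a
# CSV file), separate from the test logic.
import unittest

def parse_price(text):
    return round(float(text.strip().lstrip("$")), 2)

CASES = [
    ("$10.00", 10.00),
    ("  3.5 ", 3.50),
    ("$0.99", 0.99),
]

class ParsePriceTests(unittest.TestCase):
    def test_cases_from_table(self):
        for raw, expected in CASES:
            with self.subTest(raw=raw):
                self.assertEqual(parse_price(raw), expected)

if __name__ == "__main__":
    unittest.main()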

20
Potential Automation Metrics
  • Work hours or elapsed time to do the following
    tasks
  • (average?) time to create an automated test case
  • debugging automated tests
  • setup and execute a test suite run
  • analyze the results and create problem reports
  • training time to be able to create and/or run
    automated tests
  • automation maintenance
  • Reliability
  • number of false positives for each test case
  • number of false negatives for each test case

21
Potential Automation Metrics
  • Robustness
  • number of test case execution errors for a test
    case
  • time taken to investigate such cases
  • number of test cases that fail for a single
    software defect
  • Portability
  • time / cost of moving test cases from one
    execution platform to another

22
Some final points (1)
  • Return on investment is key
  • Benefits: faster, more reliable test execution,
    better product quality
  • Costs: resources (people, time, tools)
  • Do a thorough cost-benefit analysis to be sure
    that automation results in a net gain.
  • Choose appropriate targets
  • Some types of testing are inherently more
    suitable for automation
  • Repetitive testing with predictable results.
  • Not everything is suitable for automation

23
Some final points (2)
  • Test automation requires a parallel software
    engineering process.
  • Design
  • Debugging
  • Documentation
  • Maintenance
  • Change management
  • Know what supporting tools do, what they don't
    do, and what assumptions they rely on.