Title: User-session based Testing of Web Applications
1. User-session based Testing of Web Applications
2. Two Papers
- A Scalable Approach to User-session based Testing of Web Applications through Concept Analysis
  - Uses concept analysis to reduce test suite size
- An Empirical Comparison of Test Suite Reduction Techniques for User-session-based Testing of Web Applications
  - Compares concept analysis to other test suite reduction techniques
3. Talk Outline
- Introduction
- Background
  - User-session Testing
  - Concept Analysis
- Applying Concept Analysis
- Incremental Reduced Test Suite Update
- Empirical Evaluation (Incremental vs. Batch)
- Empirical Comparison of Concept Analysis to Other Test Suite Reduction Techniques
- Conclusions
4. Characteristics of Web-based Applications
- Short time to market
- Integration of numerous technologies
- Dynamic generation of content
- May contain millions of LOC
- Extensive use
- Need for high reliability, continuous availability
- Significant interaction with users
- Changing user profiles
- Frequent small maintenance changes
5. User-session Testing
- User session
  - A collection of user requests in the form of URLs and name-value pairs
- User sessions are transformed into test cases
  - Each logged request in a user session is converted into an HTTP request that can be sent to a web server (a sketch follows below)
- Previous studies of user-session testing
  - Results showed fault detection capability and cost effectiveness
  - Will not uncover faults associated with rarely entered data
  - Effectiveness improves as the number of sessions increases (though cost increases as well)
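To make the transformation concrete, here is a minimal sketch in Java (the subject applications are Java/JSP). The SessionReplay class and its replay() method are hypothetical names; the papers' actual framework logs sessions and replays them with wget, so this only illustrates rebuilding an HTTP request from a logged URL and its name-value pairs.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical sketch: rebuild one logged request (base URL plus
// name-value pairs) as an HTTP GET and send it to the web server.
public class SessionReplay {
    static String replay(HttpClient client, String url, Map<String, String> params)
            throws Exception {
        StringBuilder query = new StringBuilder(url);
        char sep = '?';
        for (Map.Entry<String, String> p : params.entrySet()) {
            query.append(sep)
                 .append(URLEncoder.encode(p.getKey(), StandardCharsets.UTF_8))
                 .append('=')
                 .append(URLEncoder.encode(p.getValue(), StandardCharsets.UTF_8));
            sep = '&';
        }
        HttpRequest request = HttpRequest.newBuilder(URI.create(query.toString()))
                                         .GET().build();
        // The response body is saved so an oracle can later compare outputs.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```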
6. Contributions
- View user sessions as use cases
- Apply concept analysis for test suite reduction
- Perform incremental test suite update
- Automate testing framework
- Evaluate cost effectiveness
  - Test suite size
  - Program coverage
  - Fault detection
7. Concept Analysis
- Technique for clustering objects that have common discrete attributes
- Input
  - Set of objects O
  - Set of attributes A
  - Binary relation R
    - Relates objects to attributes
    - Implemented as a Boolean-valued table (sketched below)
      - A row for each object in O
      - A column for each attribute in A
      - Table entry (o, a) is true if object o has attribute a, otherwise false
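As an illustration of that input, here is a minimal Boolean-table sketch in Java; RelationTable is an invented name, not part of the papers' tooling.

```java
import java.util.List;

// Illustrative sketch of the (O, A, R) input: a Boolean table with one
// row per object and one column per attribute.
public class RelationTable {
    private final List<String> objects;    // O
    private final List<String> attributes; // A
    private final boolean[][] relation;    // R: relation[o][a] is true iff o has a

    public RelationTable(List<String> objects, List<String> attributes) {
        this.objects = objects;
        this.attributes = attributes;
        this.relation = new boolean[objects.size()][attributes.size()];
    }

    public void relate(String object, String attribute) {
        relation[objects.indexOf(object)][attributes.indexOf(attribute)] = true;
    }

    public boolean has(String object, String attribute) {
        return relation[objects.indexOf(object)][attributes.indexOf(attribute)];
    }
}
```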
8. Concept Analysis (2)
- Identifies concepts given (O, A, R)
  - A concept is a tuple (Oi, Aj)
  - Concepts form a partial order
- Output
  - Concept lattice, represented by a DAG
    - A node represents a concept
    - An edge denotes the partial ordering
  - Top element ⊤: the most general concept
    - Contains the attributes that are shared by all objects in O
  - Bottom element ⊥: the most special concept
    - Contains the objects that have all attributes in A
9. Concept Analysis for Web Testing
- Binary relation table
  - Each user session s is an object
  - Each URL u is an attribute
  - A pair (s, u) is in the relation if s requests u (see the example below)
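A small usage example of the RelationTable sketch above, instantiated for web testing; the session names and URLs are made up.

```java
import java.util.List;

public class WebRelationExample {
    public static void main(String[] args) {
        // Made-up sessions (objects) and URLs (attributes) for illustration.
        RelationTable table = new RelationTable(
                List.of("us1", "us2", "us3"),
                List.of("/login", "/browse", "/checkout"));
        table.relate("us1", "/login");
        table.relate("us1", "/browse");
        table.relate("us2", "/login");
        // (us3, /checkout) is in R because us3 requested /checkout.
        table.relate("us3", "/checkout");
        System.out.println(table.has("us3", "/checkout")); // true
    }
}
```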
10. Concept Lattice Explained
- Top node ⊤
  - Most general concept
  - Contains the URLs that are requested by all user sessions
- Bottom node ⊥
  - Most special concept
  - Contains the user sessions that request all URLs
- Examples
  - Identifying the common URLs requested by two user sessions, us3 and us4
  - Identifying the user sessions that jointly request two URLs, PL and GS
11. Concept Analysis for Test Suite Reduction
- Exploit the lattice's hierarchical use-case clustering
- Heuristic (sketched below)
  - Identify the smallest set of user sessions that covers all URLs executed by the original suite
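The reduced suite ultimately consists of the sessions at the lattice's next-to-bottom nodes (see slide 13). As a rough illustration of that effect, assuming each session is represented by its set of requested URLs, the sketch below keeps only sessions whose URL sets are not strictly contained in another session's. This approximates, but is not, the papers' lattice-based algorithm; for instance, it keeps all sessions with identical URL sets, which the lattice would merge into one node.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class LatticeHeuristicSketch {
    // Keep sessions whose URL sets are maximal (not strictly contained in
    // another session's set); these roughly correspond to the lattice's
    // next-to-bottom nodes.
    static Set<String> reduce(Map<String, Set<String>> urlsBySession) {
        Set<String> reduced = new HashSet<>();
        for (Map.Entry<String, Set<String>> candidate : urlsBySession.entrySet()) {
            boolean subsumed = false;
            for (Map.Entry<String, Set<String>> other : urlsBySession.entrySet()) {
                if (!other.getKey().equals(candidate.getKey())
                        && other.getValue().containsAll(candidate.getValue())
                        && other.getValue().size() > candidate.getValue().size()) {
                    subsumed = true; // a larger session covers everything this one does
                    break;
                }
            }
            if (!subsumed) {
                reduced.add(candidate.getKey());
            }
        }
        return reduced;
    }
}
```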
12. Incremental Test Suite Update
13. Incremental Test Suite Update (2)
- Incremental algorithm by Godin et al.
  - Creates new nodes/edges
  - Modifies existing nodes/edges
- Next-to-bottom nodes may rise up in the lattice
- Existing internal nodes never sink to the bottom
  - Test cases are not maintained for internal nodes
- The set of next-to-bottom nodes (user sessions) forms the test suite
14. Web Testing Framework
15. Empirical Evaluation
- Test suite reduction
  - Test suite size
  - Replay time
  - Oracle time
- Cost-effectiveness of incremental vs. batch concept analysis
- Program coverage
- Fault detection capabilities
16. Experimental Setup
- Bookstore Application
  - 9748 LOC
  - 385 methods
  - 11 classes
  - JSP front-end, MySQL back-end
- 123 user sessions
- 40 seeded faults
17. Test Suite Reduction
- Metrics
  - Test suite size
  - Replay time
  - Oracle time
18. Incremental vs. Batch Analysis
- Metric
  - Space costs
    - Relative sizes of the files required by the incremental and batch techniques
- Methodology
  - Batch: all 123 user sessions processed at once
  - Incremental: 100 processed first, then the remaining 23 incrementally
19. Program Coverage
- Metrics
  - Statement coverage
  - Method coverage
- Methodology
  - Instrumented Java classes using Clover
  - Restored database state before replay
  - Used wget to replay user sessions
20. Fault Detection Capability
- Metric
  - Number of faults detected
- Methodology
  - Manually seeded 40 faults into separate copies of the application
  - Replayed user sessions through
    - the correct version to generate expected output
    - the faulty versions to generate actual output
  - Diffed the expected and actual outputs (sketched below)
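A minimal sketch of that comparison step, reusing the hypothetical SessionReplay.replay() helper from earlier; the actual experiments replayed sessions with wget and diffed saved output files, so this only illustrates the oracle logic.

```java
import java.net.http.HttpClient;
import java.util.Map;

public class FaultOracle {
    // Replay one logged request against the correct build and a faulty
    // build, then compare the outputs; any difference exposes the fault.
    static boolean faultExposed(HttpClient client, String correctBase,
                                String faultyBase, String path,
                                Map<String, String> params) throws Exception {
        String expected = SessionReplay.replay(client, correctBase + path, params);
        String actual = SessionReplay.replay(client, faultyBase + path, params);
        return !expected.equals(actual);
    }
}
```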
21. Empirical Comparison of Test Suite Reduction Techniques
22. Empirical Comparison of Test Suite Reduction Techniques
- Compared 3 variations of Concept with 3 requirements-based reduction techniques
  - Random
  - Greedy
  - Harrold, Gupta, and Soffa's reduction (HGS)
- Each requirements-based reduction technique satisfies a program- or URL-coverage criterion
  - Statement, method, conditional, or URL
23. Random and Greedy Reduction
- Random
  - Selection continues until the reduced test suite satisfies some coverage criterion
- Greedy (sketched below)
  - Each subsequent test case selected provides the maximum coverage of some criterion
- Example
  - Select us6: maximum URL coverage
  - Then select us2: greatest marginal improvement toward the all-URL coverage criterion
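A minimal greedy sketch under the all-URL criterion, assuming each session maps to its set of requested URLs; names are illustrative and ties are broken arbitrarily.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GreedySketch {
    // Repeatedly pick the session that covers the most still-uncovered URLs.
    static List<String> greedyReduce(Map<String, Set<String>> urlsBySession) {
        Set<String> uncovered = new HashSet<>();
        urlsBySession.values().forEach(uncovered::addAll);
        List<String> suite = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (Map.Entry<String, Set<String>> e : urlsBySession.entrySet()) {
                Set<String> gain = new HashSet<>(e.getValue());
                gain.retainAll(uncovered);
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = e.getKey();
                }
            }
            suite.add(best);
            uncovered.removeAll(urlsBySession.get(best));
        }
        return suite;
    }
}
```

On the slide's example, this would pick us6 first (maximum URL coverage) and then us2 (largest marginal gain).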
24. HGS Reduction
- Selects a representative set from the original suite by approximating the optimal reduced set
- Requirement cardinality: the number of test cases covering that requirement
- Select the most frequently occurring test case among the requirements with the lowest cardinality (a simplified sketch follows)
- Example
  - Consider the requirement with cardinality 1: GM
    - Select us2
  - Consider the requirements with cardinality 2: PL and GB
    - Select the test case that occurs most frequently in their union
    - us6 occurs twice; us3 and us4 occur once
    - Select us6
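A simplified sketch of that selection rule, assuming a map from each requirement to the set of test cases covering it; the full HGS algorithm's tie-breaking, which escalates to higher cardinalities, is omitted here.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class HgsSketch {
    // Simplified HGS-style selection: walk requirements in order of
    // cardinality; for each uncovered requirement, pick the covering test
    // that occurs most often among uncovered requirements of the same
    // cardinality, then mark everything that test covers.
    static Set<String> reduce(Map<String, Set<String>> testsByRequirement) {
        List<String> reqs = new ArrayList<>(testsByRequirement.keySet());
        reqs.sort(Comparator.comparingInt(r -> testsByRequirement.get(r).size()));
        Set<String> suite = new LinkedHashSet<>();
        Set<String> covered = new HashSet<>();
        for (String req : reqs) {
            if (covered.contains(req)) continue;
            int card = testsByRequirement.get(req).size();
            String pick = null;
            int bestFreq = -1;
            for (String test : testsByRequirement.get(req)) {
                int freq = 0;
                for (String r : reqs) {
                    if (!covered.contains(r)
                            && testsByRequirement.get(r).size() == card
                            && testsByRequirement.get(r).contains(test)) {
                        freq++;
                    }
                }
                if (freq > bestFreq) {
                    bestFreq = freq;
                    pick = test;
                }
            }
            suite.add(pick);
            for (String r : reqs) {
                if (testsByRequirement.get(r).contains(pick)) covered.add(r);
            }
        }
        return suite;
    }
}
```

On the slide's example, this selects us2 for GM (cardinality 1) and then us6 for PL and GB (cardinality 2).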
25. Empirical Evaluation
- Test suite size
- Program coverage
- Fault detection effectiveness
- Time cost
- Space cost
26. Experimental Setup
- Bookstore application
- Course Project Manager (CPM)
  - Create grader/group accounts
  - Assign grades, create schedules for demo time
  - Send notification emails about account creation and grade postings
27. Test Suite Size
- Suite Size Hypothesis
  - Larger suites than HGS and Greedy
  - Smaller suites than Random
  - More diverse in terms of use-case representation
- Results
  - Bookstore application: HGS-S, HGS-C, GRD-S, and GRD-C created larger suites
  - CPM: larger suites than HGS and Greedy, smaller than Random
28. Test Suite Size (2)
29. Program Coverage
- Coverage Hypothesis
  - Similar coverage to the original suite
  - Less coverage than suites that satisfy program-based requirements
  - Higher URL coverage than Greedy and HGS with the URL criterion
- Results
  - Program coverage comparable to (within 2% of) the PRG_REQ techniques
  - Slightly less program coverage than the original suite and Random
  - More program coverage than the URL_REQ techniques, Greedy and HGS
30. Program Coverage (2)
31. Fault Detection Effectiveness
- Fault Detection Hypothesis
  - Greater fault detection effectiveness than requirements-based techniques with the URL criterion
  - Similar fault detection effectiveness to
    - the original suite
    - requirements-based techniques with program-based criteria
- Results
  - Random PRG_REQ: best fault detection, but a low number of faults detected per test case
  - Concept: similar fault detection to the best PRG_REQ techniques
  - Detected more faults than HGS-U
32. Fault Detection Effectiveness (2)
33. Time and Space Costs
- Costs Hypothesis
  - Less space and time than HGS, Greedy, and Random
  - Space for the concept lattice vs. space for requirement mappings
- Results
  - Costs considerably less than the PRG_REQ techniques
  - Collecting coverage information for each session is the clear bottleneck of the requirements-based approaches
34. Conclusions
- Problems with Greedy and Random reduction
  - Non-determinism
  - Generated suites with a wide range in size, coverage, and fault detection effectiveness
- Test suite reduction based on concept-analysis clustering of user sessions
  - Achieves a large reduction in test suite size
  - Saves oracle and replay time
  - Preserves program coverage
  - Preserves fault detection effectiveness
  - Chooses test cases based on use-case representation
- Incremental test suite reduction/update
  - A scalable approach to user-session-based testing of web applications
  - Necessary for web applications that undergo constant maintenance, evolution, and usage changes
35. References
- Sreedevi Sampath, Valentin Mihaylov, Amie Souter, and Lori Pollock, "A Scalable Approach to User-session based Testing of Web Applications through Concept Analysis," Automated Software Engineering Conference (ASE), September 2004.
- Sara Sprenkle, Sreedevi Sampath, Emily Gibson, Amie Souter, and Lori Pollock, "An Empirical Comparison of Test Suite Reduction Techniques for User-session-based Testing of Web Applications," International Conference on Software Maintenance (ICSM), September 2005.