Title: Testing and your Y2K Project
1. Module/Unit Tests
Dr Steven Butler, K. J. Ross & Associates Pty. Ltd.
PO Box 131, West Burleigh, 4219
Ph 07 5522 5131  Fax 07 5522 5232
admin@kjross.com.au
http://www.kjross.com.au
2. Overview
- Integration Levels
- Module / Unit Testing
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
3. Overview
- Module / Unit Testing
- Integration Levels
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
4. Module (Unit) Testing
- Module testing is applied at the level of each module of the system under test.
- Module testing is nearly always done by the developers themselves, before the module is handed over for integration with other modules.
- Usually the testing at this level is structural (white-box) testing.
- Module testing is sometimes called unit testing, the unit being the lowest level item available for testing.
5. Module Testing Questions
- Has the component interface been fully tested?
- Have local data structures been exercised at their boundaries?
- Have all independent basis paths been tested?
- Have all loops been tested appropriately?
- Have data flow paths been tested?
- Have all error handling paths been tested?
6. White Box Module Testing
- The goal is to test the code extensively, with a view to exposing and fixing bugs prior to integration with others' work.
- Finds lurking bugs that are very hard to find cost-effectively during integration or functional testing
- Tools are essential to get test coverage data
- REMEMBER: caution is required
  - you must have a specification against which to test
  - without a spec, you are simply testing that the code does what you think it does (but is it doing the RIGHT thing?)
7. Module Test Technique
- The module under test is usually treated as a white box; however, a grey box is more appropriate:
  1. Systematic test design techniques are applied to the specification of the module
  2. The module is tested
  3. If any tests fail, the module is debugged and the process is repeated from step 2
  4. Test coverage is evaluated; if the coverage criteria are met then the module is considered verified, otherwise the process is repeated from step 1
- Knowledge of the algorithms in use can help find new test cases when coverage is too low
8. Module Test Approaches
- Ad-hoc
- Checklist
- Automated
- Planned
9. Ad-hoc Approach
- Usually the task of the implementing developer
- Ad-hoc approach: random testing before the modules are handed over for integration
- Pros
  - quick (sometimes very quick)
- Cons
  - difficult to quantify how well the code was tested before integration
  - bugs slip through to later testing, where they are much more difficult (and costly) to find and fix
  - less likely that the developer will design testable code
10. Checklist Approach
- A module test checklist is the set of abbreviated test cases used by a developer to ensure thorough testing of their module.
- Called a checklist because it usually lists inputs and expected results, and has a checkbox to indicate the result of executing the test
- May not be applicable if tests are automated, although the automated test may have its origins in a test checklist
11. Checklist Approach
- Checklists are more suitable for developer testing than standard test plans because
  - they minimise the amount of work required of a developer to prepare a repeatable set of test cases
  - a developer does not require detailed test instructions, because they already know how to apply a test to their own code
  - YOU try getting a developer to write an IEEE 829 compliant test plan!
12. Example Test Checklists: Module Specific

class IntSet {
    ...
    void add(int i);       // contains(i) => exception, otherwise adds i
    void remove(int i);    // !contains(i) => exception, otherwise removes i
    bool contains(int i);  // returns true iff i is contained in the set
    ...
};
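To make the checklist idea concrete, the following minimal C++ sketch expresses checklist-style cases for this specification as executable checks; the std::set-backed implementation is an assumption for illustration, and each comment is one abbreviated checklist entry with its tick-box:

#include <cassert>
#include <set>
#include <stdexcept>

class IntSet {
    std::set<int> s;
public:
    bool contains(int i) const { return s.count(i) > 0; }
    void add(int i) {
        if (contains(i)) throw std::logic_error("duplicate");
        s.insert(i);
    }
    void remove(int i) {
        if (!contains(i)) throw std::logic_error("missing");
        s.erase(i);
    }
};

int main() {
    IntSet s;
    s.add(1); assert(s.contains(1));      // add new -> contained        [ ]
    try { s.add(1); assert(false); }      // add duplicate -> exception  [ ]
    catch (std::logic_error&) {}
    s.remove(1); assert(!s.contains(1));  // remove -> no longer there   [ ]
    try { s.remove(1); assert(false); }   // remove missing -> exception [ ]
    catch (std::logic_error&) {}
    return 0;
}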
13. Example Test Checklists: Generic 1
- Myers 1979
  - Is the number of input parameters equal to the number of arguments?
  - Do parameter and argument attributes match?
  - Do parameter and argument units systems match?
  - Is the number of arguments transmitted to called modules equal to the number of parameters?
  - Are the attributes of arguments transmitted to called modules equal to the attributes of parameters?
  - Is the units system of arguments transmitted to called modules equal to the units system of parameters?
  - Are the number, attributes and order of arguments to built-in functions correct?
  - Are any references to parameters not associated with the current point of entry?
  - Have input-only arguments been altered?
  - Are global variable definitions consistent across modules?
  - Are constants passed as arguments?
- When a module performs external I/O, additional interface tests must be conducted.
14. Example Test Checklists: Generic 2
15. Automated Approach
- Tests are automated through building harnesses and stubs
- Pros
  - tests can be repeated quickly and efficiently as minor changes are made during development
  - no need to select specific test cases to save time during regression testing
- Cons
  - high up-front cost, and may be costly to maintain if the test code is badly designed
  - automation is difficult when testing a system with spaghetti dependencies
16. Planned Approach
- Developer plans to test systematically from the beginning.
- The plan may involve testing incrementally as classes are constructed, or doing the testing once most or all of the classes are complete
- When complete, release the module and its test code for integration with the system
17. Planned Approach
- Pros
  - tests are documented and can be reviewed and reproduced
  - automated tests reduce the expense of regression tests
  - a systematic approach means many more bugs are found early in the development process, where they are least expensive to fix
- Cons
  - the amount of test code (i.e. test harness and stubs) often outweighs the real module code (i.e. lots more work)
  - changes to released software will usually require updates to the test suite
18. Planned Approach
- Test planning happens during the module's design and implementation
  - design for testability
  - a checklist of things to test is collated during design and implementation
  - a peer or team leader may review the checklist for suitability and suggest other things to test
- Ideally module tests are automated
  - reduce duplication by specifying test cases directly in code
  - important to ensure that the purpose of each test case is documented
19. Overview
- Module / Unit Testing
- Integration Levels
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
20. Integration Testing
- A partial system level test can be conducted without waiting for all the components to be available
  - thus problems can be fixed before affecting downstream development on other components
- Integration testing is also performed because some tests cannot be carried out on a fully integrated system
  - helps to focus testing on particular components and to isolate the problems that will be discovered
- Integration testing is an interim level of testing, applied between module testing and system testing
  - tests the interaction and consistency of an integrated subsystem
21. Integration Testing
- Integrate incrementally as components are assembled into larger subsystems
- Bottom-up - assembles up from the lowest level components, replacing higher level components with test harnesses to drive the units under test
- Top-down - assembles down from the highest level components, replacing lower level components with test stubs to simulate interfaces
- Inside-out - mixture of both bottom-up and top-down
- Big-bang - complete system integration in one hit
- Collaboration - big bang of all the components required to exercise a particular collaboration
- High frequency - frequent integration of small components; the aim is to always have a system that builds and (to some extent) works
22. System Architecture
23. Bottom-Up Integration Test
24. Bottom-Up Integration Test
25. Bottom-Up Integration
- Bottom-up incremental integration of classes to form the module
- Pros
  - the incremental approach simplifies the task of locating errors when a new class fails a test
  - the bottom-up approach minimises test stubs
  - good test coverage is achievable on the lowest level classes
- Cons
  - hard to get good coverage in higher level control classes
  - must carefully plan the order in which classes are completed
26. Top-Down Integration Test
27. Top-Down Integration Test
28. Inside-Out Integration Test
29. Top-Down Testing
- Top-down incremental integration of classes to form the module
- Pros
  - the incremental approach simplifies the task of locating errors when a new class fails a test
  - the top-down approach minimises test harnesses
  - good test coverage is achievable on the highest level control classes
- Cons
  - hard to get good coverage in the lowest level (work-horse) classes
  - must carefully plan the order in which classes are completed
30. Big Bang Testing
- Test the module once all classes are complete
- Pros
  - minimal overhead producing test stubs
  - a single test harness may be sufficient for all testing
  - more thorough testing than an ad-hoc approach
  - don't need to plan the order in which classes are completed
- Cons
  - hard to get good coverage of any of the classes
  - more difficult to trace the location of a defect when a test fails
  - risky approach to integration
31. Integration Scenario: XML Application
32. Overview
- Module / Unit Testing
- Integration Levels
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
33. Integration Testing
- Tests components for faults that cause inter-component failures
- Performed as a preliminary series of testing before system functional testing begins
- Reveals interoperability faults that would hamper system tests
34. Integration Testing
- Best performed in a controlled, staged manner
  - interfaces are progressively tested
  - new components and their interfaces are treated as suspect until proven stable against the growing stable core
  - low-level components such as classes with no interdependencies can be integrated in parallel
- Typically, module testing involves much component integration at the lowest levels.
35. Planning Integration
- Incremental integration requires an analysis of the architectural dependencies of the system
  - if no architectural specification is available, then integration will typically be a "big bang" (or, more likely, a "big fizzle")
- Integration planning should commence early in the life-cycle of the software
  - as integrator, plan to review the architectural design
  - create a first cut of the planned integration sequence based on the dependency analysis in the architectural design
  - use the insight gained to give feedback to the architect - this will result in an overall better (and more testable) design
36. Integration Checklist
Following is a summary of things to keep in mind when designing integration tests:
- Configuration/version control problems (interface changes)
- Missing, overlapping or conflicting functions (name space issues)
- Incorrect or inconsistent data structures used for a file or database
- Conflicting usage or view of a data structure
- Violation of data integrity of a data structure
- Calling the wrong method (typos / code error / dynamic binding)
- Client calls server and violates its preconditions
- Client calls modal server with an out-of-sequence message (e.g. MFC assert triggered)
- Call on the wrong object (e.g. incorrect index into an array of objects)
37. Integration Checklist (cont.)
- Wrong parameters, or incorrect parameter values
- Memory management and memory ownership confusion (e.g. strings gained from strdup() must be freed by the client)
- Incorrect use of OS or VM services (e.g. calling System.exit() in a Java applet)
- Use of obsolete or non-implemented services (e.g. creating a HashMap on a version 1.1 JVM; calling BSD functions on a SysV UNIX)
- Inter-component conflicts (e.g. two components write simultaneously to the same UNIX pipe, causing interleaving of data)
- Lack of resources / resource contention (e.g. lack of memory; disks not fast enough to handle contention between two I/O heavy processes)
38. Managing Integration
- Configuration Management
  - you must know the component versions which make up the software under test
  - at the end of integration, an audit should be done to ensure that the tested components match those that are released for system testing
- Test Execution Management
  - testing is a service to project management that lets them know the quality of the software
  - without good information, management cannot determine when a product is suitable for release
  - as a tester, you must provide quantitative information to management
39. Preventative Measures
- Many defects can be fixed early or prevented if the following precautions are taken
  - get testers involved early in review of architecture and design documents
  - make sure a peer review is done on modules before release to integration
  - many bugs that might slip through testing are found while explaining code to a reviewer
  - implementation knowledge is shared amongst developers
  - you always have the option to throw it away and start again if necessary
  - start integration as soon as the first modules are ready to be integrated - don't go for a big bang approach
40. Overview
- Module / Unit Testing
- Integration Levels
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Unit Test Frameworks
41. Test Harnesses
- Supporting code and data used to provide an environment for testing components of your system
- Typically used during unit/module test
- Typically created by the developer of the code
- Vary from simple user interfaces that allow tests to be conducted manually, through to fully automated execution of tests to provide repeatability when the code is modified
- Building test harnesses may account for 50% of the total code produced
  - a valuable asset, and should be controlled in the same way the code is managed
42. Test Automation Techniques
- Code driven (see the sketch below)
  - test cases are embedded directly in the source code of the test harness
  - simple to get tests up and running
  - requires little effort for small, unchanging modules
  - not easily maintainable for large modules, or modules where tests are likely to be added or changed frequently
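For illustration, a minimal code-driven sketch in C++; the calcHypotenuse function matches the specification of the harness example later in this section, and the test values are chosen for the sketch:

#include <cassert>
#include <cmath>
#include <iostream>

// Function under test (same specification as the later harness example).
double calcHypotenuse(double opposite, double adjacent) {
    if (opposite < 0 || adjacent < 0) return -1;
    return std::sqrt(opposite * opposite + adjacent * adjacent);
}

// Code-driven harness: the test cases are hard-wired into the source.
int main() {
    assert(calcHypotenuse(3, 4) == 5);    // normal case
    assert(calcHypotenuse(-3, 4) == -1);  // invalid input is rejected
    assert(calcHypotenuse(0, 4) == 4);    // boundary: zero-length side
    std::cout << "all tests passed\n";
    return 0;
}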
43. Test Automation Techniques
- Data driven
  - the test harness does not contain any test cases; instead it reads data from a file that defines the inputs and expected outputs of each test case
  - higher initial overhead in the design of the interpreter and data file formats
  - the initial overhead is quickly recouped by the ease of adding new tests
44. Data Driven Test Harness
The test harness usually conforms to the following structure:

while data file is not empty
    read inputs and outputs from data file
    call function with inputs
    if results of function match expected outputs
        register TC passed
    else
        register TC failed
    end if
end while
output summary
45. Example - Harness

/* Test harness - this does all the work */
main() {
    int count = 0;      /* count the no. of tests executed */
    int fail = 0;
    double result;
    /* open the input script */
    while ( (line = readLine(inputFile)) != EOF ) {
        if ( is_comment(line) )
            continue;
        count++;
        /* split line into testid, opposite, adjacent, expected */
        result = calcHypotenuse( opposite, adjacent );
        if ( result != expected ) {
            fail++;
            output("Test failed %s( %g, %g ) got %g expected %g\n",
                   testid, opposite, adjacent, result, expected);
        }
    }
    /* output the final stats */
    output("Tests run %d\n", count);
    output("Tests OK %d\n", count - fail);
    output("Tests failed %d\n", fail);
}

/* Calculate the hypotenuse of a right-angle triangle. */
/* nOpposite and nAdjacent must be non-negative.       */
/* Return the hypotenuse, or -1 for invalid input.     */
double calcHypotenuse( double nOpposite, double nAdjacent ) {
    if ( nOpposite < 0 || nAdjacent < 0 )
        return -1;
    return sqrt( pow(nOpposite, 2) + pow(nAdjacent, 2) );
}
46. Example - Harness Run

Sample input data:

format testid,opposite,adjacent,expected
Test1,3,4,5
Test2,3,4,6   just to try and force a failure
Test3,-3,4,-1
Test4,4,-1,-1
Test5,0,4,4

Sample output:

Test failed Test2( 3, 4 ) got 5 expected 6
Tests run 5
Tests OK 4
Tests failed 1
47. Test Stubs
- Dummy components that stand in for unfinished components during unit/module testing and integration testing
- Quite often code routines that have no or limited internal processing
  - may always return a specific value
  - request the value from the user
  - read the value from a test data file
- Simulators may be used as test stubs to mimic other interfaced systems
48. Test Stubs
- Sometimes code for a low level module is modified to provide specific control
  - e.g. a file I/O driver module may be modified to inject file I/O errors at specific times, and thus enable the error handling of I/O errors to be analysed
49. Example - Stub
- Same result for each call

enum Method { GET, POST };
struct response {
    int code;
    String content;
    String header;
};

/* static stub - commonly used since it gives predictable results */
response sendHttpRequest( String url, Method getpost, String params ) {
    response res;
    res.code = 200;   /* status OK */
    res.content = "<html><title>Hello</title><body>testing</body></html>";
    res.header = "";
    return res;
}
50. Example - Stub
- Lookup result from a predefined lookup table or file

/* get value from a global lookup table generated from the input script */
response sendHttpRequest( String url, Method getpost, String params ) {
    response res;
    /* assume the test input script included data to populate the lookup table */
    res = lookup(url);
    return res;
}
51. Example - Stub
- Conditional response (if file accessible)

/* get value from a file */
response sendHttpRequest( String url, Method getpost, String params ) {
    response res;
    /* assume a file has been opened and filehandle is accessible */
    if ( code = readLine(filehandle) )
        res.code = code;
    else
        res.code = 400;   /* fail */
    if ( content = readLine(filehandle) )
        res.content = content;
    if ( header = readLine(filehandle) )
        res.header = header;
    return res;
}
52. Example - Stub
- Error injection using the actual method
- Uses a mode (controlled by the harness) to determine whether to inject an error

/* pass through to the real method, or inject an error, depending on mode */
response sendHttpRequest( String url, Method getpost, String params ) {
    response res;
    if ( mode == normal )
        res = actual_sendHttpRequest(url, getpost, params);
    else
        res.code = 400;   /* fail */
    return res;
}
53. Coordinating Harnesses and Stubs
The data file may have fields indicating data that should be returned by stubbed functions during the test. The stubbed functions return global variables that are set by the test harness after reading the appropriate value from the test file. For example, the daysInMonth function used by checkDate is stubbed. It returns a pre-determined value from the test harness data file (shown in bold).

bool checkDate(int d, int m, int y) {
    if ( y < 0 ) return false;
    if ( m < 1 || m > 12 ) return false;
    if ( d < 1 || d > daysInMonth(m, isLeapYear(y)) ) return false;
    return true;
}

TC01,3,4,1998,30,T    normal date
TC02,29,2,1999,28,F   not a leap year
TC03,29,2,2000,29,T   leap year
TC04,29,2,2100,28,F   not a leap year
54. Coordinating Harnesses and Stubs (cont.)

int daysInMonth(int month, bool leapYear) {
    return globalDaysInMonth;
}
int isLeapYear(int y) {
    return true;   /* simple stub */
}

while (data remaining in file)
    read data line into (TCID, (d, m, y), globalDaysInMonth, Expected)
    if (checkDate(d,m,y) == Expected)
        register(TCID, pass)
    else
        register(TCID, fail)

bool checkDate(int d, int m, int y) {
    if ( y < 0 ) return false;
    if ( m < 1 || m > 12 ) return false;
    if ( d < 1 || d > daysInMonth(m, isLeapYear(y)) ) return false;
    return true;
}

TC01,3,4,1998,30,T    normal date
TC02,29,2,1999,28,F   not a leap year
TC03,29,2,2000,29,T   leap year
TC04,29,2,2100,28,F   not a leap year
55. Overview
- Module / Unit Testing
- Integration Levels
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
56. Test Oracles
- Capture and comparison of results is one key to successful software testing. Test oracles are a mechanism used to calculate the expected result of a test.
- Test oracles are commonly the tester calculating or checking the result by hand.
- If the test case output involves complex calculations, then programs may be written to compute or check the output.
- Care must be taken to ensure that the same approach is not used for developing the test oracle as was used in developing the system operation tested by the oracle, otherwise errors may remain undetected.
- If different results are obtained for a test case, the tester must first determine whether the system under test is incorrect or whether the oracle is incorrect.
57. Test Oracles
- Douglas Hoffman categorised a number of different alternatives for implementing oracles
58. Test Oracle Example
59. True Oracle
- Select points on the X axis and calculate the sine at each point selected.
- The calculation should be performed using an independent algorithm
60. Sampling Oracle
- Select specific points on the X axis where the sine result is known without calculation.
- For example, the outcome of the sine function is precisely known at 0, 90, 180, 270 and 360 degrees.
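A minimal C++ sketch of a sampling oracle, using the library sin as a stand-in for the function under test; the tolerance EPS is an assumption, needed because the known results only hold approximately in floating point:

#include <cassert>
#include <cmath>

// Sampling oracle: check only at points where the result is known exactly.
int main() {
    const double PI = std::acos(-1.0), EPS = 1e-9;
    assert(std::fabs(std::sin(0.0))              < EPS);  //   0 deg ->  0
    assert(std::fabs(std::sin(PI / 2) - 1.0)     < EPS);  //  90 deg ->  1
    assert(std::fabs(std::sin(PI))               < EPS);  // 180 deg ->  0
    assert(std::fabs(std::sin(3 * PI / 2) + 1.0) < EPS);  // 270 deg -> -1
    assert(std::fabs(std::sin(2 * PI))           < EPS);  // 360 deg ->  0
    return 0;
}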
61. Heuristic Oracle
- The tester analyses the inputs and results of the sine function to determine whether there are predictable relationships between them.
- Provide a number of test cases using the sampling oracle approach above
- The remainder of the tests would be based on consistency of inputs and results with the predicted relationships
62. Heuristic Oracle
- The sampling oracle defined tests at 0, 90, 180, 270 and 360 degrees.
- Between 0 and 90 degrees, the result of sine(X) only increases as X increases. Similarly, between 90 and 180 degrees, sine(X) always decreases as X increases, and so on for 180 to 270 and 270 to 360 degrees.
63. Heuristic Oracle
- The heuristic oracle for sine is very simple, as we need only check for increases and decreases depending on the range (see the sketch below).
- This example oracle will trap a number of different errors, e.g. dips, scale errors or discontinuity of the function.
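A minimal C++ sketch of the monotonicity check, again using the library sin as a stand-in for the function under test; the range bounds are the sampling points above:

#include <cmath>
#include <cstdio>

// Heuristic oracle: sin must be strictly increasing on (0, 90) degrees.
// Analogous loops (decreasing on 90-180, etc.) cover the other ranges.
int main() {
    const double PI = std::acos(-1.0);
    double prev = std::sin(0.0);
    int failures = 0;
    for (int deg = 1; deg <= 90; deg++) {
        double cur = std::sin(deg * PI / 180.0);
        if (cur <= prev) {   // monotonicity violated: dip or discontinuity
            std::printf("heuristic failed at %d degrees\n", deg);
            failures++;
        }
        prev = cur;
    }
    std::printf("%s\n", failures ? "FAIL" : "OK");
    return failures != 0;
}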
64. Heuristic Oracle
- Some kinds of errors may go undetected using a heuristic oracle.
- For example, rounding errors, or a constant offset added to the sine function, may go undetected.
65. Heuristic Oracle Relationships
- Look for simple, predictable relationships between inputs and results that can easily be checked.
- Most complex algorithms contain simple relationships.
- Ranges may be selected for the heuristic, but the heuristic must hold for all inputs and results within the range.
- If there are exceptions, it becomes more complicated to check whether the exceptions apply, and the advantages are lost.
- It is best to define two groups, treated separately: what we can know to expect, and what we can't determine.
- The former are tested using a sampling oracle, and the latter using a heuristic oracle.
66. Looking for Patterns
- When looking for patterns, consider reordering data.
- For instance, two equivalent sets of elements can be sorted and compared to check that they contain matching items.
- For example, where new database items are created using an ID that is the next increment, sorting the items by ID may reveal any breaks in the sequence, revealing missing items (see the sketch below).
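A minimal C++ sketch of the ID-gap check; the ID values are invented for illustration:

#include <algorithm>
#include <cstdio>
#include <vector>

// Sort records by ID, then scan for breaks in the sequence,
// revealing missing items without recording the expected contents.
int main() {
    std::vector<int> ids = {4, 1, 3, 6, 5};   // hypothetical database IDs
    std::sort(ids.begin(), ids.end());        // reordered: {1, 3, 4, 5, 6}
    for (std::size_t i = 1; i < ids.size(); i++)
        if (ids[i] != ids[i - 1] + 1)
            std::printf("gap after ID %d\n", ids[i - 1]);   // ID 2 is missing
    return 0;
}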
67. Looking for Patterns
- Blocks of information can be represented by the starting value and a count of elements.
- Variations are possible by using fixed increments between values, or by using simple patterns for changing increments.
- For example, to test packets of data transmitted over a network, packets can be randomly generated with varying lengths and starting values, but fixed increments.
- It is then simple to check the integrity of the received packet by knowing only the packet start value, length and increment parameters (see the sketch below).
- It is not necessary to record the complete packet contents as an expected result.
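A minimal C++ sketch of such a packet-integrity check; the packetOk name and parameters are assumptions for illustration:

#include <cassert>
#include <cstddef>
#include <vector>

// Verify a packet generated as start, start+inc, start+2*inc, ...
// using only the start and increment parameters plus the packet length.
bool packetOk(const std::vector<int>& packet, int start, int inc) {
    for (std::size_t i = 0; i < packet.size(); i++)
        if (packet[i] != start + static_cast<int>(i) * inc)
            return false;   // corrupted, reordered or missing element
    return true;
}

int main() {
    std::vector<int> good = {7, 9, 11, 13};
    std::vector<int> bad  = {7, 9, 13, 11};   // interleaved/reordered data
    assert(packetOk(good, 7, 2));
    assert(!packetOk(bad, 7, 2));
    return 0;
}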
68. Overview
- Module / Unit Testing
- Integration Levels
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
69. Module Testing OO Software
- Module testing OO software offers more challenges than traditional software:
  - encapsulation
  - inheritance
  - dynamic dispatch
  - modal objects
  - templates
70. Encapsulation
- Hides object state
  - difficult to establish pre-test conditions
  - difficult to determine actual results, which are expressed in terms of a state change
- There are a few techniques to help (see the sketch below)
  - name a test harness class in a friend declaration
  - provide helper functions for establishing or testing state
  - test only the exported state (i.e. that which can be determined through the class interface)
  - use built-in test (pre- and post-condition and class invariant assertions)
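A minimal C++ sketch combining two of the techniques above, a friend test-harness class and built-in assertions; the IntSet internals shown are invented for illustration:

#include <cassert>

class IntSet {
    friend class IntSetTest;   // technique: name the test harness as a friend
    int fItems[100];
    int fCount = 0;
    bool invariant() const { return fCount >= 0 && fCount <= 100; }
public:
    bool contains(int i) const {
        for (int k = 0; k < fCount; k++)
            if (fItems[k] == i) return true;
        return false;
    }
    void add(int i) {
        assert(!contains(i));   // technique: built-in precondition test
        fItems[fCount++] = i;
        assert(invariant());    // technique: built-in class invariant test
    }
};

class IntSetTest {
public:
    static void testAdd() {
        IntSet s;
        s.add(1);
        assert(s.fCount == 1);  // friendship lets the test see hidden state
    }
};

int main() { IntSetTest::testAdd(); return 0; }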
71. Inheritance
- A base class defines an interface with a contract that all derived classes must honour
  - a test of a derived class must consider the promises made by the base class
  - the state of the derived class includes all state in the classes above it in the inheritance tree
- Checklist for testing inheritance hierarchies:
  - incorrect initialisation - check that the superclass is correctly initialised in the subclass
  - inadvertent bindings - arises out of misunderstanding the name scope rules
  - missing override - copy constructor, equality, etc.
72. Inheritance Checklist (cont.)
- naked access - child writes directly to a parent's variable, violating the parent's state invariant
- incorrect subclassing - design problem
- naughty children - child method doesn't honour the parent's contract, or leaves the parent in an invalid state
- worm holes - child expands the superclass state
- spaghetti inheritance - deep inheritance (depth > 5) and top-heavy multiple inheritance
- gnarly inheritance - child restricts acceptable inputs
- weird inheritance - multiple inheritance used to achieve perverse code-sharing

Many of these problems can be detected and corrected by design and code walk-throughs.
73. Dynamic Dispatch
- Can't test the client interface to the dynamic server object by just one run through the client object code

if (account->available(amount)) {
    account->withdraw(amount);
}
else {
    // display error
}
74. Dynamic Dispatch
- Treat the dynamic dispatch like a C switch statement over each subclass, and test all branches to achieve good interface coverage

switch (account.type) {
case CHEQUE:
    if (cheque_available(account, amount)) {
        withdraw(account, amount);
    }
    else {
        // display error
    }
    break;
// etc. for other account types
}
75. Modal Objects
- The modality of an object is categorised as one of the following:
  - Non-modal - the methods of the object can be invoked in any order, regardless of object state (e.g. DateTime storage class get/set methods)
  - Quasi-modal - methods can throw an exception if called, depending on some abstraction of object state (e.g. Stack container pop/push methods are illegal when the stack is empty/full respectively)
  - Modal - method validity is directly related to object state (e.g. the close() method on a bank account is only allowed when the account attribute closed is false)
76. Testing Modal Objects
- Use specification driven white-box testing for objects with any kind of modality
- For quasi-modal and modal objects, there is more chance of a bug lurking around the exceptional cases than elsewhere

class IntSet {
    ...
    void add(int i);       // contains(i) => exception, otherwise adds i
    void remove(int i);    // !contains(i) => exception, otherwise removes i
    bool contains(int i);  // returns true iff i is contained in the set
    ...
};
77. Testing Modal Objects
- Purpose
  - To test that an attempt to add an integer to a set which already contains that item will throw an exception and not corrupt the set
  - To test that removal of an item from a set
    - removes the item
    - does not remove any other item
- What else is tested by this script?

Test sequence:
set->add(1)
set->add(2)
try {
    set->add(1)
    assert(false)   // shouldn't get here
} catch {
    assert(set->contains(1))
}
set->remove(1)      // exception?
assert(!set->contains(1))
assert(set->contains(2))
78. Templates
- Template classes or functions are never fully tested
- Whenever a new template instantiation is made, that instantiation will require testing
  - the new parameter may not implement the features required by the template, or may not implement them correctly according to the rules of the template class
  - on some occasions, the template itself has subtle bugs which are only triggered by particular (legal) parameters
- Testing a template instantiation is an integration problem (see the sketch below)
  - templates cannot be tested at all without integration with a stub parameter class
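A minimal C++ sketch of testing a template instantiation against a stub parameter class; the maxOf template is invented for illustration:

#include <cassert>

// Hypothetical template under test.
template <typename T>
T maxOf(T a, T b) { return (a < b) ? b : a; }

// Stub parameter class: implements only the operator< the template requires.
struct Stub {
    int v;
    bool operator<(const Stub& o) const { return v < o.v; }
};

int main() {
    assert(maxOf(2, 7) == 7);                 // built-in type instantiation
    assert(maxOf(Stub{2}, Stub{7}).v == 7);   // stub class instantiation
    return 0;
}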
79. Overview
- Integration Levels
- Module / Unit Testing
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
80. Coding Standards
- Ensure consistency of development
  - maintainability
  - reusability
  - testability
- Help avoid common pitfalls and faults
  - defensive programming
  - fault tolerant programming
81. Coding Standards
- For sample coding standard resources, see:
  - http://healthnet.hnet.bc.ca/catalogu/provider_registry/supplementarybin/database_design/oracle_design_and_coding.pdf
  - http://www.parasoft.com/jsp/templates/bulletproofing/Chap8_crop.pdf
82. Overview
- Integration Levels
- Module / Unit Testing
- Integration Testing
- Harnesses & Stubs
- Test Oracles
- Object-Orientation Concerns
- Coding Standards
- Unit Test Frameworks
83. Unit Test Frameworks
- Provide the test scaffolding around the code under development
- Available tools
  - Commercial tools: Jtest, Cantata, etc.
  - Extreme Programming frameworks (open source, http://c2.com/cgi/wiki?TestingFramework): JUnit, HttpUnit, DelphiUnit, CppUnit, PerlUnit, PHPUnit, etc.
84. JUnit
- Java (code-level) regression testing framework
- Written by Erich Gamma & Kent Beck
- See www.junit.org
- Open source
- Many extensions and documentation
85. Building JUnit Tests
- Define a subclass of TestCase
- Override the setUp() method to initialise objects under test
- Override the tearDown() method to release objects under test
- Define one or more testXXX() methods to exercise the objects under test
- Define a suite() factory method that creates a suite of all the testXXX() methods of the TestCase
- Define a main() method to run the TestCase
86. Case Study - Money Class

class Money {
    private int fAmount;
    private String fCurrency;

    public Money(int amount, String currency) {
        fAmount = amount;
        fCurrency = currency;
    }
    public int amount() { return fAmount; }
    public String currency() { return fCurrency; }
    public Money add(Money m) {
        return new Money(amount() + m.amount(), currency());
    }
}
87. Case Study - Test Case

JUnit test case:

public class MoneyTest extends TestCase {
    // ...
    public void testSimpleAdd() {
        Money m12CHF = new Money(12, "CHF");   // object creation
        Money m14CHF = new Money(14, "CHF");
        Money expected = new Money(26, "CHF");
        Money result = m12CHF.add(m14CHF);     // method under test
        assert(expected.equals(result));       // result verification
    }
}

Result verification: assert - check a boolean result is true
- Need an equals method in Money
88. Case Study - Test Case
- Create the test for equals first

public void testEquals() {
    Money m12CHF = new Money(12, "CHF");
    Money m14CHF = new Money(14, "CHF");
    assert(!m12CHF.equals(null));
    assertEquals(m12CHF, m12CHF);
    assertEquals(m12CHF, new Money(12, "CHF"));
    assert(!m12CHF.equals(m14CHF));
}

Result verification: assertEquals - compare two values
89. Case Study - Optimised Test Cases
- setUp & tearDown methods are available to manage objects

public class MoneyTest extends TestCase {
    private Money f12CHF;
    private Money f14CHF;

    protected void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
    }
    public void testEquals() {
        assert(!f12CHF.equals(null));
        assertEquals(f12CHF, f12CHF);
        assertEquals(f12CHF, new Money(12, "CHF"));
        assert(!f12CHF.equals(f14CHF));
    }
    public void testSimpleAdd() {
        Money expected = new Money(26, "CHF");
        Money result = f12CHF.add(f14CHF);
        assert(expected.equals(result));
    }
}
90. Case Study - Test Suite
- suite() is a static method
- Test cases are added to the suite

public static Test suite() {
    TestSuite suite = new TestSuite();
    suite.addTest(new MoneyTest("testEquals"));
    suite.addTest(new MoneyTest("testSimpleAdd"));
    return suite;
}
91. Case Study - Running Tests
- Choice of
  - textual UI
  - Swing UI
  - AWT UI

/*
 * Uncomment choice of UI
 */
public static void main(String[] args) {
    String testCaseName = MoneyTest.class.getName();
    // junit.textui.TestRunner.main(new String[] { testCaseName });
    // junit.swingui.TestRunner.main(new String[] { testCaseName });
    junit.ui.TestRunner.main(new String[] { testCaseName });
}
92. Unit Test Organisation
- Create the tests in the same package as the code under test
- For each Java package in the application, define a TestSuite class that contains all the tests for verifying the classes in that package
- Make sure the build process includes the compilation of all test suites and test cases
93. Unit Test Principles
- Code a little, test a little, code a little, test a little
- Run tests as often as possible, at least as often as you run the compiler
- Run all the tests in the system at least once per day
- Begin by writing tests for the area of code that you're most worried about breaking
- Write tests that have the highest possible return on your testing investment
- When you need to add new functionality, add the tests first
- If you find yourself debugging using println, write a test case instead
- When a bug is reported, write a test case to expose the bug
- Next time someone asks for help debugging, write a test case to expose the bug
- Don't deliver code that doesn't pass all the tests