Component Assembly - PowerPoint PPT Presentation

1
Component Assembly
  • Components are units of deployment.
  • Component instances interact with each other,
    usually mediated by one or more component
    frameworks.
  • One obvious way of assembling systems out of
    components is by way of traditional programming.
    However, the reach and viability of components
    are much increased by enabling simpler forms of
    assembly to cover most common component
    applications.

2
Visual component assembly
  • Component assembly is always about assembly of
    component instances. For example, if the
    component has been implemented using object
    technology, a component instance is normally a
    web of objects.
  • Visual assembly is one way of simplifying the
    assembly process, since it is intuitive.

3
Visual component assembly--Example
  • A JavaBeans component, known as a bean, can exhibit
  • a special look, such as a building-block icon,
  • behaviour, such as handles to connect instances to
    others, and
  • guidance, such as dedicated online help to assist
    assembly personnel.

4
Visual component assembly
  • During assembly, components are instantiated, and
    instances are connected using a uniform approach
    to connectable objects with outgoing and incoming
    interfaces.
  • Both JavaBeans and COM support general connection
    paradigms for this purpose.
  • Where required, additional behaviour is added
    using a scripting approach. A script is
    essentially a small program, usually a procedure,
    that intercepts an event on its path from source
    to sink and triggers special actions.
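As a sketch of this connection paradigm, the hypothetical Java example below wires two bean-like instances together using the standard java.beans event mechanism; the TemperatureBean and DisplayBean classes are illustrative only, not part of any real framework.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// A minimal "bean" with an outgoing interface: it fires property-change
// events that other component instances can subscribe to.
class TemperatureBean {
    private final PropertyChangeSupport support = new PropertyChangeSupport(this);
    private int temperature;

    public void addPropertyChangeListener(PropertyChangeListener l) {
        support.addPropertyChangeListener(l);
    }

    public void setTemperature(int t) {
        int old = this.temperature;
        this.temperature = t;
        support.firePropertyChange("temperature", old, t); // event leaves the source
    }
}

// A second bean with an incoming interface: it consumes the event.
class DisplayBean {
    String lastShown = "";
    void show(Object value) { lastShown = "temperature = " + value; }
}

public class AssemblyDemo {
    public static void main(String[] args) {
        TemperatureBean source = new TemperatureBean();
        DisplayBean sink = new DisplayBean();
        // The "script" connecting the two: a small listener that intercepts
        // the event on its path from source to sink.
        source.addPropertyChangeListener(e -> sink.show(e.getNewValue()));
        source.setTemperature(21);
        System.out.println(sink.lastShown); // prints "temperature = 21"
    }
}
```

A visual builder generates essentially this kind of listener glue when an assembler draws a connection between two bean icons.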

5
Component documents to supersede visual
architecture
  • Where component instances are naturally visual
    (for example provide a visual user interface),
    dedicated builders or assembly environments can
    be unified with environments for regular use.
  • With compound documents, integration of build and
    use environments is straightforward and natural:
  • Documents are applications, and
  • document editing is component assembly.

6
Component documents to supersede visual
architecture
  • Seamless integration of build and use
    environments, especially in the context of
    compound documents, forms the strongest and most
    productive case yet for RAD.
  • Production quality components, prototype
    components, and throwaway use-once solutions can
    be freely combined.

7
Component beyond GUI
  • Most component software approaches proposed, and
    all that have gained a viable market share, have
    addressed client-side front-end or stand-alone
    interactive applications. The demanding nature of
    modern graphical user interfaces, combined with
    the relative regularity of user interfaces, makes
    reusable components particularly valuable assets.

8
Component beyond GUI
  • Component-based reuse also exists in other areas
    of computing, in particular, server-based
    solutions.
  • However, development of component technology for
    servers and other non-interactive systems is
    lagging well behind that for interactive systems,
    and in particular that for compound document and
    web-browsing systems.

9
Component beyond GUI
  • A recent example of a system for server-based
    components is Java Servlets: components that are
    designed to operate on a server, but which can be
    assembled visually.

10
Managed and self-guided component assembly
  • There are two approaches to dynamic assembly.
  • Managed assembly rests on an automated assembly
    component that implements the policies that
    govern the dynamic assembly of instances.
  • An example would be a system that used a rule
    base to synthesize forms according to the current
    situation.

11
Managed and self-guided component assembly
  • Self-guided assembly is similar, but uses rules
    that are carried by the component instances
    themselves.
  • For example, a component instance could form a
    mobile agent, an entity that migrates from
    server to server and aggregates other component
    instances to assemble functionality based on the
    findings at various servers.

12
Component evolution
  • Components will normally undergo regular product
    evolution.
  • Installation of new versions may cause version
    conflicts between the new and older systems.
  • Version management techniques should be engaged
    to solve these conflicts.

13
What is Testing
  • Errors are part of every development and not a
    criticism of an individual.
  • Testing is the process of executing a program
    with the intent of finding an error.
  • Testing cannot show the absence of errors; it can
    only show that errors are present.
  • It is also important to understand that Testing
    and Debugging are different processes. Debugging
    is concerned with locating and correcting the
    errors.

14
Two basic types of testing
  • There are two kinds of testing: Defect Testing
    and Statistical Testing.
  • In Defect Testing, tests are designed to reveal
    the presence of defects in the system.
  • In Statistical Testing, tests are designed to
    reflect the frequency of actual user inputs, so
    that after running the tests an estimate of the
    operational reliability of the system can be made.

15
Two basic types of testing
  • Statistical Testing is the area of Software
    Reliability. In this course, we will focus on
    Defect Testing.

16
Testing objectives and test cases
  • Each test case will have a description of the
    objective of the test, the input test data needed
    to conduct the test and a description of the
    expected result.
  • The objective of testing is to design test cases
    that have the highest likelihood of finding the
    most errors with the minimum amount of time and
    effort.

17
Testing objectives and test cases
  • A good test case has a high probability of
    finding undiscovered errors.
  • A successful test is the one which reveals an
    undiscovered error.
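A test case of this form can be written down directly. The sketch below uses a hypothetical unit (an integer square root by linear search) and plain Java; each case records an objective, the input data, and the expected result, and the driver reports PASS or FAIL.

```java
// Hypothetical unit under test: integer square root by simple search.
class MathUnit {
    static int isqrt(int n) {
        int r = 0;
        while ((r + 1) * (r + 1) <= n) r++;
        return r;
    }
}

public class TestCaseDemo {
    // A test case: objective, input test data, and expected result.
    static class TestCase {
        final String objective; final int input; final int expected;
        TestCase(String o, int i, int e) { objective = o; input = i; expected = e; }
    }

    public static void main(String[] args) {
        TestCase[] cases = {
            new TestCase("typical value", 16, 4),
            new TestCase("non-square value", 17, 4),
            new TestCase("boundary: zero", 0, 0),
        };
        for (TestCase tc : cases) {
            int actual = MathUnit.isqrt(tc.input);
            System.out.println(tc.objective + ": " +
                (actual == tc.expected ? "PASS" : "FAIL (got " + actual + ")"));
        }
    }
}
```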

18
Testing principles
  • All tests should be traceable to customer
    requirements
  • Tests should be planned long before testing begins
  • Exhaustive testing is not possible
  • To be most effective, system testing should be
    conducted by an independent third party

19
Testing Levels
20
Unit Testing
  • Testing each module individually, assuring that
    it functions properly as a unit.
  • Unit testing intends to find bugs in the logic,
    data, and algorithms of individual modules.

21
Integration Testing
  • Modules are integrated, often as sub-systems, to
    form the complete software package.
  • Integration testing intends to find bugs in
    interfaces between modules.

22
Validation Testing
  • This is also often referred to as System Testing.
    It aims to assure that the software meets all
    functional and performance requirements.

23
Acceptance Testing
  • It determines whether software meets customer
    requirements.
  • It can be viewed as an additional level of System
    Testing.

24
Regression Testing
  • As part of validation testing, regression testing
    is performed to determine if the software still
    meets its requirements in light of changes to the
    software.

25
Testing of newly developed components
  • First, focus on unit testing
  • Then, on integration testing if the component
    comprises a number of sub-components or units
  • At the final stage, system testing should be
    conducted on the component as a whole system.

26
Testing of component-based software systems
  • Focus on integration testing between components
  • Then, focus on system testing

27
Typical testing process
28
Test Methods
  • White box testing: a method of testing in which
    knowledge of the software's internal design is
    used to develop tests. Also called structural
    testing or glass box testing.
  • Black box testing: no knowledge of the software's
    design is used, and tests are based strictly on
    requirements and functionality. Also called
    functional testing.
  • 'Big Bang' or non-incremental integration
  • Incremental integration, which can be Top-Down or
    Bottom-Up

29
General suggestions
  • Regardless of the strategy adopted, the testing
    team should identify critical modules, i.e. those
    that address several requirements, exercise a
    high level of control, have a complex structure,
    or have definite performance requirements
  • In practice, many projects take into account the
    need to utilize resources as effectively as
    possible, and so often combine the top-down
    strategy with the bottom-up. Often the overall
    system is built top-down, whereas individual
    sub-systems may be built bottom-up.

30
Incremental Testing
  • In this method, components being tested are added
    to the test sequence in a stepwise incremental
    manner.

31
Incremental Testing
32
Whitebox testing
  • Applied at the unit testing stage
  • Needs understanding of the structure and logic of
    the unit being tested
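For instance, a white-box tester who can see the decision points inside the hypothetical classify method below would choose one input per branch, so that every branch of the unit's internal logic is exercised.

```java
// Hypothetical unit under test: classify(n) contains two decision points,
// so structural (white-box) testing aims to exercise every branch.
class Classifier {
    static String classify(int n) {
        if (n < 0) return "negative";
        if (n == 0) return "zero";
        return "positive";
    }
}

public class WhiteBoxDemo {
    public static void main(String[] args) {
        // One test input per branch of the unit's logic.
        System.out.println(Classifier.classify(-5)); // first branch taken
        System.out.println(Classifier.classify(0));  // second branch taken
        System.out.println(Classifier.classify(7));  // fall-through case
    }
}
```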

33
Whitebox testing
34
Blackbox testing
  • Applied at the integration testing stage of a
    testing process
  • An approach to testing in which the program is
    considered as a black box
  • The program test cases are based on the system
    specification
  • The focus is on consistent interfacing

35
Blackbox testing
36
Top-Down Integration testing
  • Modules are integrated by moving downward through
    the control hierarchy beginning with the main
    control module.
  • Once the main control module has been tested, it
    is used as a Test Driver,
  • and Test Stubs of subordinate modules are
    replaced one at a time with actual modules.
  • This can be done in a depth-first or
    breadth-first manner.

37
Top-Down Integration testing
  • Tests are conducted as each module is integrated.

38
Test Harness and Test Stub
  • A Test Harness (Test Driver) is a program that
    accepts test case data, passes this to the module
    under test, collects the outputs from the tested
    module and prints (or records) these results.
  • If the tested module calls another module, this
    module is replaced by a Stub, which has the same
    interface as the replaced module but which
    returns a known (or controlled) result.
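The driver/stub arrangement can be sketched as follows. Converter, RateService, and the stub are hypothetical illustrations, assuming the module under test calls its subordinate module through an interface.

```java
// Interface of a subordinate module that the module under test calls.
interface RateService {
    double rateFor(String currency);
}

// Module under test: converts an amount using the subordinate module.
class Converter {
    private final RateService rates;
    Converter(RateService rates) { this.rates = rates; }
    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

// Stub: same interface as the real RateService, but it returns a
// known (controlled) result instead of doing real work.
class RateServiceStub implements RateService {
    public double rateFor(String currency) { return 2.0; }
}

// Test harness (driver): passes test data to the module under test,
// collects the output, and records the result.
public class HarnessDemo {
    public static void main(String[] args) {
        Converter underTest = new Converter(new RateServiceStub());
        double result = underTest.convert(10.0, "EUR");
        System.out.println("convert(10.0, \"EUR\") = " + result +
            (result == 20.0 ? " [PASS]" : " [FAIL]"));
    }
}
```

Because the stub's result is controlled, any failure observed by the harness must lie in the module under test, not in its subordinate.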

39
Test Harness and Test Stub
  • Both Test Harnesses and Stubs represent an
    overhead - extra software that must be written
    but which is generally not deliverable.

40
Advantage of Top-Down Integration testing
  • Verifies major control or decision points early
    in the test process.
  • A depth-first approach sometimes has the
    advantage that a complete path of functionality
    can be tested and demonstrated. E.g., an online
    transaction to request a cheque book, which can
    be useful to illustrate progress.

41
Disadvantage of Top-Down Integration testing
  • Often the lower-level modules are the ones which
    acquire data and then pass it up to the
    higher-level modules. While stubs are still in
    place, no significant data can pass upwards, so
    the tester is left with two choices
  • delay testing until the stubs are replaced, but
    care is needed because this can cause us to lose
    the correspondence between specific tests and the
    incorporation of specific modules

42
Disadvantage of Top-Down Integration testing
  • develop stubs with restricted functionality of
    the actual module, but care is needed because
    extra effort is required to construct them, and
    they can all too easily become too complex and
    swallow up resources.

43
Bottom-Up Integration Testing
  • Testing begins at the lowest level of the program
    structure and works upward.
  • Low-level modules are combined into clusters that
    perform a specific software function.
  • A Test Harness is written to coordinate input and
    output.
  • The cluster is tested and then the Harness is
    removed and clusters are combined.

44
Bottom-Up Integration Testing
45
Advantage of Bottom-Up Integration Testing
  • stubs are not required
  • easier test case design

46
Disadvantage of Bottom-Up Integration Testing
  • the program does not exist as a whole until all
    modules are integrated

47
Typical methods for different testing levels
  • Unit testing----White box
  • Integration testing----Black box, top-down,
    bottom-up
  • Validation testing----Black box

48
Object-oriented system testing
  • Object-oriented systems are less closely coupled;
    objects are not necessarily integrated into
    sub-systems
  • Cluster testing. Test a group of cooperating
    objects
  • Thread testing. Test a processing thread as it
    weaves from object to object.

49
Thread testing
  • Suitable for real-time and object-oriented
    systems
  • Based on testing an operation which involves a
    sequence of processing steps which thread their
    way through the system
  • Start with single event threads then go on to
    multiple event threads
  • Complete thread testing is impossible because of
    the large number of event combinations

50
An Example of Multiple Thread Testing
51
Possible Thread Testing
52
Multiple Thread Testing
53
Stress testing
  • Exercises the system beyond its maximum design
    load. Stressing the system often causes defects
    to come to light
  • Stress testing also tests failure behaviour:
    systems should not fail catastrophically, and
    stress testing checks for unacceptable loss of
    service or data
  • Particularly relevant to distributed systems,
    which can exhibit severe degradation as a network
    becomes overloaded

54
Back-to-back testing
  • Present the same tests to different versions of
    the system and compare outputs. Differing outputs
    imply potential problems
  • Reduces the costs of examining test results.
    Automatic comparison of outputs.
  • Possible when a prototype is available or with
    regression testing of a new system version
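A minimal sketch of the idea, assuming two hypothetical versions of the same sorting routine: the same test inputs are fed to both versions and the outputs are compared automatically, so a human need only look at cases that differ.

```java
import java.util.Arrays;

// Version 1 of the routine, e.g. the trusted older release.
class SorterV1 {
    static int[] sort(int[] a) {
        int[] b = a.clone();
        Arrays.sort(b);
        return b;
    }
}

// Version 2, e.g. a rewritten implementation under test (insertion sort).
class SorterV2 {
    static int[] sort(int[] a) {
        int[] b = a.clone();
        for (int i = 1; i < b.length; i++) {
            int key = b[i], j = i - 1;
            while (j >= 0 && b[j] > key) { b[j + 1] = b[j]; j--; }
            b[j + 1] = key;
        }
        return b;
    }
}

public class BackToBackDemo {
    public static void main(String[] args) {
        int[][] tests = { {3, 1, 2}, {}, {5, 5, 1}, {-1, 0} };
        for (int[] t : tests) {
            int[] out1 = SorterV1.sort(t);
            int[] out2 = SorterV2.sort(t);
            // Differing outputs imply a potential problem in one version.
            System.out.println(Arrays.toString(t) + " -> " +
                (Arrays.equals(out1, out2) ? "MATCH" : "DIFFER"));
        }
    }
}
```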

55
Back-to-back testing
56
Test Documentation
  • Test documentation includes all test program
    documents from overall test plan through the
    final test report.
  • Test documentation is a parallel effort with
    development. The related process is known as the
    V-model of development.

57
V-Model
58
Contents of test document
  • Test Plan
  • Test Cases
  • Test Procedures
  • Test Report

59
Test Tools
  • Automated and manual test tools are available to
    assist in various test activities.
  • A summary according to application domain:
  • Test data provision
  • Data recording
  • Path analysers

60
Test data provision
  • Commercially available software packages to help
    in the creation and insertion of test data.
  • Including
  • Test data generators
  • Simulators
  • Stimulators

61
Data recording
  • Event recorders.
  • Large-scale event recorders often are used to
    record long or complicated interactive test data
    for future repeats of test or for detailed test
    analysis.

62
Data recording
  • General/specific purpose data reduction packages.
  • Large volumes of data are often sorted and
    categorised so that individual analyses can be
    made of particular areas of interest.
  • Some powerful packages are commercially
    available, providing computational and graphic
    capabilities that can be of great assistance in
    the analysis of test results and trend
    determination.

63
Interface testing
64
Interface testing
  • Takes place when modules or sub-systems are
    integrated to create larger systems
  • Objectives are to detect faults due to interface
    errors or invalid assumptions about interfaces
  • Particularly important for object-oriented
    development as objects are defined by their
    interfaces

65
Typical interfaces types
  • Parameter interfaces
  • Data passed from one procedure to another
  • Shared memory interfaces
  • Block of memory is shared between procedures
  • Procedural interfaces
  • Sub-system encapsulates a set of procedures to be
    called by other sub-systems

66
Typical interfaces types
  • Message passing interfaces
  • Sub-systems or objects request services from
    other sub-systems or objects

67
Popular Interface errors
  • Interface misuse
  • A calling component calls another component and
    makes an error in its use of its interface, e.g.
    parameters in the wrong order
  • Interface misunderstanding
  • A calling component embeds assumptions about the
    behaviour of the called component which are
    incorrect

68
Popular Interface errors
  • Timing errors
  • The called and the calling component operate at
    different speeds and out-of-date information is
    accessed

69
Interface testing guidelines
  • Design tests so that parameters to a called
    procedure are at the extreme ends of their ranges
  • Always test pointer parameters with null pointers
  • Design tests which cause the component to fail
  • Use stress testing in message passing systems
  • In shared memory systems, vary the order in which
    components are activated
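The first three guidelines can be illustrated with a hypothetical Buffer component: its size parameter is driven to the extreme ends of its range, a reference parameter is tested with null, and some tests are deliberately designed to make the component fail.

```java
// Hypothetical component with a simple parameter interface.
class Buffer {
    private final byte[] data;
    Buffer(int size) {
        if (size < 0) throw new IllegalArgumentException("negative size");
        data = new byte[size];
    }
    void copyFrom(byte[] src) {
        if (src == null) throw new IllegalArgumentException("null source");
        System.arraycopy(src, 0, data, 0, Math.min(src.length, data.length));
    }
}

public class InterfaceTestDemo {
    public static void main(String[] args) {
        // Parameters at the extreme ends of their ranges.
        new Buffer(0);                        // smallest legal size: should succeed
        try {
            new Buffer(-1);                   // a test designed to cause failure
            System.out.println("size -1: NOT REJECTED");
        } catch (IllegalArgumentException e) {
            System.out.println("size -1: rejected as expected");
        }
        // Pointer (reference) parameters tested with null.
        try {
            new Buffer(4).copyFrom(null);
            System.out.println("null source: NOT REJECTED");
        } catch (IllegalArgumentException e) {
            System.out.println("null source: rejected as expected");
        }
    }
}
```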

70
System Testing
  • Ensures that the system performs as described in
    the requirements document.
  • System tests are often derived in close
    consultation with the customer, and often
    performed in the presence of the customer
  • Acceptance Testing: an additional level of
    system testing. The user and their agent are
    allowed to perform a set of their own tests on
    the system, separate from the developer.

71
System Testing
  • Installation Tests ensure that the software can
    be installed and run smoothly at the user/agent's
    site.
  • Alpha Testing is usually conducted at the
    developer's site, often by an end-user. The tests
    are undertaken in a controlled environment.
  • Beta Testing is undertaken at selected
    customers' sites by end-users. This is
    effectively a 'live' system, and the developer has
    no control over what happens.