Chapter 16 Test Economics - PowerPoint PPT Presentation
Slides: 51
Provided by: WinC153

Transcript and Presenter's Notes
1
Chapter 16 - Test Economics
2
  • Profitability Factors
  • What is Meant by Test Economics?
  • Profitability is the difference between the
    revenues generated by a company's products and
    the costs associated with developing,
    manufacturing, and selling them.
  • The test engineer has direct or indirect
    influence over the revenues, the development
    costs, and the manufacturing costs of
    semiconductor products.
  • We will examine direct testing costs and their
    obvious effect on overall manufacturing expenses,
    but we will also examine other, less obvious
    aspects of test economics. For example, the test
    engineer's debugging skills have a direct and
    profound effect on time to market and yield
    enhancement.

3
  • Profitability Factors
  • Time to Market
  • Time to market is a critical factor in product
    profitability.
  • Late product introduction can result in missed
    opportunities as a competitor's product gains
    market share.
  • Conversely, early product introduction leads to
    higher selling prices because of limited
    competition.
  • Therefore, time to market is perhaps as important
    to a company's profitability as direct
    manufacturing costs.
  • The test engineer certainly has influence (good
    or bad) over time to market, and therefore has
    influence over the selling price and total
    revenues.

4
  • Profitability Factors
  • Testing Costs
  • The test engineer has a very obvious connection
    to manufacturing costs.
  • Many years ago, production testing represented a
    fairly small portion of the cost of manufacturing
    a semiconductor device.
  • Today, testing often represents a painfully large
    percentage of the total production cost. This is
    especially true of mixed-signal semiconductor
    devices.
  • Shrinking IC geometries and ever-increasing
    performance requirements are the primary driving
    forces behind this trend.

5
  • Profitability Factors
  • Testing Costs
  • Every time fabrication geometries shrink by a
    factor of two, we can build approximately four
    times as many circuits onto a given area of
    silicon.
  • The fabrication costs do not quadruple and in
    fact may decline over time. However, the
    complexity of the circuit and the functions it
    performs certainly do quadruple.
  • Circuit complexity is directly related to testing
    costs. If a circuit has to perform four times as
    many functions as a previous generation device,
    then its testing time tends to quadruple as well.
  • Since the cost of printing four times as much
    circuitry does not quadruple, the testing costs
    grow faster than the fabrication costs, making
    test an ever-larger percentage of the total
    manufacturing cost.

6
  • Profitability Factors
  • Testing Costs
  • Increasing circuit complexity and performance
    requirements give rise to another major
    mixed-signal testing problem. As more circuits
    are added to the same die, crosstalk problems
    grow quadratically.
  • Four mixed-signal circuit blocks have at least
    2 × (3 + 2 + 1) = 12 possible block-to-block
    interaction mechanisms.
  • Eight mixed-signal circuit blocks have at least
    2 × (7 + 6 + 5 + 4 + 3 + 2 + 1) = 56 possible
    block-to-block interactions.
  • Marginal device performance requires longer test
    times (and thus higher testing costs) because of
    increased averaging and accuracy requirements.
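The interaction counts above follow from choosing 2 of the n blocks and counting both directions of coupling; a short sketch:

```python
from math import comb

def interaction_mechanisms(n_blocks: int) -> int:
    """Count directed block-to-block interactions: each of the
    C(n, 2) block pairs can interact in two directions."""
    return 2 * comb(n_blocks, 2)

print(interaction_mechanisms(4))  # 2*(3+2+1) = 12
print(interaction_mechanisms(8))  # 2*(7+6+5+4+3+2+1) = 56
```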

7
  • Profitability Factors
  • Testing Costs
  • More demanding DUT performance requirements also
    drive up the cost of testing, while reducing
    yield.
  • A 100 dB signal-to-noise ratio (SNR) test is far
    more costly to implement than a 60 dB SNR test,
    simply because far more averaging is required to
    achieve repeatability at the 100 dB level than at
    60 dB.
  • Also, high-performance devices that push the
    capability of the fabrication process often have
    poor design margin.
  • The absolute accuracy of each marginal
    measurement must also be very high. High levels
    of accuracy are usually more expensive to attain
    than low levels of accuracy.
  • Finally, the cost of test equipment is directly
    proportional to performance requirements
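The averaging cost can be seen numerically: the repeatability (standard deviation) of an averaged measurement improves only as the square root of the number of averages, so tighter repeatability targets demand disproportionately more test time. A sketch with assumed noise values (the numbers are illustrative, not from the text):

```python
import random
import statistics

random.seed(0)

def averaged_reading(true_value: float, noise_std: float, n_avg: int) -> float:
    """Average n_avg noisy readings of true_value (simulated measurement)."""
    return sum(random.gauss(true_value, noise_std) for _ in range(n_avg)) / n_avg

# Spread of repeated measurements shrinks roughly as 1/sqrt(n_avg),
# so each extra decade of repeatability costs ~100x the test time.
for n_avg in (1, 16, 256):
    results = [averaged_reading(1.0, 0.01, n_avg) for _ in range(200)]
    print(n_avg, statistics.stdev(results))
```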

8
  • Profitability Factors
  • Yield Enhancement
  • Yield enhancement is another important area of
    test economics. A good test engineer or product
    engineer can play a critical role in increasing
    the overall yield of a product.
  • Production yields can be enhanced by identifying
    and resolving problems in device design,
    fabrication process, test hardware, or test
    software.
  • The entire engineering team is responsible for
    the yield enhancement task. The yield
    enhancement team should include members from test
    engineering, product engineering, process
    engineering, systems engineering, and design
    engineering.

9
  • Direct Testing Costs
  • Cost Models
  • Determining the exact cost of production testing
    is not as simple as it might seem. A very
    simplistic model could be
  • Testing Cost per Device = Test Time × Test Cost
    per Second
  • There are many factors that affect cost of test,
    including tester depreciation, handler index
    time, tester down time, tester idle time, etc.
    These complex factors are hidden in the Test Cost
    per Second value. The cost models can therefore
    become fairly complex in a constantly changing
    production environment.
  • There are many cases in which the correct
    decision from a total cost of test viewpoint is
    not immediately obvious. For example, one might
    assume that the lowest cost of test is always
    achieved by the tester with the lowest purchase
    price.
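A minimal sketch of such a simplistic model, assuming the cost per device is simply test time multiplied by test cost per second (the 5 s and 6 cents/s figures come from the worked problem later in this chapter):

```python
def simple_test_cost(test_time_s: float, cost_per_second: float) -> float:
    """Simplistic model: cost per device = test time * test cost per second.
    The complex factors (depreciation, index time, down time, idle time)
    are all hidden inside cost_per_second."""
    return test_time_s * cost_per_second

# 5 s of test time at 6 cents/s -> 30 cents per device
print(simple_test_cost(5.0, 6.0))  # 30.0
```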

10
  • Direct Testing Costs
  • Cost of Test Versus Cost of Tester
  • A tester's throughput and yield are often more
    important to profitability than its purchase
    price. Tester throughput is defined as the
    average number of passing DUTs that can be tested
    in a given amount of time.
  • Very expensive testers can often be justified
    because they allow a much higher throughput than
    slower, less expensive testers.
  • A less efficient hardware architecture or
    operating system can slow down each of the many
    tests in a typical mixed signal program, perhaps
    doubling or tripling the test time.

11
  • Direct Testing Costs
  • Cost of Test Versus Cost of Tester
  • It might seem that a tester costing one tenth the
    price of a faster tester would be justified, even
    at twice the test time of the expensive tester.
    However, the test cost equation uses test cost
    per second rather than tester depreciation
    expense per second.
  • Test cost per second is much more complex than
    the depreciation of the tester's purchase price
    over its lifetime.
  • The total cost per second of test time certainly
    includes a large amount of tester depreciation,
    but it also includes the cost of factory
    facilities, handlers and probers, factory
    personnel, equipment maintenance, electricity,
    and general corporate expenses.
  • These other expenses are independent of the cost
    of the tester itself.

12
  • Direct Testing Costs
  • Cost of Test Versus Cost of Tester
  • If we examine the total cost per second associated
    with a fully utilized production tester, we can
    begin to understand how tester throughput is far
    more important than tester purchase price.

13
  • Direct Testing Costs
  • Cost of Test Versus Cost of Tester
  • Now consider a mixed-signal tester costing one
    third the price of the expensive tester. Since
    tester depreciation expenses are directly
    proportional to the tester's purchase price, they
    will drop to one third of their original value.
    However, since the other expenses are independent
    of tester purchase price, the total test cost per
    second drops far less dramatically.
  • In this example, the testing cost per second of
    the inexpensive tester is actually only 20% lower
    than that of the expensive tester. To achieve
    equivalent testing costs, the test time of the
    lower-priced tester must be no greater than
    1/0.8, or 125%, of the test time of the expensive
    tester. Clearly, then, a low-cost tester must
    not only be inexpensive, but it must have almost
    the same throughput as its expensive counterpart.
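The trade-off can be checked numerically. The depreciation fraction below (30% of the total cost per second) is an assumption inferred from the 20% figure in the text, not a value stated explicitly:

```python
def cheap_tester_cost_ratio(dep_fraction: float, price_ratio: float) -> float:
    """Ratio of cheap-tester cost/s to expensive-tester cost/s.

    dep_fraction: fraction of total cost/s that is tester depreciation
                  (assumed 0.30 here, inferred from the text's 20% figure).
    price_ratio:  cheap tester price / expensive tester price.
    Non-depreciation costs (facilities, handlers, personnel) are unchanged.
    """
    return dep_fraction * price_ratio + (1.0 - dep_fraction)

ratio = cheap_tester_cost_ratio(dep_fraction=0.30, price_ratio=1 / 3)
print(round(ratio, 2))     # 0.8  -> cost per second only 20% lower
print(round(1 / ratio, 3)) # 1.25 -> break-even test time is 125%
```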

14
  • Direct Testing Costs
  • Cost of Test Versus Cost of Tester
  • An even more subtle issue relates to test yield.
    If the number of passing DUTs is reduced by a
    lower quality tester, then throughput is reduced.
    Thus, an inexpensive tester must also maintain
    the same test yield as a higher cost tester to
    achieve equivalent cost effectiveness.
  • Another aspect of tester cost is flexibility and
    ease of use. This affects time to market,
    because an inexpensive tester will typically have
    an inexpensive, less developed operating system
    that may extend test development time. For all
    of these reasons, mixed-signal testers have grown
    in price over the years to allow fast, accurate
    measurements with very high throughput.

15
  • Direct Testing Costs
  • Throughput
  • There are many factors that affect the average
    test time per DUT. These factors include handler
    or prober index time, multi-head versus single
    head testing, multi-site versus single-site
    testing, equipment down time, and tester idle
    time.
  • Of course, the most important factor in
    throughput is the test time for a passing DUT.
    This time tends to dominate the average test time
    per DUT. Test time reduction is one of the main
    contributions a test or product engineer can make
    towards increased profitability.
  • Coherent DSP-based testing is one of the common
    test time reduction techniques found on all
    mixed-signal testers. Other obvious examples of
    test time reduction are the elimination of
    unnecessary settling time and the use of
    simultaneous testing of multiple circuit blocks.

16
  • Direct Testing Costs
  • Throughput
  • Handler or prober index time is the amount of
    time it takes to remove one DUT from the tester
    and replace it with the next DUT. Index times
    for probers are typically on the order of 100 ms,
    but handler index times can be very long.
    Robotic (pick-and-place) handlers are especially
    slow (on the order of 1 second), since they are designed for very
    flexible operation.
  • Because index times can be so lengthy, testers
    are often equipped with two test heads. The
    heads are not designed for simultaneous use.
    Rather, the mainframe tester instruments are
    multiplexed between the heads so that testing can
    proceed on one head while the handler or prober
    is indexing on the other head. In effect, the
    index time of the handler or prober is hidden.
    Of course, the extra test head adds expense to
    the tester, but the added cost is justified by
    the increased throughput.
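A minimal timing model illustrates how dual-head testing hides index time (an illustrative sketch, not a formula from the text):

```python
def cycle_time_per_dut(test_time: float, index_time: float, heads: int) -> float:
    """Average seconds per DUT for a simple single- vs dual-head model."""
    if heads == 1:
        return test_time + index_time  # index time adds directly
    # Dual head: instruments test on one head while the handler/prober
    # indexes on the other, so the slower activity sets the pace.
    return max(test_time, index_time)

print(cycle_time_per_dut(5.0, 1.0, heads=1))  # 6.0 s per DUT
print(cycle_time_per_dut(5.0, 1.0, heads=2))  # 5.0 s: index time hidden
```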

17
  • Direct Testing Costs
  • Throughput
  • Another technique that can increase throughput is
    multi-site testing. A tester having multi-site
    capabilities can test multiple DUTs at the same
    time. Each DUT tested in parallel is called a
    site. The advantage of multi-site testing is
    obvious. If four devices can be tested at the
    same time, then a testers throughput goes up by
    a factor of four. This assumes a fully parallel
    test capability, of course. Some testers are
    only capable of semi-parallel testing, in which
    portions of the test program are performed in
    parallel while other portions are performed
    serially, one DUT at a time.
  • The serial portions of a semi-parallel multi-site
    test program are required because certain tester
    resources may be limited.
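The throughput effect of multi-site and semi-parallel testing can be sketched as follows; the parallel/serial time split in the example is an assumption for illustration:

```python
def effective_test_time(parallel_s: float, serial_s: float, sites: int) -> float:
    """Average test time per DUT for a semi-parallel multi-site program.
    The parallel portion runs once for all sites; the serial portion
    (limited tester resources) runs once per site."""
    total = parallel_s + serial_s * sites
    return total / sites

# Fully parallel: a 5 s program on 4 sites -> 1.25 s per DUT
print(effective_test_time(5.0, 0.0, 4))  # 1.25
# Semi-parallel: 4 s parallel plus 1 s serial per site on 4 sites
print(effective_test_time(4.0, 1.0, 4))  # 2.0
```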

18
  • Direct Testing Costs
  • Throughput
  • Down time and idle time are two final factors to
    consider in cost of test. Down time is defined
    as any time the tester cannot be used for
    production testing. This time includes time
    required for maintenance, repair, calibration,
    and change-over time from one DUT type to
    another. Idle time is any time the tester is
    available for production, but is not testing
    devices. Idle time can result from poor
    production planning or from a temporary lack of
    market demand for devices that can be tested on a
    particular tester. Both down time and idle time
    increase the effective cost of a tester.

19
  • Direct Testing Costs
  • Throughput
  • Taking all these factors into account, we can
    propose a fairly complete model for testing
    costs. The exact cost model will vary from one
    company to the next
  • Testing Cost = (TTest + TIndex) × (DT + DH + CF)
    × (TProd + TDown + TIdle) / TProd
  • DT = Depreciation of the Tester (cents per
    second)
  • DH = Depreciation of the Handler/Prober (cents
    per second)
  • CF = Tester's Share of Fixed Costs (cents per
    second)
  • TTest = Test Time (seconds)
  • TIndex = Index Time (seconds)
  • TProd = Average Production Testing Hours Per Week
  • TDown = Average Hours Per Week Tester is Down
  • TIdle = Average Hours Per Week Tester is Idle

20
  • Problem
  • During a particular month, tester XYZ has an
    average down time of 12 hours per week. During
    that same time it has an average idle time of 16
    hours per week, leaving a total of 140 hours per
    week of useful production time. The average
    depreciation cost for this type of tester is 2
    cents per second. A particular DUT requires
    handler ABC, which has a depreciation cost of 1
    cent per second. Handler index time is 1 second,
    but this time is effectively masked by dual head
    testing. Single-site test time is 5 seconds. This
    tester's share of the test floor's fixed costs is
    2 cents per second. What is the overall cost per
    second of this tester/handler combination? By
    what percentage could we increase the cost of the
    tester (i.e. the tester depreciation cost) if we
    wanted to perform quad-site testing? (Assume the
    handler is already quad-site capable).

21
  • Solution
  • The total testing cost for this DUT would be 6
    cents per second times 5 seconds, or 30 cents per
    DUT. Quad-site testing would reduce the
    effective test time to 1.25 seconds. Solving for
    the tester depreciation that corresponds to 30
    cents at a test time of 1.25 seconds gives DT =
    17 cents per second.
  • Therefore, we can afford a tester costing 17/2 =
    8.5 times as much as a single-site tester if we
    can achieve fully parallel quad-site testing.
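The solution's arithmetic can be replayed step by step:

```python
# Verify the quad-site arithmetic from the solution.
cost_per_dut = 30.0      # cents, from the single-site case
utilization = 168 / 140  # 12 h down + 16 h idle leave 140 of 168 h/week
dh, cf = 1.0, 2.0        # handler depreciation, fixed costs (cents/s)

t_eff = 5.0 / 4          # fully parallel quad-site: 1.25 s per DUT
# Total raw cost/s that still yields 30 cents per DUT:
total_cost_per_s = cost_per_dut / t_eff / utilization
dt_quad = total_cost_per_s - dh - cf
print(round(dt_quad, 6))        # 17.0 cents/s of tester depreciation
print(round(dt_quad / 2.0, 6))  # 8.5x the single-site tester's cost
```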

22
  • Debugging Skills
  • Sources of Error
  • Murphy's law states that anything that can go
    wrong will go wrong. A mixed-signal test
    program, hardware, tester, and DUT provide many
    opportunities for something to go wrong. As a
    result, test engineers spend much of their time
    debugging hardware and software errors.
  • Most of the particularly difficult, time
    consuming problems are eventually found to be
    defects in hardware rather than defects in the
    test code. Problems can be caused by defective
    tester hardware, poor DIB design, or poor DIB
    layout. Most importantly, problems can be caused
    by the DUT itself. Many times, a problem that
    appears to be a test program bug is actually a
    defect in the DUT. Rapid identification and debug
    of DUT design flaws allows a much shorter time to
    market.

23
  • Debugging Skills
  • The Scientific Method
  • In high school science classes, we all learned
    the five-step scientific method, which can be
    used in the investigation of any problem
  • State the Problem
  • Form a Hypothesis
  • Design Experiments to Test the Hypothesis
  • Test the Hypothesis
  • Draw Conclusions
  • We can easily apply the scientific method to test
    program debugging

24
  • Debugging Skills
  • The Scientific Method
  • We usually have no problem stating the problem
    (the DAC won't work, the DUT explodes when it is
    powered up, etc.). We next have to come up with
    a list of possible causes, which are our
    hypotheses. Next, we have to design experiments
    that will rule out each of the hypotheses in a
    logical order. Then we conduct experiments to
    find out which of the possible causes is giving
    us the problem. Finally, the conclusions are
    drawn (the bug is fixed, the DIB needs to be
    modified, the DUT needs to be redesigned, etc.)

25
  • Debugging Skills
  • The Scientific Method
  • It is easy to state problems, draw conclusions,
    and perform experiments, once we know what
    experiments we need to perform. Forming a
    hypothesis, on the other hand, requires a little
    imagination and a lot of experience. When we
    have a problem, we have to imagine all the things
    that could possibly cause it.
  • Fortunately, designing experiments is not as
    difficult as forming a hypothesis out of thin
    air. The biggest problem most test engineers
    have with this step is limiting the number of
    experimental variables. We can't change five
    things at once and expect to gain any meaningful
    experimental results.

26
  • Debugging Skills
  • The Scientific Method
  • Often several parameters are changed at once,
    creating a situation in which the source of error
    cannot be identified.
  • Returning to a known good state and then working
    our way back toward the bad state is a very
    effective technique for isolating bugs. Using
    this methodology, we can eliminate multiple
    experimental variables and reintroduce them one
    at a time.

27
  • Debugging Skills
  • Practical Debugging Skills
  • Effective test debugging is often a matter of
    breaking a problem into pieces and examining the
    pieces in logical order.
  • For example, if we have a continuity test that is
    failing, we have to imagine all the things that
    could possibly have gone wrong. We might list a
    set of hypotheses as follows
  • The DUT is not in the socket
  • The DUT is defective
  • The socket is not properly seated on the DIB
    board
  • There is a short between pins on the DIB board
  • The DIB board is not properly seated on the test
    head
  • The DUT power supplies are not connected to the
    DUT and set to 0V
  • The tester's pin card electronics have gone bad
  • The continuity test code has a bug

28
  • Debugging Skills
  • Practical Debugging Skills
  • Next, we decide which of these problems is most
    likely. We usually base this on experience and
    common sense. If our continuity test worked
    yesterday and not today, and we haven't changed
    the code, then it is very unlikely that our test
    code is defective. We usually try a new DUT as
    the first and easiest hardware experiment. If
    several DUTs all show the same failure, then we
    probably have a DIB or tester hardware problem,
    and so on.
  • Eventually, by making assumptions about what
    might be going wrong and performing experiments
    to verify which assumption is correct, we find
    the cause of the problem.

29
  • Debugging Skills
  • Importance of Bench Instrumentation
  • The simple fact is that most test problems are
    far easier to debug using a bench instrument such
    as a voltmeter, oscilloscope, or spectrum
    analyzer. It is impossible to debug mixed-signal
    test programs efficiently without observing test
    signals with bench instruments. The test engineer
    must not avoid these tools because it takes time
    to learn how to use them or because it takes time
    to set them up beside the tester.
  • Another important use of non-ATE instrumentation
    is bench correlation. The design engineers often
    set up a completely separate non-ATE test fixture
    to perform measurements on the DUT. These bench
    setups are both a blessing and a curse to the
    test engineer.

30
  • Debugging Skills
  • Importance of Bench Instrumentation
  • On the downside, the bench equipment frequently
    gets different answers than the ATE tester. This
    causes a great deal of extra effort as the two
    measurements must be brought into agreement.
    Sometimes the test engineer has to figure out
    what is wrong with the bench setup, but other
    times, the bench setup shows a weakness in the
    ATE test program. Bench correlation may seem
    like a lot of extra work, but it serves to
    validate the mixed-signal test program. Any
    analog measurement performed with a single piece
    of test equipment is highly suspect. Correlation
    between two independent measurement techniques is
    a necessary step to prove that the measurements
    are correct.

31
  • Debugging Skills
  • Test Program Structure
  • One of the most effective ways to shorten the
    time spent debugging test problems is to
    structure the test program so that it is
    debuggable. There are several general
    guidelines
  • make extensive use of comments in the program
  • avoid over-procedurized code (several levels of
    subroutines)
  • write mixed-signal digital pattern loops so that
    they can either stop after one unit test period
    (UTP) or loop indefinitely
  • avoid mystery constants in the test program
  • use explicit computations so that measurements
    can be made under a variety of test conditions

32
  • Debugging Skills
  • Common Bugs and Techniques to Find Them
  • "Test program worked yesterday, but it doesn't
    work today."
  • Be sure the code really is the same as it was
    yesterday.
  • Occasionally, the test was just barely passing on
    the previous day, and a slight drift in the
    tester's performance causes the DUT to fail.
  • Make sure the DUT is the same one that passed
    before.
  • Use the same DIB and tester that worked before.
  • Make sure the continuity test is working properly
    to verify that the DIB is properly connected to
    the tester.
  • If the tester uses dual heads, make sure the test
    program is loaded on the correct test head.
  • Finally, verify that the tester passed all
    checkers.

33
  • Debugging Skills
  • Common Bugs and Techniques to Find Them
  • When debugging a new test, the DUT is completely
    non-functional.
  • make sure the DUT is powered up by observing the
    voltage at the DUT pins.
  • using an oscilloscope, verify that the analog and
    digital signals are arriving at the DUT as
    expected.
  • then verify that the DUT is producing the
    expected analog and digital outputs.
  • try loosening the digital timing and setting the
    digital logic levels of the tester so that the
    DUT is not stressed to its test limits.
  • get the design engineer to help determine what is
    wrong. Sometimes, the problem is caused by a
    last-minute design change.

34
  • Debugging Skills
  • Common Bugs and Techniques to Find Them
  • The DUT works, but the FFT output shows excessive
    spikes that cause a failure in signal to noise
    ratio.
  • First, if the spikes are confined to individual
    FFT spectral bins, then they are probably not
    coming from an external source such as 60 Hz
    power lines or cellular telephones.
  • They are coming from inside the DUT or perhaps
    from the tester. If a spike is located at a
    frequency that is a multiple of the DUT master
    clock, then it may be caused by digital to analog
    crosstalk.
  • change the frequency of the test tone. If the
    spike shifts to a different spectral bin, then it
    is either caused by distortion, imaging,
    aliasing, or mixing of the test tone with another
    frequency.

35
  • Debugging Skills
  • Common Bugs and Techniques to Find Them
  • The DUT works, but the noise levels are too high
  • Coupling Mechanisms
  • direct injection of noise into the high impedance
    voltage reference inputs, bias current inputs, or
    VMID inputs.
  • injection directly into the analog inputs or
    outputs. Sometimes the tester's signals are
    simply not clean enough to allow the DUT to pass.
    In these cases, special test techniques such as
    DIB filters must be added to clean up the signal
    before it is sourced or measured by the tester.
  • power supply or ground noise. Proper power and
    ground layout and good decoupling practices can
    help to reduce this problem.

36
  • Debugging Skills
  • Common Bugs and Techniques to Find Them
  • The DUT works, but the noise levels are too high
  • The process of purposely aggravating a circuit to
    see if it is sensitive can be very effective.
    Sensitive nodes can often be located by purposely
    injecting noise into the circuit using the body
    as a radio antenna and the finger as a signal
    source. This is a peculiar but effective way to
    debug noise problems, which some have called
    "tactile engineering" or "the laying of hands
    upon the DIB". However odd it may seem, the
    practice of purposely trying to make a problem
    worse to discover the circuit's weakness is often
    more effective than trying to figure out how to
    make it better. Once we know where a circuit is
    weak, we can try to figure out ways to protect it
    from unintentional sources of aggravation.

37
  • Debugging Skills
  • Common Bugs and Techniques to Find Them
  • "I've been trying to debug a problem for two
    weeks and I'm not getting anywhere."
  • Ask someone for help. Don't stare at the same
    problem for hours or days without asking someone
    else to look at it. If a problem has persisted
    for more than two or three days, it is time to
    attack the problem from a different angle. Often
    the problem can only be seen from the fresh
    viewpoint of a second set of eyes.

38
  • Emerging Trends
  • Test Language Standard
  • One of the factors that drives up the cost of
    mixed-signal test equipment and extends test
    development time is the lack of a common software
    environment for mixed-signal testing. One of
    the recent developments in test engineering is
    the emergence of a generic test language called
    STIL (standard test interface language).
  • Previous attempts have fallen short (e.g., ATLAS,
    used primarily for military applications).

39
  • Emerging Trends
  • Test Language Standard
  • A standard test language could reduce testing
    costs in a number of ways
  • it might allow tester software/hardware
    interchangeability similar to that of the
    Microsoft/Intel PC.
  • ATE hardware vendors could concentrate on the
    production of very low cost, high performance
    hardware compatible with the standard language.
    Independent ATE software companies could
    concentrate on the standard language and
    operating system without being caught up in the
    hardware development headaches associated with a
    mixed-signal tester. Ideally, if all testers
    were compatible with a common computing platform,
    a semiconductor manufacturer could purchase
    whatever tester represented the best value.

40
  • Emerging Trends
  • Test Language Standard
  • Another advantage promised by a standard language
    is reduced training costs. Multiple tester
    platforms require multiple training, which offers
    no inherent financial advantage to a
    semiconductor vendor. Not only does the engineer
    lose productive time to the extra training
    sessions, but his or her level of expertise is
    diluted by the reduction in "stick time" on any
    particular type of tester. The only advantage
    that might come with multiple tester platforms is
    that the semiconductor vendor encourages
    competition between the ATE vendors, hopefully
    reducing tester purchase price.

41
  • Emerging Trends
  • Test Language Standard
  • If the established ATE companies do not adopt
    software standards due to a potential loss of
    proprietary software ownership, then a host of
    low-cost producers may attempt to usurp their
    dominant market position by offering equipment
    compatible with a standard operating system.
    This may in fact be one of the surprising
    developments that reshapes the ATE industry in
    the coming years.

42
  • Emerging Trends
  • Test Simulation
  • Another interesting development in recent years
    has been the advent of test simulation. Although
    still in its infancy, test simulation promises us
    the ability to debug our test programs and DIB
    designs before we have received actual silicon to
    plug into the DUT socket.
  • Using a stand-alone workstation rather than an
    expensive ATE tester, we can debug many of our
    test program errors off-line. However, we can't
    fully debug our programs without the DUT and its
    DIB. We can only verify that the tester will
    produce the DUT input signals that we want.
  • We would prefer to verify that these input
    signals are the appropriate ones, that the DUT
    will react correctly to them, and that our test
    program will capture and analyze the DUT
    responses correctly. Test simulation provides a
    closed-loop simulation process that allows us to
    verify the test code before the DUT or DIB have
    been fabricated.

43
  • Emerging Trends
  • Test Simulation
  • Test simulation links the tester's off-line
    simulation software with a software model of the
    device. Typically, the device model is developed
    by the design engineers for the purpose of design
    verification. Device models may be developed
    using SPICE, VHDL, or other software modeling
    languages. The tester simulator generates DUT
    stimulus signals based on the test program. It
    passes these signals, called events, through
    models of the tester and DIB to the DUT model.
    The DUT model produces simulated responses to the
    test stimuli. The DUT responses are passed back
    to the tester's simulator, which continues as if
    the signals had been captured from a physical
    DUT. Using test simulation, both the test
    program and the DUT design can be verified at the
    same time.

44
  • Emerging Trends
  • Test Simulation
  • Using test simulation, a portion of the test
    program debugging effort can be performed before
    the silicon is fabricated. The test program is
    developed in parallel with the device design,
    reducing the overall cycle time of the new
    product. An added benefit of the test simulation
    approach is that flaws in the design can be
    identified by the test program before the design
    is released to fabrication. These flaws can be
    corrected in a timely manner, saving further
    cycle time.

45
  • Emerging Trends
  • Non-Coherent Sampling
  • The exact relationships between the signal
    frequencies, digital pattern rates, and the
    tester's various sampling rates lead to a maze of
    restrictive sampling criteria. A recent trend
    is the elimination of coherence as a fundamental
    requirement for DSP-based testing.
  • Mathematical resampling routines have been used
    to change the effective sampling rate of the
    tester's digitizer and capture memory. This
    allows a fully accurate, coherent measurement
    based on non-coherent test tones. Unlike
    windowing techniques, these resampling algorithms
    do not discard useful signal information and they
    do not allow energy from adjacent spectral bins
    to bleed into one another.
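For contrast, the coherence restriction being relaxed here is the classic DSP-based-testing requirement that a test tone complete an integer number of cycles M in N samples, with M and N mutually prime so every sample lands on a distinct point of the waveform. A sketch (the frequencies are illustrative assumptions, not values from the text):

```python
from math import gcd

def coherent_tone(fs: float, n_samples: int, f_desired: float) -> float:
    """Pick the coherent test-tone frequency near f_desired for a
    DSP-based measurement: Ft = M * Fs / N, with M and N mutually
    prime so all N samples fall on distinct phases of the tone."""
    m = round(f_desired * n_samples / fs)
    while gcd(m, n_samples) != 1:  # force M, N mutually prime
        m += 1
    return m * fs / n_samples

# e.g. want ~1 kHz with a 48 kHz digitizer and 1024 captured samples
print(coherent_tone(48000.0, 1024, 1000.0))  # 984.375 (M = 21 cycles)
```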

46
  • Emerging Trends
  • Non-Coherent Sampling
  • The relaxed coherence requirements allow test
    cost savings in two ways. First, the clock
    generation circuits of the tester do not have to
    be as flexible as they would need to be in a
    coherent tester. Less demanding clocking
    requirements can help reduce the cost of the
    tester hardware. The second advantage of
    non-coherent testing is that it reduces the
    difficulty associated with the complicated
    calculations required by coherent sampling
    systems. This
    in turn may reduce the test development cycle
    time, since it allows the test engineer to
    concentrate on the signals themselves rather than
    the methods used to generate and analyze them.

47
  • Emerging Trends
  • Built-In-Self-Test (BIST)
  • BIST is commonly used in digital circuits but has
    been difficult to introduce into high-performance
    mixed-signal circuits. A number of interesting
    BIST circuits have been proposed at the yearly
    International Test Conference.
  • Most analog and mixed-signal BIST concepts are
    not easily traced to NIST accuracy standards.
    However, we observed that wide design margins can
    lead to reduced accuracy requirements in the
    production testing process. Therefore, the
    promise of low-cost analog and mixed-signal
    testing through BIST is tightly coupled to the
    improvement of design margins to allow less
    stringent testing.

48
  • Emerging Trends
  • Defect-Oriented Testing (DOT)
  • Defect-oriented testing (DOT) is yet another
    cost-saving test methodology tightly coupled to
    robust designs.
  • Over the past decade or so, many companies have
    begun to adopt it. DOT is based on the
    assumption that we can understand and predict
    some or all of a circuit's major failure
    mechanisms, and that these mechanisms can be made
    to exhibit themselves in the form of a limited
    set of measured parameters. The reduced set of
    measured parameters leads to a lower test cost
    while providing more thorough coverage of failure
    mechanisms.

49
  • Emerging Trends
  • Defect-Oriented Testing (DOT)
  • IDDQ testing is a type of DOT that efficiently
    tests for the presence of undesirable current
    leakage paths in an improperly fabricated IC.
  • Another example of defect-oriented testing is the
    major carrier method for the testing of DAC and
    ADC INL and DNL. The major carrier method
    eliminates the redundancy inherent in all-codes
    linearity testing. It allows us to predict the
    shape of the overall transfer curve of a
    converter by measuring only a selected set of
    converter levels.
  • Care must be taken to account for DOT-to-SPOT
    (specification-oriented testing) prediction
    errors by using guardbands.

50
  • Summary
  • We have seen how testing costs are based not only
    on the purchase price of the tester, but on the
    tester's overall throughput and accuracy. We
    have discussed the importance of time to market
    on revenues and profit margins, and we have seen
    how a test engineer's debugging skills can reduce
    the cycle time of a new semiconductor product.
    Finally we have examined some of the emerging
    trends that are changing the manner in which we
    develop and utilize test programs for the
    production of mixed-signal ICs.