Title: Manufacturability, Test, and Diagnostics for Microelectronics
1 Manufacturability, Test, and Diagnostics for Microelectronics
- Summer Semester 2008
- Dr. Bernd Koenemann
- Prof. Dr. Walter Anheier
2 Overall Context
3 What's the Problem?
Perfect by design???
Lost in Translation
Defects!!!
4 From Design to Manufacturing
5 300 mm Wafer
Source: Intel Corp.
6 CMOS Inverter and Its Layout
7 Decomposition for Layer-by-Layer Printing
8 Masks for CMOS Inverter
9 Elements of a Lithography System
10 Mask/Reticle
(Figure: mask/reticle cross-section - quartz substrate, phase shift coating, chrome pattern, and pellicle)
11 Mask vs. Reticle
Photo courtesy of SGS-Thomson
- A reticle covers only part of the wafer. It carries a larger image that is reduced by the lithography system (wafer stepper), which enables higher resolution.
- A mask covers the whole wafer with a 1:1 image ratio.
12 Stepper for Printing from Reticle
13 Sample Reticle (MOSIS Multi-Project)
14 A Complete Lithography System
15 Immersion Lithography
- Replace the air gap between the lens and the wafer surface with a liquid medium
- Refractive index of the medium > 1
- Resolution improves due to the wavelength reduction (see the estimate below)
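For orientation, here is the standard Rayleigh resolution criterion (a textbook relation, not taken from the slides); immersion enters through the refractive index n in the numerical aperture:

\[ R = k_1 \frac{\lambda}{NA}, \qquad NA = n \sin\theta, \qquad \lambda_{\text{eff}} = \frac{\lambda_0}{n} \]

Replacing air (n ≈ 1.0) with water (n ≈ 1.44 at 193 nm) thus shortens the effective wavelength and allows smaller features to be printed with the same lens.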
16 Phase Shift Mask (PSM)
- The image is degraded as light from clear areas is diffracted into neighboring regions
- Idea behind PSM: modify the reticle so that interference works to our advantage (sketched below)
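The reasoning behind an alternating PSM, in one line (standard textbook argument, not spelled out on the slide): the shifter layer delays light through every second clear opening by half a wavelength, so the field amplitudes of adjacent openings have opposite sign and their diffracted tails cancel in the dark region between them:

\[ \Delta\phi = \frac{2\pi}{\lambda}(n-1)\,d = \pi \quad\Rightarrow\quad E_2 = -E_1, \qquad E_1 + E_2 = 0 \]

where d is the extra optical path through the phase shifter and n its refractive index.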
17 Phase Assignment
- Better resolution enhancement
- Can't be used for certain geometries like T-junctions (see the coloring sketch below)
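Phase assignment can be viewed as two-coloring a conflict graph whose nodes are clear features and whose edges connect features that need opposite phases; an odd cycle, as a T-junction layout tends to create, has no valid assignment. A minimal sketch of that check in Python, with a hypothetical conflict-graph input:

from collections import deque

def assign_phases(features, conflicts):
    """Try to assign 0/180-degree phases so that no two conflicting
    features share a phase (graph 2-coloring via BFS).
    features: iterable of feature names
    conflicts: (a, b) pairs that must receive opposite phases
    Returns {name: 0 or 180}, or None if no assignment exists."""
    adj = {f: [] for f in features}
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    phase = {}
    for start in features:
        if start in phase:
            continue
        phase[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in phase:
                    phase[v] = 180 - phase[u]   # opposite phase
                    queue.append(v)
                elif phase[v] == phase[u]:
                    return None                 # odd cycle: no valid phase coloring
    return phase

# Three mutually conflicting openings (odd cycle, e.g. around a T-junction)
print(assign_phases("abc", [("a", "b"), ("b", "c"), ("a", "c")]))   # -> None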
18 Chemical Mechanical Polishing (CMP)
(Figure: CMP setup - silicon wafer held by a wafer carrier, pressed against a polishing pad on a polishing table, with slurry supplied by a slurry feeder)
- Dummy feature insertion decreases the local feature density variation, which in turn decreases the ILD (inter-layer dielectric) thickness variation after CMP
(Figure: post-CMP ILD thickness over the original features and the inserted area fill features)
19 Dummy Feature Insertion in Mask Prep
- E.g., dummy patterns for uniformity in CMP
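As a toy illustration of the idea (my own simplified model, not a production fill algorithm), one can compute the metal density per analysis window and add area-fill features wherever the local density is below a target, which shrinks the window-to-window density spread that drives post-CMP ILD variation:

import numpy as np

def dummy_fill(density, target=0.5):
    """density: 2-D array of per-window metal density (0..1).
    Returns (filled_density, fill_added): fill is only added where a
    window is below the target, never removed from dense windows."""
    density = np.asarray(density, dtype=float)
    fill_added = np.clip(target - density, 0.0, None)  # raise only sparse windows
    return density + fill_added, fill_added

# Hypothetical layout: sparse and dense regions on one layer
layout = np.array([[0.10, 0.55, 0.30],
                   [0.70, 0.15, 0.45]])
filled, added = dummy_fill(layout, target=0.5)
print("density spread before fill:", layout.max() - layout.min())   # 0.60
print("density spread after fill :", filled.max() - filled.min())   # 0.20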
20 Overall Context
21 From Design to Test
22 Test
23 How Does Test Work?
(Figure: a stream of questions applied to the "??? Under Test")
24 Test versus Verification
- Verification is virtual
- Verification targets are software models
- Tools of the trade are general-purpose computers or specialized simulation hardware
- Manufacturing test is real
- Test targets are wafers, chips, packages, etc.
- Tools of the trade are test machines, aka ATE (Automatic Test Equipment)
25 Tools of the Test Trade
- Sophisticated Machines
- Expensive
- Big
- Driven by software
- Test programs
- Data-logging
- Complex logistics
(Figure: ATE main frame and test head)
26 Wafer and Chip Pads
27 Wafer Probe Interface
- Probe card
- Connects test head to chip pads on wafer
- Custom design for each new chip type
- Different technologies depending on pad-count,
pad-spacing, and speed requirements
28 Wafer Prober
- Handles wafers for test
- Holds wafer
- Motion stage for indexing (moving to the next chip/group of chips on the wafer)
- Facilitates touch down
29 Pass/Fail Flow
30 Wafer Inking
- Traditional method for marking chips on wafer
- ATE supplies wafer map (map of good/bad chips)
- Bad chips get an ink spot
- Facilitates sorting of good/bad after dicing
- Inking machines are still available
31 Automated Mapping/Binning
- Virtual wafer maps instead of ink-dots
- Less error prone
- Ability to map fail types (binning)
- Input for automated sorting machines
32 Chip Packages
- Containers for diced chips
- Many different technologies and form factors, e.g.,
- Pin-through-hole
- Solder balls
- Etc., etc., etc.
33 Package Test Handler
- Handles packaged chips
- Yet another big, expensive machine
34 Reliability
- Chips age and fail after a certain time of use
- One measure: MTBF (Mean Time Between Failures)
- Different end-use applications have different reliability requirements, e.g., a cheap watch versus a satellite
- Technology and design issue
- Design for Reliability is an up-and-coming discipline
- Reliability issues are often not found in manufacturing test
- Test happens too early in the product life (it's not broken yet)
- May need something else to identify products with reliability weaknesses (e.g., burn-in)
35 Burn-In
- Exposure to elevated ambient temperature
- Accelerates aging/failing
- Facilitated by burn-in ovens
- Test or no test during burn-in
- Depends on desired reliability grade
- Lengthy and expensive
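For orientation (a commonly used reliability model, not stated on the slide), the temperature acceleration achieved by burn-in is often estimated with an Arrhenius factor, where E_a is the activation energy of the dominant failure mechanism and k is Boltzmann's constant:

\[ AF = \exp\!\left[\frac{E_a}{k}\left(\frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{stress}}}\right)\right] \]

A moderate rise in junction temperature can yield large acceleration factors, which is why hours or days in a burn-in oven can stand in for months of field use, but also why burn-in remains lengthy and expensive.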
36 Let's Talk Design/Test Interface
37 Example of Wafer-Level Tests
- Power-up contact tests
- Parametric tests
- IDDQ (6-12 measurements)
- Fixed set or from ATPG
- Scan used to load vectors, then wait and measure
- Scan tests, low VDD corner (nominal -10%)
- Voltage screen tests
- Scan tests, high VDD corner (nominal +10%)
- AC scan screen
- Ring oscillator measurements
- Phase-Locked Loop tests
- Characterization tests (sample of parts)
- Note: only the red items come from design
Source: IBM
38 Test Program Generation
(Figure: test program generation flow - on the design side, ATPG/fault-grading produces test data; test program generation combines the test data with lots of other data into a test program that runs on the ATE on the test floor)
39 Intermezzo: Test Quality
(Figure: a stream of questions applied to the "??? Under Test", as on slide 23)
40 Intermezzo: Test Quality
(Figure: the same picture, now annotated with the two error cases: Yield Loss and Test Escape)
41 Intermezzo: Test Quality
- Yield loss increases the product price
- Fewer good chips per wafer
- Test escapes can cause product fails at the customer
- Defect level: percentage of defective chips shipped as good
- Measured in ppm (parts per million)
- Increases down-stream costs (scrap, repair, product delays, etc.)
- To minimize escapes, the test program ideally has to detect all defects
- How can we estimate ahead of time how good the test program is? (one common estimate is sketched below)
- Need a quality measure for the test data supplied from design!
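One widely used estimate of this quality link is the Williams-Brown model (quoted here for orientation; it is not on the slide), which relates the defect level DL to the process yield Y and the fault coverage T:

\[ DL = 1 - Y^{\,1-T} \]

For example, Y = 0.5 and T = 0.99 give DL ≈ 1 − 0.5^{0.01} ≈ 0.7%, i.e. roughly 7000 ppm, which illustrates why very high fault coverage is needed to reach low-ppm defect levels.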
42 Let's Talk Faults
(Figure: the netlist and the faults defined on it feed ATPG/fault-grading, which produces test data and a fault coverage figure)
43 The Challenge
(Figure: on the design side, faults defined on the netlist drive ATPG/fault-grading and yield a fault coverage number; on the fab side, defects on the wafers are screened by test and show up as escapes and yield loss. The open question on both links: how well do faults correlate with defects, and fault coverage with escapes/yield loss?)
44 Fault Models: What Do They Do?
- Fault-Grading
- Criteria to be used for calculating fault coverage
- Test Generation
- Conditions to be generated by Automatic Test Pattern Generation (ATPG) locally in the netlist to excite and propagate fault effects
- Guide for Diagnosis
- Establish the link between netlist locations and fail behavior
45 Fault-Grading
(Figure: the netlist, the fault list, and the test patterns feed fault simulation, which produces coverage reports)
46 Netlists and Fault Lists
- ATPG/fault-grading use a gate-level structural netlist of the network under test
- Some circuit-level elements (transistors, resistors, etc.) may be allowed, but only sparingly
- The design flow must produce an accurate gate-level netlist
- Specially generated for test, if not needed otherwise
- Faults are attached to pins, nets, or paths in the netlist
- Most commonly, individual faults are attached to the input/output pins of the logic gates in the netlist
- The collection of all individual faults defined for a netlist is called the fault list
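To make these data structures concrete, here is a toy Python representation (my own simplification, not any real tool's format): gates are tuples with an output net and input nets, and the single-stuck-at fault list is every gate pin paired with both stuck values.

# Toy gate-level netlist: (gate name, gate type, output net, input nets)
NETLIST = [
    ("G1", "NAND", "n1", ("a", "b")),
    ("G2", "INV",  "z",  ("n1",)),
]

def build_fault_list(netlist):
    """Enumerate single-stuck-at faults on every gate input and output pin."""
    faults = []
    for name, _gtype, out, ins in netlist:
        for pin in list(ins) + [out]:
            for value in (0, 1):                  # stuck-at-0 and stuck-at-1
                faults.append((name, pin, value))
    return faults

FAULTS = build_fault_list(NETLIST)
print(len(FAULTS), "faults, e.g.", FAULTS[:3])
# -> 10 faults, e.g. [('G1', 'a', 0), ('G1', 'a', 1), ('G1', 'b', 0)]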
47 Fault Simulation
(Figure: the same stimulus data drives the fault-free network ("good machine") and the network with one fault from the fault list injected ("fault machine"); response comparison produces the coverage report)
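A minimal serial fault-simulation sketch of that picture in Python, using an invented two-gate circuit (z = a AND b built from a NAND and an inverter): simulate the good machine once per pattern, re-simulate with one stuck-at fault injected at a time, and count a fault as detected when the response differs. Detected faults are dropped, as discussed two slides further on.

def simulate(inputs, fault=None):
    """Zero-delay simulation of the tiny circuit z = NOT(a NAND b).
    fault = (net, stuck_value) forces that net to a constant."""
    def net(name, value):
        if fault and fault[0] == name:
            return fault[1]
        return value
    a = net("a", inputs["a"])
    b = net("b", inputs["b"])
    n1 = net("n1", 1 - (a & b))      # NAND
    z = net("z", 1 - n1)             # INV
    return {"z": z}

PATTERNS = [{"a": a, "b": b} for a in (0, 1) for b in (0, 1)]
FAULTS = [(n, v) for n in ("a", "b", "n1", "z") for v in (0, 1)]

detected = set()
for pattern in PATTERNS:                      # good machine, once per pattern
    good = simulate(pattern)
    for fault in FAULTS:
        if fault in detected:
            continue                          # fault dropping after first detect
        if simulate(pattern, fault) != good:  # fault machine differs -> detected
            detected.add(fault)

print("fault coverage: %d/%d = %.0f%%" % (len(detected), len(FAULTS),
                                          100.0 * len(detected) / len(FAULTS)))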
48 Coverage Report
- Fault coverage
- Percentage of faults detected by the tests
- Fault statistics
- Information about individual faults or groups of faults, e.g.,
- By which test a fault is first detected
- Sub-module I/O faults versus module-internal faults
- Fault dictionary (optional)
- More detailed information about how each individual fault is detected
- Used for diagnostics
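A fault dictionary can be pictured as a mapping from each fault to the set of tests that detect it; diagnosis then matches the fails observed on the ATE against those entries. A toy sketch in Python with invented fault names and test indices (the scoring rule is mine; real diagnostic tools are far more elaborate):

# Hypothetical dictionary: fault -> set of test indices that detect it
FAULT_DICT = {
    ("G1", "a", 0): {3},
    ("G1", "a", 1): {1, 3},
    ("G2", "z", 1): {0, 1, 2},
}

def diagnose(failing_tests):
    """Rank candidate faults by how well they explain the observed fails."""
    failing = set(failing_tests)
    scores = {f: len(tests & failing) - len(tests ^ failing)
              for f, tests in FAULT_DICT.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(diagnose([1, 3]))   # ('G1', 'a', 1) explains the observed fails best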
49 Fault Simulation
- Fault simulators have to be very fast
- Large networks (millions of gates)
- Long stimulus sequences (tens of thousands)
- Lots of faults (millions of faults)
- Trade-offs are required
- Accuracy (zero-delay versus accurate delay)
- Fault dropping (drop after first or multiple detects)
- Fault handling (one or multiple faults at a time)
- Pattern handling (one or multiple patterns at a time)
50 Fault Simulation
- Predominant fault simulation techniques are
- Concurrent, for manually generated tests
- One pattern at a time, event-driven simulation
- Multiple faults at a time
- Nominal delays
- Slower, but more accurate, better sequential capabilities
- PPSFP (Parallel Pattern Single Fault Propagate), for automatically generated tests
- Multiple patterns at a time, event-driven or compiled
- Single fault at a time
- Typically zero-delay (levelized network)
- Much faster, but less accurate, limited sequential capabilities
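The core PPSFP trick is to pack many patterns into the bits of one machine word, so that a single pass of bitwise gate evaluations simulates all of them at once for a single fault. A minimal Python sketch on the same invented NAND/inverter circuit as above (real implementations use levelized compiled code and wider words):

import random

WIDTH = 32                                     # patterns simulated per pass

def eval_circuit(a, b, fault=None):
    """Bit-parallel, zero-delay evaluation of z = NOT(a NAND b).
    Each integer carries WIDTH patterns, one per bit position."""
    mask = (1 << WIDTH) - 1
    def force(name, value):
        if fault and fault[0] == name:
            return mask if fault[1] else 0     # stuck value across all patterns
        return value
    a, b = force("a", a), force("b", b)
    n1 = force("n1", ~(a & b) & mask)          # NAND
    return force("z", ~n1 & mask)              # INV

# Pack 32 random (a, b) patterns into two integers, bit i = pattern i
random.seed(0)
a_word = random.getrandbits(WIDTH)
b_word = random.getrandbits(WIDTH)

good = eval_circuit(a_word, b_word)
for fault in [("n1", 0), ("n1", 1), ("z", 0)]:
    diff = eval_circuit(a_word, b_word, fault) ^ good    # differing output bits
    print(fault, "detected by", bin(diff).count("1"), "of", WIDTH, "patterns")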
51 Other Fault-Grading Techniques
- E.g., testability measures
- Try to avoid the cost of simulation
- Estimated fault coverage without explicit fault lists
- Based on controllability/observability of network nodes
- Very limited use in practice for fault grading
- Not accurate enough
- Some use for automatic test point insertion
- Test points will be discussed later
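To make "controllability of network nodes" concrete, here is a Python sketch in the spirit of SCOAP-style combinational controllability (a simplified variant, not the exact SCOAP definition): the cost of forcing a node to 0 or 1 is accumulated from the primary inputs through the gates, and large values flag hard-to-control nodes.

def controllability(netlist, inputs):
    """SCOAP-like combinational 0/1-controllability.
    netlist: list of (gate type, output net, input nets) in topological order.
    Returns {net: (CC0, CC1)}; primary inputs cost 1 for either value."""
    cc = {net: (1, 1) for net in inputs}
    for gtype, out, ins in netlist:
        c0 = [cc[i][0] for i in ins]
        c1 = [cc[i][1] for i in ins]
        if gtype == "AND":
            cc[out] = (min(c0) + 1, sum(c1) + 1)   # any 0 input / all inputs at 1
        elif gtype == "OR":
            cc[out] = (sum(c0) + 1, min(c1) + 1)   # all inputs at 0 / any 1 input
        elif gtype == "INV":
            cc[out] = (c1[0] + 1, c0[0] + 1)       # swap the two costs
    return cc

# Toy circuit: y = (a AND b) OR c
print(controllability([("AND", "n1", ("a", "b")),
                       ("OR",  "y",  ("n1", "c"))],
                      inputs=["a", "b", "c"]))
# -> {'a': (1, 1), 'b': (1, 1), 'c': (1, 1), 'n1': (2, 3), 'y': (4, 2)}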
52 Next Topic: Fault Models
- Introduction, definitions, examples, and discussion of basic fault models will happen in the exercise sessions (Übungen)
- Use models, context, and some advanced issues will be elaborated in the lecture (Vorlesung)
53 Our Agenda for the Semester