1. Compression Schemes for "Dummy Fill" VLSI Layout Data
Robert Ellis, Andrew B. Kahng and Yuhong Zheng
(Texas A&M University and UCSD)
http://vlsicad.ucsd.edu
Supported by MARCO GSRC
2. Outline
- Dummy Fill and Fill Compression Problem
- Our Contributions
- JBIG Standards
- Lossy/Lossless Compression Algorithms
- Experimental Results
- Conclusion and Future Research
3. CMP and Dummy Fill
- Uneven features cause the polishing pad to deform in Chemical-Mechanical Polishing (CMP)
- Interlevel-dielectric (ILD) thickness ∝ feature density
- Insert non-functional dummy features to decrease variation
- Dummy features explode layout data volume, creating a bottleneck in the design-to-manufacturing handoff
- Dummy fill data compression is required
4. Fill Compression Problem
- A fill pattern can be expressed as a binary (0-1) matrix
- Problem: Given a 0-1 matrix B of size m × n digitized from a dummy fill layout, compress it with the objective of minimizing output data size h
- Compression ratio: r = (m × n) / h
- One-sided loss
  - Limited loss can improve compressibility
  - Asymmetric loss (see the sketch below):
    - 1→0 okay (fill geometry disappears)
    - 0→1 not allowed (fill geometry appears)
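A minimal Python sketch of the objective and the one-sided loss rule (the function names and the numpy dependency are ours, not from the paper):

    import numpy as np

    def compression_ratio(B: np.ndarray, h: int) -> float:
        """r = (m*n)/h: input size of the m-by-n 0-1 matrix B over output size h."""
        m, n = B.shape
        return (m * n) / h

    def is_one_sided_loss(B: np.ndarray, B_lossy: np.ndarray) -> bool:
        """A lossy version is valid iff it only drops fill: 1->0 changes are
        allowed (geometry disappears), 0->1 changes are forbidden."""
        return bool(np.all(B_lossy <= B))

For example, with B = [[1,1],[1,0]] and B_lossy = [[1,0],[1,0]] the check returns True, while any matrix that turns B's zero into a one would fail it.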
5. General Flow for Compression Heuristics (flowchart figure)
6. Our Contribution
- New compression heuristic algorithms based on JBIG methods:
  - JBIG1
  - JBIG2 Pattern Matching and Substitution (PMS)
  - JBIG2 Soft Pattern Matching (SPM)
- Two loss mechanisms (see the sketch below):
  - Proportional loss: relative fraction of 1s allowed to be changed to 0s
  - Fixed speckle loss: absolute number of 1s allowed to be changed to 0s
- Asymmetric cover method that comprehends one-sided loss and improves the compression ratio
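A sketch of the two loss budgets as a single Python helper (the default parameter values k and s are illustrative, not values from the paper):

    import numpy as np

    def loss_budget(D: np.ndarray, mode: str, k: float = 0.2, s: int = 1) -> int:
        """t(D): how many 1s in data block D may be flipped to 0.
        'proportional' allows a fraction k of the block's 1s; 'speckle'
        allows a fixed count s regardless of block density."""
        ones = int(D.sum())
        if mode == "proportional":
            return int(k * ones)    # relative fraction of 1s
        if mode == "speckle":
            return min(s, ones)     # absolute number of 1s
        raise ValueError(f"unknown mode: {mode}")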
7. General Flow for Compression Heuristics (flowchart figure)
8. JBIG Standard
- JBIG (Joint Bi-level Image Experts Group) is an experts group of ISO, IEC and CCITT (JTC1/SC2/WG9 and SGVIII). Its goal is to define a compression standard for bi-level image coding
- JBIG1: international standard for lossless compression of bi-level images (ITU-T T.82, 1993)
- JBIG2: the first international standard that provides for both lossless and lossy compression of bi-level images (1999)
- JBIG methods are based on arithmetic coding and context-based statistical modeling (see the sketch below)
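To illustrate context-based statistical modeling, here is a toy Python model that predicts each pixel from a 3-bit causal neighborhood with adaptive counts. Real JBIG1 uses a 10-pixel context template driving the QM arithmetic coder, so this is only a sketch of the idea:

    from collections import defaultdict

    def context_probabilities(rows):
        """For each bit (row-major order), estimate P(bit = 1 | west,
        north-west, north neighbors) from adaptive, Laplace-smoothed
        counts. The more skewed these probabilities, the fewer bits an
        arithmetic coder spends per pixel."""
        counts = defaultdict(lambda: [1, 1])   # context -> [count of 0s, count of 1s]
        probs = []
        for i, row in enumerate(rows):
            for j, bit in enumerate(row):
                ctx = (row[j - 1] if j > 0 else 0,                    # west
                       rows[i - 1][j - 1] if i > 0 and j > 0 else 0,  # north-west
                       rows[i - 1][j] if i > 0 else 0)                # north
                c0, c1 = counts[ctx]
                probs.append(c1 / (c0 + c1))
                counts[ctx][bit] += 1          # adapt after "coding" the bit
        return probs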
9. JBIG2 PMS
- PMS: Pattern Matching and Substitution
- Dictionary: reference blocks used to match data blocks
- Extract and encode repeatable patterns (see the sketch below)
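A simplified Python sketch of PMS-style matching; the Hamming-distance threshold and function names are our own, and a full JBIG2 codec would additionally encode the dictionary and index streams:

    import numpy as np

    def pms_encode(blocks, max_dist=2):
        """Substitute each data block by the index of the first dictionary
        (reference) block within Hamming distance max_dist; blocks with no
        match become new dictionary entries."""
        dictionary, indices = [], []
        for D in blocks:
            match = next((k for k, R in enumerate(dictionary)
                          if np.count_nonzero(R != D) <= max_dist), None)
            if match is None:
                dictionary.append(D)
                match = len(dictionary) - 1
            indices.append(match)
        return dictionary, indices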
10. JBIG2 SPM
- SPM: Soft Pattern Matching
- Dictionary: reference blocks used to match and code data blocks
- Estimate bit probabilities from the data block and its matched reference block, then code the data with arithmetic coding (see the sketch below)
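A back-of-the-envelope Python sketch of the SPM idea: the matched reference block acts as a prior and the data block is still coded losslessly. The fixed agreement rate is an illustrative stand-in for JBIG2's context template spanning both blocks:

    import math
    import numpy as np

    def spm_code_length(D: np.ndarray, R: np.ndarray, agreement=0.95) -> float:
        """Estimated bits to code data block D given matched reference R,
        assuming each data pixel equals its reference pixel with
        probability `agreement` (illustrative value)."""
        bits = 0.0
        for d, r in zip(D.flat, R.flat):
            p1 = agreement if r == 1 else 1.0 - agreement
            p = p1 if d == 1 else 1.0 - p1
            bits += -math.log2(p)    # ideal arithmetic-coding cost
        return bits

Under these assumptions a perfectly matched block costs about 0.07 bits per pixel instead of 1, which is where the compression gain comes from.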
11. Dictionary Construction
- To achieve a better compression ratio:
  - The dictionary should contain as few reference blocks as possible while matching a much larger number of data blocks
  - Reference indices (pointing from data blocks to reference blocks) should be as short as possible
  - Removing singletons from the dictionary reduces the size of the dictionary (see the sketch below)
- The asymmetric cover approach is applied to construct a dictionary for lossy compression
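A minimal Python sketch of singleton exclusion (blocks are passed as bit-tuples for hashability; the split is the whole point, since the two resulting streams are compressed separately):

    from collections import Counter

    def split_singletons(blocks):
        """Keep only patterns matched by two or more data blocks as
        dictionary entries; patterns occurring once ('singletons') are
        routed to a separate stream and compressed directly with JBIG1."""
        freq = Counter(blocks)
        dictionary = [b for b, c in freq.items() if c > 1]
        singletons = [b for b, c in freq.items() if c == 1]
        return dictionary, singletons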
12. General Flow for Compression Heuristics (flowchart figure)
13. Asymmetric Cover Heuristic
- The problem of building a cover for a set of data blocks is an instance of the Set Cover Problem (SCP)
- An asymmetric cover allows 1s to be changed to 0s, but 0s cannot be changed to 1s
- Our heuristic for constructing a cover views the data blocks as vertices of a graph with edge weights defined as:
  - w(D1, D2) = min(t(D1) − HD(D1, D1 ∧ D2), t(D2) − HD(D2, D1 ∧ D2))
  - D: data block; ∧: bitwise AND; HD: Hamming distance; t(D): total allowable loss for D
- D1 and D2 can be covered by the same cover iff w(D1, D2) ≥ 0
- Cover: D = D1 ∧ D2
- Example: clustering data blocks 111111 and 111101 (see the sketch below)
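The edge weight and the slide's clustering example in Python (the budget t(D) = 1 is chosen only to make the example concrete):

    import numpy as np

    def cover_weight(D1, D2, t):
        """w(D1, D2) = min(t(D1) - HD(D1, D1 & D2), t(D2) - HD(D2, D1 & D2)).
        The candidate cover D1 & D2 only clears bits, so replacing either
        block by it is always a one-sided 1->0 change."""
        C = D1 & D2
        hd = lambda a, b: int(np.count_nonzero(a != b))  # Hamming distance
        return min(t(D1) - hd(D1, C), t(D2) - hd(D2, C)), C

    D1 = np.array([1, 1, 1, 1, 1, 1])   # 111111
    D2 = np.array([1, 1, 1, 1, 0, 1])   # 111101
    w, C = cover_weight(D1, D2, t=lambda D: 1)
    print(w, C)   # 0 [1 1 1 1 0 1]: w >= 0, so cover 111101 serves both blocks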
14. Description of Algorithm Pieces

Index                    | Description                                        | A1 | A2.1 | A2.2 | A2.3 | A3
Benchmark                | Compress matrix using JBIG1                        | ✓  |      |      |      |
Loss introduction        | Proportional loss                                  |    |      |  ✓   |      | ✓
Loss introduction        | Fixed speckle loss                                 |    |      |      |  ✓   |
JBIG lossless components | JBIG2 PMS                                          |    |  ✓   |  ✓   |  ✓   | ✓
JBIG lossless components | JBIG2 SPM (lossless)                               |    |  ✓   |  ✓   |  ✓   |
JBIG lossless components | Singleton exclusion (singletons coded with JBIG1)  |    |  ✓   |  ✓   |  ✓   | ✓
Compress dictionary      | JBIG1 on reference blocks                          |    |  ✓   |  ✓   |  ✓   | ✓
15. General Compression Algorithm (flowchart)
- Segment the data matrix into blocks
- If lossy (Yes branch): apply the asymmetric cover heuristic for one-sided loss and exclude singletons (A2, A3)
- If lossless (No branch): proceed directly to coding
- Perform lossless compression (JBIG1, JBIG2 PMS, and JBIG2 SPM) on the data matrix
- Compress the dictionary using JBIG1 (A2, A3)
16. Experimental Results
- A2.1 is the best lossless fill compression method, with an average improvement of 29.93% over Bzip2
- A1 gives competitive compression ratios, with an average improvement of 28.7% over Bzip2
- A2.2 and A3 perform similarly in all test cases
- Larger loss yields better compression ratios
17. Dictionary Fits SREFs

Fill matrix before loss:
111000101
111000111
111000111
000101000
000101000
000101000
101000000
111000000
101000000

After one-sided (1→0) loss:
101000101
101000101
101000101
000101000
000101000
000101000
101000000
101000000
101000000

After loss, every nonzero 3×3 block equals the single dictionary entry
101
101
101
so the pattern reduces to repeated placements of one cell, i.e., SREFs (see the sketch below).
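A quick Python check of the slide's example: after one-sided loss, every nonzero 3×3 block matches one dictionary entry, which is exactly the condition for emitting the pattern as repeated SREFs (helper names are ours):

    import numpy as np

    def tiles_from_one_entry(M: np.ndarray, entry: np.ndarray, bs: int = 3) -> bool:
        """True iff every nonzero bs-by-bs block of M equals `entry`,
        i.e. the layout is expressible as SREFs to a single cell."""
        m, n = M.shape
        for i in range(0, m, bs):
            for j in range(0, n, bs):
                blk = M[i:i + bs, j:j + bs]
                if blk.any() and not np.array_equal(blk, entry):
                    return False
        return True

    rows = ["101000101"] * 3 + ["000101000"] * 3 + ["101000000"] * 3
    M = np.array([[int(c) for c in r] for r in rows])
    entry = np.array([[1, 0, 1]] * 3)
    print(tiles_from_one_entry(M, entry))   # True for the lossy matrix above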
18. Geometry Compression Operators
- OASIS Repetition Types (figure)
19. Conclusion and Future Research
- We have implemented algorithms based on JBIG methods, in combination with the new concept of one-sided loss, to compress binary data files of dummy fill features
- JBIG1 is quite effective. Our new heuristics A2-A3 and the fixed speckle loss heuristic offer better compression with slower runtime, especially as data files become larger
- Ongoing research examines synergies between fill generation and compression, as well as compression techniques that exploit constructs in the GDSII standard (AREF and SREF) and the new OASIS format (8 repetition types) for layout data
20. Thank You!
22. Experimental Results (Cont'd)
- For lossless compression, A1 is the most cost-effective method, taking only 2.7× longer than Bzip2 on average. A2.1 is nearly as cost-effective, but takes 5.9× longer than Bzip2 on average
- A3 is the most cost-effective proportional-loss method, taking 3.7× longer than Bzip2 on average. The running time of A2.2 is 9.4× longer than Bzip2 on average with proportional loss ratio k = 0.2, and 10.3× longer with k = 0.4