1
SRAM Yield Rate Optimization EE201C Final Project
  • Spring 2010
  • Chia-Hao Chang, Chien-An Chen

2
Exhaustive Search
  • 4 variables: Leff1, Vth1, Leff2, Vth2.
  • Evenly slice each variable and enumerate all possible
    combinations. Slicing each Vth into 10 values and each
    Leff into 3 gives 10 × 10 × 3 × 3 = 900 nominal points.
  • Run a Monte Carlo simulation at each nominal point and
    compare the yield rates. To reduce the number of samples
    required, we run a Quasi Monte Carlo simulation with 200
    samples at each nominal point, so 900 × 200 = 180,000
    points must be simulated in total (a minimal sketch of
    this loop follows below).
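
For reference, the loop below is a minimal Python sketch of this
exhaustive grid plus Quasi Monte Carlo search (the original flow drove
SPICE from MATLAB); the parameter ranges and the sram_success stub are
hypothetical placeholders, not the actual testbench.

  import numpy as np
  from itertools import product
  from scipy.stats import qmc

  def sram_success(nominal, offset):
      # Placeholder for one SPICE run: True if the cell still works
      # at this nominal point under this variation sample.
      return True

  vth_grid = np.linspace(0.2, 0.5, 10)      # assumed Vth range, 10 slices
  leff_grid = np.linspace(9e-8, 1.1e-7, 3)  # assumed Leff range, 3 slices

  offsets = qmc.Sobol(d=4, seed=0).random(200)  # 200 low-discrepancy samples

  best_point, best_yield = None, -1.0
  # 3 x 10 x 3 x 10 = 900 nominal points, 200 samples each -> 180,000 runs
  for nominal in product(leff_grid, vth_grid, leff_grid, vth_grid):
      y = sum(sram_success(nominal, o) for o in offsets) / len(offsets)
      if y > best_yield:
          best_point, best_yield = nominal, y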

Too Slow!
                 Power (W)   Area (m²)   Voltage (mV)   Yield
Initial Design   8.988e-6    1.881e-13   164.2677       0.534
Optimal Design   8.8051e-6   1.292e-13   163.657        0.998
3
Improvements: Strategy 1
  • Instead of running a Monte Carlo simulation at each
    nominal point, we spend the effort on a more detailed
    uniform sampling of the SRAM cell's design space.
  • The result is a 4-D pass/fail matrix with 40k entries.
  • This is the same effort as 40 nominal points with 1k
    Monte Carlo samples each.

4
Strategy 1
  • Idea: with the 4-D matrix, use nearby grid data to
    estimate a yield value at each point.
  • Assume the grid spacing equals one standard deviation of
    the parameter variation.
  • Then we can approximate the yield rate at a point as the
    expectation of success over its neighbors.

[Figure: a Gaussian discretized onto grid points n-2 … n+2,
with weights F(0) = 0.38, F(±1) = 0.24, F(±2) = 0.061.]
5
Strategy 1
  • The yield at a nominal point v should be the expectation
    of the success function under the distributed variation:
    Y(v) = sum over offsets i in {-2, …, 2}^4 of
    F(i1) · F(i2) · F(i3) · F(i4) · success(v + i),
    with the weights F from the figure on the previous slide.
  • With 4 dimensions this means 5^4 = 625 neighbor points to
    check per nominal point.
  • Result: a yield value for each nominal point (see the
    sketch below).
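
A minimal sketch of this weighted-neighbor estimate follows (Python
here, though the computation was reportedly done in MATLAB). Because
the Gaussian weights factor per axis, the 625-neighbor weighted sum is
just four passes of the same 1-D filter over the 4-D success matrix;
the 10 × 20 × 10 × 20 = 40k matrix shape and its random fill-in are
assumptions for illustration only.

  import numpy as np
  from scipy.ndimage import convolve1d

  # Discretized N(0, 1) cell weights for a grid spacing of one sigma:
  # F(0) = 0.38, F(+-1) = 0.24, F(+-2) = 0.061 (tails past 2.5 sigma dropped).
  w = np.array([0.061, 0.24, 0.38, 0.24, 0.061])

  # Stand-in for the 4-D pass/fail matrix from the uniform SPICE sweep.
  rng = np.random.default_rng(0)
  success = (rng.random((10, 20, 10, 20)) < 0.5).astype(float)

  # Four separable 1-D passes == one weighted sum over 5**4 = 625 neighbors.
  yield_est = success
  for axis in range(4):
      yield_est = convolve1d(yield_est, w, axis=axis, mode='nearest')

  # yield_est[i, j, k, l] now approximates the yield at that nominal point.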
6
Strategy 1 Results
  • Computational effort: 40,000 SPICE simulations ≈ 30 mins;
    40,000 × 625 MATLAB yield computations ≈ 30 mins.

                 Mn1 Leff (m)   Mn1 Vth (V)   Mn2 Leff (m)   Mn2 Vth (V)
Initial Design   1e-07          0.2607        1e-07          0.2607
Optimal Design   9.722e-08      0.4368        9.500e-08      0.2

                 Power (W)   Area (m²)    Voltage (mV)   Yield
Initial Design   8.988e-6    1.881e-13    164.2677       0.534
Optimal Design   8.854e-6    1.1496e-13   163.4019       0.9999
7
Strategy 2
  • Idea: the successful data points are not sparse; they lie
    in clusters. If we can locate these clusters efficiently,
    the search time can be greatly reduced.

After locating these clusters, we only need to search the
points inside them. Intuitively, the centroid of the biggest
cluster should have the highest yield rate.
8
Strategy 2
  • Since points inside the clusters tend to have high yield
    rates, more samples are needed to achieve an accurate
    yield approximation. However, traditional Monte Carlo
    simulation is time consuming and does not take the
    distribution of the data into account; we should put more
    emphasis on the boundary instead of the center.
  • Importance sampling can help us here (a sketch follows
    below)!
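
A self-contained sketch of the importance-sampling idea, under stated
assumptions: the true variation is taken as standard normal, the
proposal is a wider normal that lands more samples near the boundary,
and fails() is a made-up geometric stand-in for a SPICE run. Each
sample is reweighted by the likelihood ratio p/q, which keeps the
failure-probability estimate unbiased.

  import numpy as np
  from scipy.stats import norm

  def fails(x):
      # Hypothetical failure region: variation vectors far from nominal.
      # A real check would be one SPICE simulation per row of x.
      return np.linalg.norm(x, axis=1) > 3.0

  rng = np.random.default_rng(0)
  n, d, sigma_q = 2000, 4, 2.0
  x = rng.normal(0.0, sigma_q, (n, d))   # proposal q = N(0, sigma_q^2 I)

  # Likelihood ratio p(x)/q(x) for the true distribution p = N(0, I).
  w = np.prod(norm.pdf(x) / norm.pdf(x, scale=sigma_q), axis=1)

  fail_prob = np.mean(fails(x) * w)      # unbiased estimate of P(fail) under p
  print("estimated yield:", 1.0 - fail_prob)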

9
The Strategy 2 flow:

1. Given the ranges of Vth and Leff, evenly segment each
   variable; 2,500 points in total are simulated.
2. Run the K-means algorithm to find the clusters in the
   4-dimensional space; we locate the 5 cluster centroids.
3. Check 10% variation around the centroids and apply
   importance sampling to approximate the yield rate.
4. Extract the 10 points with the highest yield rate found
   during importance sampling and run a more accurate Quasi
   Monte Carlo simulation on them.
5. Sort the final results by our decision rules, in the
   order Yield Rate > Power > Area.

Best point found: 99.98% yield rate at
Leff1 = 1e-07, Vth1 = 0.49, Leff2 = 1e-07, Vth2 = 0.2.
10
K-means Algorithm
Given an initial set of k means m1(1), …, mk(1), which can be
randomly assigned:

Assignment step: assign each observation to the cluster with
the closest mean.

Update step: recompute each mean as the centroid of the
observations currently assigned to its cluster.

Iterate until the assignment step no longer changes anything
(a sketch follows below).
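
The two steps above map directly onto a few lines of code. Below is a
minimal NumPy sketch of the algorithm as described; in practice
MATLAB's built-in kmeans (or scikit-learn's KMeans) does the same job.

  import numpy as np

  def kmeans(points, k, iters=100, seed=0):
      rng = np.random.default_rng(seed)
      # Initial means: k randomly chosen observations.
      means = points[rng.choice(len(points), size=k, replace=False)]
      for _ in range(iters):
          # Assignment step: nearest mean for every observation.
          dist = np.linalg.norm(points[:, None, :] - means[None, :, :], axis=2)
          labels = dist.argmin(axis=1)
          # Update step: each mean becomes its cluster's centroid
          # (a cluster left empty keeps its previous mean).
          new_means = np.array([points[labels == j].mean(axis=0)
                                if np.any(labels == j) else means[j]
                                for j in range(k)])
          if np.allclose(new_means, means):  # assignments have stabilized
              break
          means = new_means
      return means, labels

  # e.g., locate 5 clusters among 2,500 points in the 4-D design space:
  pts = np.random.default_rng(1).normal(size=(2500, 4))
  centroids, labels = kmeans(pts, k=5)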
11
Strategy 2 Results
Computational effort: (2500 + 5×100 + 50) SPICE simulations
≈ 15 mins; K-means algorithm ≈ 10 seconds.
                 Mn1 Leff (m)   Mn1 Vth (V)   Mn2 Leff (m)   Mn2 Vth (V)
Initial Design   1e-07          0.2607        1e-07          0.2607
Optimal Design   1e-07          0.49          1e-07          0.2

                 Power (W)   Area (m²)   Voltage (mV)   Yield
Initial Design   8.988e-6    1.881e-13   164.2677       0.534
Optimal Design   8.667e-6    1.881e-13   163.4607       0.998
12
Conclusion
  • The two strategies can be chosen based on the nature of
    the distribution: Strategy 1 favors sparser data, while
    Strategy 2 favors clustered data.
  • Exhaustive search is guaranteed to find the globally
    optimal point, but it is usually not practical in a real
    design.
  • The two methods we proposed can both find the optimal
    points far more efficiently.