1
ERICSSON MEETS SMID - Improve
April, 2011
2
Agenda
  • 4th April
  • Ericsson presentation
  • Statistical tools in manufacturing
  • DMAIC/IDDOV
  • 7th April
  • Define
  • Measure
  • 11th April
  • Analyze
  • 14th April
  • Improve
  • Control

3
DMAIC Chart
  • Define
  • Understand the task and its financial impact.
  • Task selection matrix
  • SMART review
  • Stakeholder map
  • Risk Management
  • SWOT analysis
  • Process map
  • VOC and break down to CTQs
  • 7MT
  • Affinity diagram
  • Measure
  • Develop and execute an appropriate data
    collection method.
  • Process map
  • Data collection table
  • Pareto diagram
  • 7QCT
  • Measurement system analysis
  • Sampling technique
  • SIPOC
  • Gauge R&R, gauge attribute
  • Capability analysis
  • Benchmark
  • Taguchi loss functions
  • Analyze
  • Find the root causes.
  • Fishbone diagram
  • Correlation analysis
  • 7QCT
  • Hypothesis testing
  • Regression analysis
  • DOE
  • ANOVA
  • 7MT
  • Data transformations
  • Simulations
  • Improve
  • Generate and implement solutions.
  • FMEA risk analysis
  • Process map
  • Poka-Yoke
  • Hypothesis testing
  • Loss functions
  • Cost/Benefit selection
  • Pugh Concept Selection
  • Control
  • Ensure that the results will last.
  • Documentation, standardization and training
  • 7QCT
  • SPC
  • Business case verification

(Slide legend: tools are grouped into a Light and a Comprehensive tool set.)
4
FMEA
  • FAILURE MODE AND EFFECTS ANALYSIS

5
FMEA
Focus on observable behaviors
6
FMEA
  • Why?
  • If it can go wrong - it will go wrong!
  • Prevention is better than cure
  • Where?
  • Can be applied to any process
  • Who?
  • A team of people connected with the process

"A large safety factor does not necessarily
translate into a reliable product. Instead, it
often leads to an overdesigned product with
reliability problems. Failure Analysis Beats
Murphy's LawMechanical Engineering , September
1993
7
FMEA worksheet
8
FMEA severity (SEV) rating
  • For each failure mode, decide on the impact on
    the product or operation when it occurs.
  • Rate this impact in the column labeled SEV
    (severity).
  • Establish your baseline for the analysis, i.e.
    SEV = 10 means death of a human or machine failure.
  • A SEV rating cannot change when improvement
    actions are put in place unless the design has
    changed.

Worksheet excerpt (columns: Potential Failure Modes,
Effect of Failure Mode, Severity Rating): failure mode
"Output loss from pre-amp"; effects "Loss of signal
from 2nd RF amp" and "Loss of position, velocity,
time"; severity rating 5.
9
FMEA occurrence (OCC) rating
  • For each potential failure mode come up with one
    or more potential causes.
  • Rate the probability of each potential cause
    occurring and place the rating in the column
    labeled OCC (occurrence).

Worksheet excerpt (columns: Potential Failure Effects,
Potential Causes, SEV, OCC): effect "Receiver output
data loss, track loss; loss of position, velocity,
time" (SEV 5); potential causes "C1 short" and
"U21 function", each with a probability-of-occurrence
(OCC) rating.
10
FMEA detection (DET) rating
  • For each potential cause, identify the current
    controls which are in place to prevent or detect
    the failure mode. Rate the ability of each
    current control to prevent or detect the failure
    mode once it occurs. Place the rating in the
    column labeled DET (detection).

Current Controls for each failure mode.
  • Ability to prevent or detect the failure mode.

Worksheet excerpt (columns: Potential Causes, Current
Controls, OCC, DET, RPN): cause "C1 short" with current
control "Test PR-20 HW-5" (OCC 2, DET 2, RPN 32); cause
"U21 function" with no current control (OCC 6, DET 6,
RPN 288).
11
RPN calculation → action plan
  • Multiply the SEV, OCC and DET ratings together
    and place the value in the RPN column. The
    largest RPN numbers should get the greatest
    focus. Any SEV with a value of 10 should get
    attention regardless of the OCC and DET values.
    For those RPNs which warrant corrective action,
    list the recommended actions and the person
    responsible for implementation (see the sketch
    below the worksheet excerpt).

Worksheet excerpt (columns: Potential Failure Effects,
Potential Causes, Current Controls, SEV, OCC, DET,
RPN, Recommended actions, Resp.): effect "Motor frame
is unstable"; cause "Operator skill/training"; current
controls "Operator knowledge/experience; ticket
specifies, but coded"; SEV 3, with OCC and DET ratings
of 6 and 1, giving RPN = 3 x 6 x 1 = 18; recommended
action "Coding chart to be issued and displayed by
machine"; responsible: Darren Wooler.
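The RPN arithmetic above is easy to automate. Below is a minimal Python sketch, with hypothetical worksheet rows and ratings (loosely echoing the numbers on these slides), that computes RPN = SEV x OCC x DET and ranks the rows so the largest RPNs get attention first.

```python
# Minimal sketch: compute RPN = SEV x OCC x DET for hypothetical FMEA rows
# and rank them so the largest RPNs get attention first.
def add_rpn(rows):
    """Attach an RPN to each row and return the rows sorted by descending RPN."""
    for row in rows:
        row["RPN"] = row["SEV"] * row["OCC"] * row["DET"]
    return sorted(rows, key=lambda r: r["RPN"], reverse=True)

# Hypothetical worksheet rows; the ratings are illustrative only.
worksheet = [
    {"cause": "Operator skill/training", "SEV": 3, "OCC": 6, "DET": 1},
    {"cause": "U21 function",            "SEV": 8, "OCC": 6, "DET": 6},
    {"cause": "C1 short",                "SEV": 8, "OCC": 2, "DET": 2},
]

for row in add_rpn(worksheet):
    # Any row with SEV = 10 deserves attention regardless of its RPN.
    flag = " (review regardless of RPN)" if row["SEV"] == 10 else ""
    print(f'{row["cause"]}: RPN = {row["RPN"]}{flag}')
```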
12
Impact of corrective actions
  • When making corrective actions, use the available
    data to choose the best corrective action at the
    time.
  • Correcting SEV most probably needs a design change.
  • Correcting OCC may need a design and process change.
  • Correcting DET may need a design or process change.

13
Action results
  • After corrective action has been taken, place a
    brief summary of the results in the Actions
    Taken block. A new value should be assessed for
    the severity, occurrence and detection of the
    failure mode and root cause with the recommended
    action implemented. Place these values in the
    SEV, OCC and DET columns and calculate the new
    RPN.

New SEV, OCC and DET values.
  • Summary of actions completed.

Worksheet excerpt: recommended action "Coding chart to
be issued and displayed"; action taken "Chart
displayed, operator informed"; new SEV 3, OCC 1,
DET 3, new RPN = 9. A second row (recommended action
"Clarify effect of with feet / no feet measure to drg
spec"; action taken "machine all viking as no feet -
add to works spec"; responsible: Darren Wooler) shows
a new RPN of 20.
14
Example of a Pareto diagram in FMEA analysis

Focus on the critical few high RPNs in your Pareto
diagram and make changes in the operations to decrease
SEVERITY or OCCURRENCE or, alternatively, to increase
the opportunity of DETECTION.
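As a rough illustration of such a Pareto diagram, the Python sketch below (hypothetical causes and RPN values) plots the RPN bars in descending order with a cumulative-percentage line, so the critical few stand out.

```python
# Pareto diagram of RPN values (hypothetical data): bars in descending order
# plus a cumulative-percentage line to highlight the critical few.
import matplotlib.pyplot as plt

rpn_by_cause = {"U21 function": 288, "C1 short": 32, "Operator training": 18,
                "Worn tooling": 12, "Material variation": 6}

pairs = sorted(rpn_by_cause.items(), key=lambda p: p[1], reverse=True)
labels, values = zip(*pairs)
total = sum(values)
cumulative = [100 * sum(values[:i + 1]) / total for i in range(len(values))]

fig, ax = plt.subplots()
ax.bar(labels, values)
ax.set_ylabel("RPN")
ax.set_title("Pareto of RPN by potential cause")
ax2 = ax.twinx()                      # second axis for the cumulative line
ax2.plot(labels, cumulative, marker="o", color="red")
ax2.set_ylabel("Cumulative %")
fig.tight_layout()
plt.show()
```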
15
POKA-YOKE
  • MISTAKE PROOFING

16
History
  • Poka-yoke was invented by Shigeo Shingo in the
    1960s.
  • The term "poka-yoke" comes from the Japanese
    words "poka" (inadvertent mistake) and "yoke"
    (prevent).
  • The essential idea of poka-yoke is to design your
    process so that mistakes are impossible or at
    least easily detected and corrected.

17
(No Transcript)
18
Poka-yoke major categories
  • A prevention device engineers the process so that
    it is impossible to make a mistake at all.
    Prevention devices remove the need to correct a
    mistake, since the user cannot make the mistake
    in the first place.
  • A detection device signals the user when a
    mistake has been made, so that the user can
    quickly correct the problem. Detection devices
    typically warn the user of a problem, but they do
    not enforce the correction.

19
What Causes Defects?
  • Process Variation From
  • Poor procedures or standards
  • Machines
  • Non-conforming material
  • Worn tooling
  • Human Mistakes
  • Except for human mistakes, these conditions can be
    predicted, and corrective action can be
    implemented to eliminate the cause of defects.

20
Ten types of human mistakes
  • Forgetfulness (Not Concentrating)
  • Misunderstanding (Jump to Conclusions)
  • Wrong identification (View Incorrectly...Too Far
    Away)
  • Lack of experience
  • Willful (ignoring rules or procedure)
  • Inadvertent or sloppiness (Distraction, Fatigue)
  • Slowness (Delay in Judgment)
  • Lack of standardization (Written/Visual)
  • Surprise (unexpected machine operation, etc.)
  • Intentional (sabotage)

21
Methods for using Poka-yoke
  • Poka-yoke systems consist of three primary
    methods
  • Contact
  • Counting
  • Motion-Sequence
  • Each method can be used in a control system or a
    warning system.
  • Each method uses a different process prevention
    approach for dealing with irregularities.

22
COST/BENEFIT ANALYSIS
23
Costs and benefits: Example screening of ASICs
subject to failure risk related to a process factor
(Chart: cost and benefit versus process factor
screening limit; vertical axis 0.00 to 2.50,
horizontal axis 251 to 260.)
24
Costs and benefits: Finding the optimum
(Chart: cost and benefit versus process factor
screening limit; vertical axis 0.00 to 2.50,
horizontal axis 251 to 260.)
25
Hypothesis testing
26
t-Test
Power and Sample Size
Flow chart for the 1-sample, 2-sample and paired t-tests:
  • Formulate the hypothesis.
  • Plot, plot, plot the data.
  • Test the assumption of normality (Anderson-Darling):
    H0: data are normal; HA: data are not normal;
    treat the data as normal if the p-value > 0.05.
  • Test equality of variances (H0: σ1² = σ2²;
    HA: σ1² ≠ σ2²): F-test for continuous normal data,
    Levene's test for non-normal data.
  • Run the selected test: 1-sample t, 2-sample t, or
    paired t.
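The flow above can be followed step by step with SciPy. The sketch below uses simulated data and makes two simplifications: it compares the Anderson-Darling statistic against the 5% critical value (SciPy does not return a p-value for this test), and it uses Levene's test for the variance check in both branches.

```python
# Step-by-step sketch of the 2-sample t-test flow, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=1.0, size=30)
y = rng.normal(loc=10.4, scale=1.0, size=30)

def looks_normal(sample, level=5.0):
    """Anderson-Darling check: keep the normality assumption if the test
    statistic is below the critical value at the given significance level."""
    result = stats.anderson(sample, dist="norm")
    idx = list(result.significance_level).index(level)
    return result.statistic < result.critical_values[idx]

normal = looks_normal(x) and looks_normal(y)

# H0: sigma_x^2 = sigma_y^2. Levene's test (robust to non-normality) is used
# here for both branches to keep the sketch short.
_, p_var = stats.levene(x, y)
equal_var = p_var > 0.05

# 2-sample t-test (Welch's version when equal_var is False).
t_stat, p_val = stats.ttest_ind(x, y, equal_var=equal_var)
print(f"normal={normal}, equal_var={equal_var}, t={t_stat:.2f}, p={p_val:.4f}")
```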
27
F-test
Flow chart for the 1-sample and 2-sample cases:
  • Formulate the hypothesis.
  • Plot, plot, plot the data.
  • Test the assumption of normality (Anderson-Darling):
    H0: data are normal; HA: data are not normal;
    treat the data as normal if the p-value > 0.05.
  • Test equality of variances (H0: σ1² = σ2²;
    HA: σ1² ≠ σ2²): F-test for continuous normal data,
    Levene's test for non-normal data.
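SciPy has no single call for the variance F-test, so the sketch below (simulated data) builds it directly from the F distribution: F = s1²/s2² with (n1 - 1, n2 - 1) degrees of freedom and a two-sided p-value.

```python
# F-test for equal variances (H0: sigma1^2 = sigma2^2), built directly from
# the F distribution; data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(scale=1.0, size=25)
b = rng.normal(scale=1.5, size=25)

f_stat = np.var(a, ddof=1) / np.var(b, ddof=1)      # ratio of sample variances
dfn, dfd = len(a) - 1, len(b) - 1
# Two-sided p-value: twice the smaller tail probability.
p_value = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
```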
28
SAMPLE SIZE
29
Student Curve - t Distribution
  • t curves vary with sample size - they get wider
    and flatter than normal as sample size is reduced
  • In a normal curve, 95% of the sampling distribution
    is contained within ±1.96 se
  • In a t distribution, 95% is within ±2.131 se and
    ±2.776 se for sample sizes of 16 and 5,
    respectively
  • As sample size approaches n = 30, the t
    distribution approaches the normal distribution
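The quoted critical values can be reproduced with SciPy's t distribution; this small sketch prints the two-sided 95% limits for a few sample sizes.

```python
# Reproduce the critical values quoted above: 95% of a t distribution lies
# within +/- t_crit standard errors, where t_crit depends on df = n - 1.
from scipy import stats

for n in (5, 16, 1000000):
    t_crit = stats.t.ppf(0.975, df=n - 1)
    print(f"n = {n:>7}: 95% within +/- {t_crit:.3f} se")
# n = 16 gives about 2.131, n = 5 gives about 2.776, and a very large n
# approaches the normal value of 1.96.
```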

30
Hypothesis Testing - Options and Errors
t-test: H0: μy = μx; Ha: μy ≠ μx, μy < μx, or μy > μx
F-test: H0: σy = σx; Ha: σy ≠ σx, σy < σx, or σy > σx
α-risk: we reject a null hypothesis that is in fact
true
β-risk: we fail to reject a null hypothesis that is in
fact false
31
(No Transcript)
32
Power and Sample Size
  • The Power of a test is the probability that it
    will allow you to reject H0 when H0 is wrong (Ha
    is really true). (power = 1 - β)
  • The following factors have a direct bearing on
    power:
  • As the alpha (α) level increases, the power
    increases
  • As the variability of the population (σ)
    increases, the power decreases
  • As the difference (effect size) increases, the
    power increases
  • As the sample size increases, the power increases
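A small sketch of these relationships, using statsmodels' TTestPower for a one-sample t-test. The numbers are hypothetical; note that the effect size here is the difference divided by σ, so a larger population σ shows up as a smaller effect size.

```python
# A quick look (hypothetical numbers) at how alpha, sigma, effect size and
# sample size move the power of a one-sample t-test, using statsmodels.
# Effect size = difference / sigma, so a larger sigma means a smaller effect size.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

print("baseline      :", round(analysis.power(effect_size=0.5, nobs=25, alpha=0.05), 3))
print("larger alpha  :", round(analysis.power(effect_size=0.5, nobs=25, alpha=0.10), 3))
print("larger sigma  :", round(analysis.power(effect_size=0.25, nobs=25, alpha=0.05), 3))
print("larger effect :", round(analysis.power(effect_size=0.8, nobs=25, alpha=0.05), 3))
print("larger sample :", round(analysis.power(effect_size=0.5, nobs=60, alpha=0.05), 3))
```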

33
Front Panel Manufacturer
  • A front panel manufacturer wants to detect
    significant changes in front panel lengths. They
    sample thousands of them because it is cheap and
    quick to do. But this huge sample makes the test
    too sensitive: the blue line shows it will sound
    the alarm if the average length differs by a
    trivial amount (0.05). This Power Curve shows they
    are wasting resources on excessive precision. A
    sample size of just 100 will detect meaningful
    differences (0.25) without "crying wolf" at every
    negligible blip.
  • You also want the confidence in your results
    that's appropriate for your situation (testing
    seat belts demands a greater degree of certainty
    than testing shampoo). We measure this certainty
    with statistical power: the probability your
    test will detect an effect that truly exists.

34
Power and Sample size
  • "We've always done it this way." That's why a PCB
    manufacturer would sample 10 units to test
    whether their strength meets the target.
  • According to the Power Curve, this small sample
    size made their test incapable of detecting
    important effects. They must sample 34 PCBs to
    detect meaningful differences (0.50).
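The 34-unit figure can be reproduced with a standard sample-size calculation; the sketch below assumes a one-sample t-test with effect size 0.50, α = 0.05 and a target power of 0.80 (the power target is an assumption, not stated on the slide).

```python
# Sketch of the sample-size calculation behind the "34 PCBs" figure: one-sample
# t-test, effect size 0.50, alpha 0.05, target power 0.80 (the 0.80 target is
# an assumption, not stated on the slide).
import math
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(math.ceil(n))  # about 34
```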

35
Acceptance sampling
36
Acceptance Sampling: how to reduce inspection costs
  • Quality inspection department receives a shipment
    of 540 capacitors every week. You need to develop
    a sampling plan to make decisions regarding the
    lot without having to inspect all of the
    capacitors. 
  • Because some defects are inevitable, you and your
    supplier decide on quality levels and risks that
    allow some defects while maintaining
    profitability for both of you.

37
Acceptance Sampling (2)
  • You and the supplier agree that the worst quality
    level you are willing to accept on a regular basis
    is 2% defective (AQL) and the quality level that
    you want to reject most of the time is 8%
    defective (RQL).

38
Acceptance Sampling (3)
  • It is important to notice that
  • The sampling plan that has been suggested is a
    good starting point.
  • Sometimes people involved in the sampling
    procedure want you to adjust the sample size and
    acceptance number. In cases like these we should
    try to generate multiple plans at the same time
    and compare OC curves to find the best plan (see
    the sketch after this list). The following
    scenarios might have to be considered:
  • More convenient sample size The inspectors find
    it most convenient to inspect 10 capacitors from
    each of the nine boxes in the shipment. They want
    you to change the sample size from 98 to 90.
  • Smaller sample size Looking to save time, your
    supervisor suggests taking a much smaller sample.
    He wants you to reduce the sample size from 98 to
    50.
  • Larger acceptance number Your supplier is
    nervous that his shipments will be unfairly
    rejected. He wants you to raise the acceptance
    number to 10, accepting up to 10 defective
    capacitors before returning an entire lot.
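One way to compare these candidate plans is to compute each plan's probability of accepting a lot at the agreed AQL (2% defective) and RQL (8% defective). The sketch below assumes a simple binomial model (accept the lot if the sample contains at most c defectives); the plan parameters come from the scenarios above.

```python
# Compare candidate single-sampling plans: probability of accepting a lot
# (accept if the sample has at most c defectives) at the agreed AQL and RQL.
# The binomial model is an assumption; plan parameters come from the slides.
from scipy.stats import binom

AQL, RQL = 0.02, 0.08
plans = [
    ("original,   n=98, c=4 ", 98, 4),
    ("convenient, n=90, c=4 ", 90, 4),
    ("smaller,    n=50, c=4 ", 50, 4),
    ("larger c,   n=98, c=10", 98, 10),
]

for name, n, c in plans:
    pa_aql = binom.cdf(c, n, AQL)   # producer's risk = 1 - Pa at the AQL
    pa_rql = binom.cdf(c, n, RQL)   # consumer's risk = Pa at the RQL
    print(f"{name}: Pa(AQL) = {pa_aql:.3f}, Pa(RQL) = {pa_rql:.3f}")
```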

39
Acceptance Sampling (4)
  • The black line represents the original sampling
    plan, with a sample size of 98 (n) and an
    acceptance number of 4 (c). The red line
    represents a relatively small departure from the
    original plan, showing a negligible reduction in
    the producer's risk and a slight increase in the
    consumer's risk. You are willing to change your
    sample size to a more convenient one to keep your
    inspectors happy.
  • The green and blue lines represent more
    significant changes to the sampling plan, which
    result in more risk than you are willing to
    accept.
  • Show your supplier that the resulting consumer's
    risk is much too high for you to consider raising
    your acceptance number to 10. Perhaps you will
    evaluate other acceptance numbers between 4 and
    10.

40
(No Transcript)