Title: Atomic Lab
1. Atomic Lab
- Introduction to Error Analysis
2. Significant Figures
- For any quantity x, the best measurement of x is written as x_best ± δx
- In an introductory lab, δx is rounded to 1 significant figure
- Example: δx = 0.0235 → δx = 0.02
- g = 9.82 ± 0.02
- Right and Wrong
- Wrong: speed of sound = 332.8 ± 10 m/s
- Right: speed of sound = 330 ± 10 m/s
- Always keep extra significant figures throughout the calculation; otherwise rounding errors are introduced
3. Statistically the Same
- Student A: 30 ± 2
- Student B: 34 ± 5
- Since the uncertainties for A and B overlap, these numbers are statistically the same
4. Precision
- Mathematical Definition (see below)
- Precision of the speed of sound = 10/330 ≈ 0.03, or 3%
- So often we write: speed of sound = 330 m/s ± 3%
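The mathematical definition assumed in the example above is the fractional uncertainty:
\[ \text{precision} = \frac{\delta x}{\lvert x_{\text{best}} \rvert} \]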
5. Propagation of Uncertainties: Sums and Differences
- Suppose that x, …, w are independent measurements with uncertainties δx, …, δw and you need to calculate q = (x + … + z) − (u + … + w)
- If the uncertainties are independent, i.e. δw is not some function of δx, etc., then they add in quadrature (see below)
- Note: δq < δx + … + δz + δu + … + δw
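For independent uncertainties, the standard quadrature rule (which is why δq is smaller than the plain sum noted above) is:
\[ \delta q = \sqrt{(\delta x)^2 + \cdots + (\delta z)^2 + (\delta u)^2 + \cdots + (\delta w)^2} \]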
6. Propagation of Uncertainties: Products and Quotients
- Suppose that x, …, w are independent measurements with uncertainties δx, …, δw and you need to calculate a product or quotient of them (see below)
- If the uncertainties are independent, i.e. δw is not some function of δx, etc., then the fractional uncertainties add in quadrature (see below)
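For a quantity of the standard product/quotient form, with independent uncertainties, the quadrature rule applies to the fractional uncertainties:
\[ q = \frac{x \times \cdots \times z}{u \times \cdots \times w}, \qquad \frac{\delta q}{\lvert q \rvert} = \sqrt{\left(\frac{\delta x}{x}\right)^2 + \cdots + \left(\frac{\delta z}{z}\right)^2 + \left(\frac{\delta u}{u}\right)^2 + \cdots + \left(\frac{\delta w}{w}\right)^2} \]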
7. Functions of 1 Variable
- Suppose θ = 20 ± 3 deg and we want to find cos θ
- 3 deg is 0.05 rad
- |d(cos θ)/dθ| = |−sin θ| = sin θ
- δ(cos θ) = |sin θ| δθ = sin(20°)(0.05)
- δ(cos 20°) = 0.02 and cos 20° = 0.94
- So cos θ = 0.94 ± 0.02
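The general rule used in this calculation, for any function q of a single measured quantity x:
\[ \delta q = \left| \frac{dq}{dx} \right| \, \delta x \]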
8. Power Law
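A standard statement of the power-law rule, as a special case of the product rule above: if q = x^n, then
\[ \frac{\delta q}{\lvert q \rvert} = \lvert n \rvert \, \frac{\delta x}{\lvert x \rvert} \]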
9. Types of Errors
- Measure the period of a revolution of a wheel
- As we repeat the measurement, some values will come out higher and some lower
- These are called random errors
- In this case, they are caused by reaction time
10. What if the clock is slow?
- We would never know if our clock is slow; we would have to compare it to another clock
- This is a systematic error
- In some cases, there is not a clear difference between random and systematic errors
- Consider parallax
- Move your head around: random error
- Keep your head in one place: systematic error
11. Mean (or average)
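The usual definition, for N repeated measurements x_1, …, x_N:
\[ \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i \]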
12. Deviation
We need to calculate an average deviation or a standard deviation. To eliminate the possibility of a zero average deviation, we square each deviation d_i.
When you divide by N−1, the result is called the sample standard deviation; when dividing by N, the population standard deviation.
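With x̄ the mean of the N measurements, the deviations and the (sample) standard deviation are:
\[ d_i = x_i - \bar{x}, \qquad s_x = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} d_i^2} \]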
13. Standard Deviation of the Mean
The uncertainty in the best measurement is given by the standard deviation of the mean (SDOM).
If x_best is the mean, then s_best = s_mean.
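The SDOM is related to the standard deviation of the N individual measurements by:
\[ s_{\bar{x}} = \frac{s_x}{\sqrt{N}} \]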
14. Histograms
[Histogram figure: the vertical axis is the number of times a value has occurred; the horizontal axis is the value]
15. Distribution of a Craps Game
[Figure: histogram of craps-game outcomes, approximating a bell curve or normal distribution]
16. Bell Curve
[Bell-curve figure: the centroid or mean is at x̄; 68% of the population lies between x̄ − s and x̄ + s]
Between x̄ − 2s and x̄ + 2s lies 95% of the population; 2s is usually defined as the Error.
17. Gaussian
In the Gaussian, x_0 is the mean and s_x is the standard deviation. They are mathematically equivalent to the formulae shown earlier.
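The normalized Gaussian referred to here, in that notation, is commonly written:
\[ G(x) = \frac{1}{s_x \sqrt{2\pi}} \, e^{-(x - x_0)^2 / 2 s_x^2} \]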
18. Error and Uncertainty
- While definitions vary between scientists, most would agree to the following definitions
- Uncertainty of the measurement is the value of the standard deviation (1 s)
- Error of the measurement is the value of two times the standard deviation (2 s)
19. Full Width at Half Maximum
- A special quantity is the full width at half maximum (FWHM)
- The FWHM is found by taking ½ of the maximum value (usually at the centroid)
- The width of the distribution is measured from the point on the left side of the centroid where the frequency equals this half value
- It is measured to the corresponding point on the right side of the centroid
- Mathematically, the FWHM is related to the standard deviation by FWHM = 2.354 s_x
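The numerical factor comes from solving the Gaussian for its half-maximum points:
\[ \mathrm{FWHM} = 2\sqrt{2 \ln 2} \; s_x \approx 2.354 \, s_x \]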
20. Weighted Average
- Suppose each measurement has a unique uncertainty, such as
- x_1 ± s_1
- x_2 ± s_2
- …
- x_N ± s_N
- What is the best value of x?
21. We need to construct statistical weights
- We want measurements with small errors to have the largest influence, and the ones with the largest errors to have very little influence
- Let weight w_i = 1/s_i²
This formula (the weighted average below) can be used to determine the centroid of a Gaussian, where the weights are the values of the frequency for each measurement.
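With the weights defined above, the standard weighted-average formula is:
\[ x_{\text{wav}} = \frac{\sum_i w_i x_i}{\sum_i w_i}, \qquad w_i = \frac{1}{s_i^2} \]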
22. Least Squares Fitting
- What if you want to fit a straight line through your data?
- In other words, y_i = A x_i + B
- First, you need to calculate residuals
- Residual = Data − Fit, or
- Residual r_i = y_i − (A x_i + B)
- As the fit approaches the data, the residuals should be very small (or zero)
23. Big Problem
- Some residuals > 0
- Some residuals < 0
- If there is no bias, then r_j ≈ −r_k, and so r_j + r_k ≈ 0
- The way to correct this is to square r_j and r_k; the sum of the squares is then positive and greater than 0
24. Chi-square, χ²
We need to minimize the χ² function (defined below) with respect to A and B, so we take the partial derivatives of χ² with respect to these variables and set the resulting derivatives equal to 0.
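A standard definition of χ² for the straight-line fit, with s_i the uncertainty of point i (or a common uncertainty s_y), is:
\[ \chi^2 = \sum_{i=1}^{N} \frac{\left[ y_i - (A x_i + B) \right]^2}{s_i^2} \]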
25. Chi-square, χ²
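Setting the two partial derivatives to zero (taking equal uncertainties for simplicity) gives a pair of linear equations in A and B:
\[ A \sum x_i^2 + B \sum x_i = \sum x_i y_i, \qquad A \sum x_i + B N = \sum y_i \]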
26. Using Determinants
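Solving that 2×2 system with determinants (Cramer's rule) gives the quantities computed in the pseudocode that follows:
\[ \Delta = N \sum x_i^2 - \left( \sum x_i \right)^2, \qquad A = \frac{N \sum x_i y_i - \sum x_i \sum y_i}{\Delta}, \qquad B = \frac{\sum x_i^2 \sum y_i - \sum x_i \sum x_i y_i}{\Delta} \]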
27. A Pseudocode
- Dim x(100), y(100)
- N = 100
- ' Initialize the running sums
- xsum = 0
- ysum = 0
- x2sum = 0
- xysum = 0
- ' Accumulate the sums needed by the least-squares formulas
- For i = 1 To N
- xsum = xsum + x(i)
- ysum = ysum + y(i)
- xysum = xysum + x(i) * y(i)
- x2sum = x2sum + x(i) * x(i)
- Next i
- ' Slope A and intercept B from the determinant solution
- Delta = N * x2sum - xsum * xsum
- A = (N * xysum - xsum * ysum) / Delta
- B = (x2sum * ysum - xsum * xysum) / Delta
28. χ² Values
- If calculated properly, χ² (per degree of freedom) starts at large values and approaches 1
- This is because the residual at a given point should approach the value of the uncertainty
- Your best fit is given by the values of A and B which give the lowest χ²
- What if χ² is less than 1?!
- Your solution is over-determined, i.e. there are more degrees of freedom than data points
- Now you must change A and B until the χ² doesn't vary too much
29. Without Proof
30. Extending the Method
- Obviously, the method can be expanded to larger polynomials (see below)
- This becomes a matrix inversion problem
- Exponential Functions
- Linearize by taking the logarithm
- Solve as a straight line
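A sketch of both extensions (the coefficients here are illustrative): a polynomial fit solves for several coefficients at once, and an exponential is linearized before fitting:
\[ y = a_0 + a_1 x + a_2 x^2 + \cdots \quad \text{(solve the normal equations for the } a_k \text{ by matrix inversion)} \]
\[ y = C e^{Bx} \;\Rightarrow\; \ln y = \ln C + B x \quad \text{(fit } \ln y \text{ versus } x \text{ as a straight line)} \]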
31. Extending the Method
- Power Law
- Multivariate multiplicative function
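The same linearization applies here (C, n, a, b are illustrative constants):
\[ y = C x^{n} \;\Rightarrow\; \ln y = \ln C + n \ln x \]
\[ q = C x^{a} y^{b} \;\Rightarrow\; \ln q = \ln C + a \ln x + b \ln y \]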
32. Uglier Functions
- q = f(x, y, z)
- Use a gradient search method
- The gradient is a vector which points in the direction of steepest ascent
- ∇f gives a direction
- So follow −∇f (downhill) until it reaches a minimum (see the sketch below)
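A minimal sketch of such a gradient search in Python (the function names, fixed step size, toy example, and stopping rule here are illustrative assumptions, not part of the original method):

import numpy as np

def gradient_search(grad_f, p0, step=0.1, tol=1e-8, max_iter=10000):
    # Follow -grad(f) from the starting point p0 until the step becomes tiny.
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(p)              # gradient: direction of steepest ascent
        p_next = p - step * g      # step downhill, against the gradient
        if np.linalg.norm(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# Toy example: f(x, y) = (x - 1)^2 + 2*(y + 3)^2 has its minimum at (1, -3)
grad_f = lambda p: np.array([2.0 * (p[0] - 1.0), 4.0 * (p[1] + 3.0)])
print(gradient_search(grad_f, [0.0, 0.0]))   # prints approximately [ 1. -3.]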
33. Correlation Coefficient, r²
- r² starts at 0 and approaches 1 as the fit gets better
- r² shows the correlation of x and y, i.e. is y = f(x)?
- If r² < 0.5, then there is no correlation
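The usual definition of the linear correlation coefficient whose square is quoted here:
\[ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2 \, \sum_i (y_i - \bar{y})^2}} \]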