1
Moments of More than One Random Variable
  • Lecture IX

2
Covariance and Correlation
  • Definition 4.3.1
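
    The defining equation was not transcribed from the slide; the
    standard definition of covariance, consistent with the rest of
    the lecture, is

    \[
      \operatorname{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big]
                               = E[XY] - E[X]\,E[Y]
    \]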

3
  • Note that this is simply a generalization of the
    standard variance formulation. Specifically,
    letting Y → X yields
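
    Substituting Y = X into the covariance definition recovers the
    ordinary variance:

    \[
      \operatorname{Cov}(X, X) = E\big[(X - E[X])^2\big] = V(X)
    \]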

4
  • From a sample perspective, we have
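
    The sample formula was not transcribed; a minimal reconstruction
    using the unbiased (n - 1) divisor (the slide may instead use a
    1/n divisor):

    \[
      s_{XY} = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})
    \]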

5
  • Together, the variances and covariances are
    typically collected into a single variance
    (variance-covariance) matrix
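
    For two random variables the matrix takes the standard form

    \[
      \Sigma =
      \begin{pmatrix}
        V(X) & \operatorname{Cov}(X,Y) \\
        \operatorname{Cov}(X,Y) & V(Y)
      \end{pmatrix}
    \]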

6
Sample Variance Matrix
  • Substituting the sample measures into the
    variance matrix yields
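
    Replacing each element with its sample counterpart gives

    \[
      S =
      \begin{pmatrix}
        s_X^2 & s_{XY} \\
        s_{XY} & s_Y^2
      \end{pmatrix}
    \]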

7
Matrix Form of Sample Variance
  • The sample covariance matrix can then be written
    as
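
    As a numerical check of the matrix form, a short NumPy sketch
    (the data and variable names are illustrative, not from the
    lecture) that builds the sample variance matrix from the
    deviation matrix and compares it against np.cov:

      import numpy as np

      # Illustrative data: n observations on two variables (X, Y)
      rng = np.random.default_rng(0)
      data = rng.normal(size=(100, 2))

      n = data.shape[0]
      deviations = data - data.mean(axis=0)    # subtract column means
      S = deviations.T @ deviations / (n - 1)  # matrix form: D'D / (n - 1)

      # np.cov uses the same (n - 1) divisor; rowvar=False puts
      # variables in columns
      assert np.allclose(S, np.cov(data, rowvar=False))
      print(S)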

8
Theoretical Variance Matrix
  • In terms of the theoretical distribution, the
    variance matrix can be written as
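
    Writing x = (X, Y)' and μ = E[x], the theoretical matrix is

    \[
      \Sigma = E\big[(x - \mu)(x - \mu)'\big]
    \]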

9
Example 4.3.2
10
(No Transcript)
11
(No Transcript)
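
    The example's working (slides 10-11) was not transcribed. As a
    purely illustrative stand-in, a covariance computation for a
    small discrete joint distribution:

    \[
      \begin{aligned}
        &P(0,0) = P(1,1) = \tfrac{3}{8}, \qquad
         P(0,1) = P(1,0) = \tfrac{1}{8}, \\
        &E[X] = E[Y] = \tfrac{1}{2}, \qquad E[XY] = \tfrac{3}{8}, \\
        &\operatorname{Cov}(X,Y) = E[XY] - E[X]\,E[Y]
          = \tfrac{3}{8} - \tfrac{1}{4} = \tfrac{1}{8}.
      \end{aligned}
    \]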
12
  • Theorem 4.3.2. V(X + Y) = V(X) + V(Y) + 2Cov(X,Y)
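
    The result follows from expanding the square inside the
    expectation:

    \[
      V(X+Y) = E\big[\{(X - E[X]) + (Y - E[Y])\}^2\big]
             = V(X) + V(Y) + 2\operatorname{Cov}(X,Y)
    \]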

13
  • Note that this result can be obtained from the
    variance matrix. Specifically, X + Y can be
    written as a vector operation

14
  • Given this vectorization of the problem we can
    define the variance of the sum as
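
    With the weight vector a = (1, 1)' and the variance matrix Σ
    defined above,

    \[
      V(X+Y) = a'\Sigma a
      = \begin{pmatrix} 1 & 1 \end{pmatrix}
        \begin{pmatrix}
          V(X) & \operatorname{Cov}(X,Y) \\
          \operatorname{Cov}(X,Y) & V(Y)
        \end{pmatrix}
        \begin{pmatrix} 1 \\ 1 \end{pmatrix}
      = V(X) + V(Y) + 2\operatorname{Cov}(X,Y)
    \]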

15
  • Theorem 4.3.3. Let Xi, i = 1, 2, …, n, be pairwise
    independent. Then
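
    The conclusion (not transcribed) is the standard additivity
    result for pairwise independent variables:

    \[
      V\Big(\sum_{i=1}^{n} X_i\Big) = \sum_{i=1}^{n} V(X_i)
    \]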

16
  • The simplest proof of this theorem uses the
    variance matrix. Note that in the preceding
    example, if X and Y are independent, we have
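
    Independence sets the off-diagonal covariance terms to zero, so

    \[
      V(X+Y)
      = \begin{pmatrix} 1 & 1 \end{pmatrix}
        \begin{pmatrix} V(X) & 0 \\ 0 & V(Y) \end{pmatrix}
        \begin{pmatrix} 1 \\ 1 \end{pmatrix}
      = V(X) + V(Y)
    \]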

17
  • Extending this result to three variables implies
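
    With three pairwise independent variables the variance matrix is
    3 × 3 and diagonal, so

    \[
      V(X_1 + X_2 + X_3) = V(X_1) + V(X_2) + V(X_3)
    \]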

18
Correlation
  • Definition 4.3.2. The correlation coefficient for
    two variables is defined as
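
    The defining formula (not transcribed) is the standard one:

    \[
      \rho_{XY} = \frac{\operatorname{Cov}(X,Y)}{\sqrt{V(X)\,V(Y)}}
    \]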

19
  • Note that the covariance between any random
    variable and a constant is equal to zero.
    Letting Y equal zero, we have
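
    For any constant c (and in particular c = 0), the second factor
    in the covariance vanishes:

    \[
      \operatorname{Cov}(X, c) = E\big[(X - E[X])(c - c)\big] = 0
    \]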

20
Least Squares Regression
  • We define the ordinary least squares estimator as
    the set of parameters that minimizes the sum of
    squared errors of the estimate
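
    In the bivariate case the objective (not transcribed; the slope
    is written here as b to match the slides' intercept a) is

    \[
      \min_{a,\,b} \; \sum_{i=1}^{n} \big(y_i - a - b\,x_i\big)^2
    \]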

21
  • The first-order conditions for this minimization
    problem then become
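
    Differentiating the objective with respect to each parameter and
    setting the result to zero:

    \[
      \begin{aligned}
        a &: \; -2\sum_{i=1}^{n} \big(y_i - a - b\,x_i\big) = 0 \\
        b &: \; -2\sum_{i=1}^{n} x_i\big(y_i - a - b\,x_i\big) = 0
      \end{aligned}
    \]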

22
  • Solving the first equation for a yields
  • Substituting this expression into the second
    first-order condition yields
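
    Carrying out those two steps gives the familiar estimators:

    \[
      a = \bar{y} - b\,\bar{x}, \qquad
      b = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
               {\sum_{i=1}^{n}(x_i - \bar{x})^2}
        = \frac{s_{XY}}{s_X^2}
    \]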

23
(No Transcript)
24
General Matrix Forms
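
    The slide's equations were not transcribed; the standard matrix
    form of the least squares estimator, with y the response vector
    and X the regressor matrix, is

    \[
      \hat{\beta} = (X'X)^{-1} X'y
    \]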
25
  • Theorem 4.3.6. The best linear predictor (or more
    exactly, the minimum mean-squared-error linear
    predictor) of Y based on X is given by α + βX,
    where α and β are the least squares estimates.
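
    In population terms the coefficients implied by the earlier
    definitions are

    \[
      \beta = \frac{\operatorname{Cov}(X,Y)}{V(X)}, \qquad
      \alpha = E[Y] - \beta\,E[X]
    \]

    A short numerical check (illustrative data and names, not from
    the lecture) that the covariance formula matches a least squares
    solver:

      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(size=200)
      y = 2.0 + 0.5 * x + rng.normal(scale=0.1, size=200)

      # Slope and intercept from the covariance formulas
      b = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
      a = y.mean() - b * x.mean()

      # Compare with NumPy's least squares solver
      A = np.column_stack([np.ones_like(x), x])
      a_ls, b_ls = np.linalg.lstsq(A, y, rcond=None)[0]
      assert np.allclose([a, b], [a_ls, b_ls])
      print(a, b)  # close to (2.0, 0.5)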