Random variables, distributions, and probability density functions
1
Contents
  • Random variables, distributions, and probability
    density functions
  • Discrete Random Variables
  • Continuous Random Variables
  • Expected Values and Moments
  • Joint and Marginal Probability
  • Means and variances
  • Covariance matrices
  • Univariate normal density
  • Multivariate Normal densities

2
Random variables, distributions, and probability
density functions
  • A random variable X is a variable whose value is
    set as a consequence of random events, that is,
    events whose results are impossible to know in
    advance. The set of all possible results is
    called the sample space and is denoted by Ω.
    Such a random variable can be treated as a
    nondeterministic function X which relates every
    possible random event ω ∈ Ω with some value
    X(ω). We will be dealing with real-valued random
    variables.
  • The probability distribution function is the
    function F for which F(x) = Pr(X ≤ x)
    for every x.
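A minimal illustration (not part of the original slides; it assumes NumPy and uses a standard normal X): the distribution function can be estimated from samples as the fraction of draws that do not exceed x.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)  # draws of X

def empirical_cdf(samples, x):
    """Estimate F(x) = Pr(X <= x) as the fraction of samples <= x."""
    return np.mean(samples <= x)

print(empirical_cdf(samples, 0.0))   # ~0.5 for a standard normal
print(empirical_cdf(samples, 1.96))  # ~0.975
```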

3
Discrete Random Variable
  • Let X be a discrete random variable (d.r.v.) that
    can assume m different values in the countable set
    V = {v1, ..., vm}.
  • Let pi be the probability that X assumes the
    value vi: pi = Pr(X = vi).
  • The pi must satisfy pi ≥ 0 and
    p1 + p2 + ... + pm = 1.
  • The mass function P(x) satisfies P(vi) = pi and
    P(x) = 0 for x outside V.
  • A connection between the distribution and the mass
    function is given by F(x) = the sum of pi over
    all vi ≤ x.
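A short sketch of these definitions (the values vi and pi below are our own illustration, not from the slides):

```python
import numpy as np

v = np.array([1.0, 2.0, 4.0])   # possible values v_i
p = np.array([0.2, 0.5, 0.3])   # probabilities p_i
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)  # p_i >= 0, sum to 1

def F(x):
    """Distribution function F(x): total mass of all v_i <= x."""
    return p[v <= x].sum()

print(F(2.0))  # 0.2 + 0.5 = 0.7
```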

4
Continuous Random Variable
  • The domain of a continuous random variable
    (c.r.v.) is uncountable.
  • The distribution function of a c.r.v. can be
    defined as F(x) = the integral of p(t) dt from
    −∞ to x, where the function p(x) is called the
    probability density function. It is important to
    mention that the numerical value of p(x) is not
    the probability of x. In the continuous case,
    p(x)dx is a value which approximately equals the
    probability Pr(x < X ≤ x + dx).
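A numeric check of this statement (a sketch; we pick the standard normal as a concrete p(x), since its distribution function is known in closed form via the error function):

```python
import math

def phi(x):   # standard normal density p(x)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):   # standard normal distribution function F(x)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

x, dx = 1.0, 1e-3
exact = Phi(x + dx) - Phi(x)   # Pr(x < X <= x + dx)
approx = phi(x) * dx           # p(x) dx
print(exact, approx)           # the two values agree to several digits
```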

5
Continuous Random Variable
  • Important properties of the probability density
    function: p(x) ≥ 0 for every x, its integral over
    the whole real line equals 1, and p(x) = dF(x)/dx
    wherever F is differentiable.

6
Expected Values and Moments
  • The mean or expected value or average of X is
    defined by μ = E[X] = ∫ x p(x) dx.
  • If Y = g(X) we have E[Y] = ∫ g(x) p(x) dx.
  • The variance is defined as
    σ² = Var(X) = E[(X − μ)²],
    where σ is the standard deviation of X.
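As a sketch, these definitions can be evaluated numerically for a concrete density (here the exponential density p(x) = e^(−x) on [0, ∞), our choice; its mean and variance are both 1):

```python
import numpy as np

dx = 0.001
x = np.arange(0.0, 50.0, dx)
p = np.exp(-x)                           # density of Exp(1)

mean = np.sum(x * p) * dx                # E[X] = ∫ x p(x) dx      -> ~1
var  = np.sum((x - mean)**2 * p) * dx    # σ² = E[(X − μ)²]        -> ~1
e_g  = np.sum(x**2 * p) * dx             # E[g(X)] with g(x) = x²  -> ~2
print(mean, var, e_g)
```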

7
Expected Values and Moments
  • Intuitively, the variance of X indicates the
    spread of its samples around its expected value
    (mean). An important property of the mean is its
    linearity: E[aX + bY] = aE[X] + bE[Y].
  • At the same time, variance is not linear:
    Var(aX + b) = a² Var(X).
  • The k-th moment of a r.v. X is E[X^k] (the
    expected value is the first moment). The k-th
    central moment is E[(X − μ)^k].
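A quick Monte Carlo sketch of both facts (independent normal samples, constants a and b chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, 1_000_000)
y = rng.normal(-1.0, 3.0, 1_000_000)   # independent of x
a, b = 2.0, 5.0

# The mean is linear: E[ax + by] = a E[x] + b E[y]
print(np.mean(a * x + b * y), a * np.mean(x) + b * np.mean(y))

# The variance is not: var(ax + b) = a² var(x); the shift b drops out
print(np.var(a * x + b), a**2 * np.var(x))
```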

8
Joint and Marginal Probability
  • Let X and Y be two random variables with domains
    V = {v1, ..., vm} and W = {w1, ..., wn}.
  • For each pair of values (vi, wj) we have a joint
    probability pij = Pr(X = vi, Y = wj) and a
    joint mass function P(x, y).
  • The marginal mass functions for x and y are
    defined as P_X(x) = the sum of P(x, wj) over j,
    and P_Y(y) = the sum of P(vi, y) over i.
  • For c.r.v. the marginal densities can be
    calculated as p(x) = ∫ p(x, y) dy and
    p(y) = ∫ p(x, y) dx.
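A sketch with a small joint mass table (the numbers are our own illustration): each marginal is obtained by summing the joint probabilities over the other variable.

```python
import numpy as np

# P[i, j] = Pr(X = v_i, Y = w_j); rows index x, columns index y
P = np.array([[0.10, 0.20],
              [0.05, 0.25],
              [0.30, 0.10]])
assert np.isclose(P.sum(), 1.0)

P_x = P.sum(axis=1)   # marginal of X: sum over y
P_y = P.sum(axis=0)   # marginal of Y: sum over x
print(P_x, P_y)
```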

9
Means and variances
  • The variables x and y are said to be
    statistically independent if and only if
    p(x, y) = p(x) p(y).
  • The expected value of a function f(x, y) of two
    random variables x and y is defined as
    E[f] = ∫∫ f(x, y) p(x, y) dx dy.
  • The means and variances are
    μx = E[x], μy = E[y],
    σx² = E[(x − μx)²], σy² = E[(y − μy)²].
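Continuing the joint-table sketch above, independence can be checked by comparing the joint mass function with the product of its marginals:

```python
import numpy as np

P = np.array([[0.10, 0.20],
              [0.05, 0.25],
              [0.30, 0.10]])
P_x, P_y = P.sum(axis=1), P.sum(axis=0)

# X and Y are independent iff P(x, y) = P(x) P(y) for every pair
print(np.allclose(P, np.outer(P_x, P_y)))   # False: these are dependent
```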

10
Covariance matrices
  • The covariance matrix Σ is defined as the square
    matrix whose ij-th element σij is the covariance
    of xi and xj:
    σij = E[(xi − μi)(xj − μj)].
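A sketch of this definition versus NumPy's built-in estimator (sample data drawn from an arbitrary 2-D normal of our choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=100_000)

mu = X.mean(axis=0)
# σij = E[(xi − μi)(xj − μj)], estimated from samples
Sigma = (X - mu).T @ (X - mu) / (len(X) - 1)
print(Sigma)
print(np.cov(X, rowvar=False))   # should agree with the estimate above
```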

11
Cauchy–Schwarz inequality
  • From this we have the Cauchy–Schwarz inequality
    σxy² ≤ σx² σy².
  • The correlation coefficient is the normalized
    covariance ρ = σxy / (σx σy).
  • It always satisfies |ρ| ≤ 1. If ρ = 0, the
    variables x and y are uncorrelated. If
    y = ax + b and a > 0, then ρ = 1.
    If a < 0, then ρ = −1 (see the sketch below).
  • Question: prove that if X and Y are independent
    r.v. then ρ = 0.
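A sketch verifying the linear case (slopes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=100_000)

for a in (2.0, -2.0):
    y = a * x + 1.0                 # y = ax + b
    rho = np.corrcoef(x, y)[0, 1]
    print(a, rho)                   # ρ ≈ +1 for a > 0, ≈ −1 for a < 0
```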

12
Covariance matrices
  • If the variables are statistically independent,
    the covariances are zero, and the covariance
    matrix is diagonal.
  • The covariance matrix is positive semi-definite:
    if w is any d-dimensional vector, then
    wᵀ Σ w ≥ 0. This is equivalent to the requirement
    that none of the eigenvalues of Σ can ever be
    negative.

13
Univariate normal density
  • The normal or Gaussian probability function is
    very important. In the 1-dimensional case, it is
    defined by the probability density function
    p(x) = (1 / (√(2π) σ)) exp(−(x − μ)² / (2σ²)).
  • The normal density is described as a "bell-shaped
    curve", and it is completely determined by the
    mean μ and the variance σ².
  • The probabilities obey
    Pr(|x − μ| ≤ σ) ≈ 0.68,
    Pr(|x − μ| ≤ 2σ) ≈ 0.95,
    Pr(|x − μ| ≤ 3σ) ≈ 0.997
    (see the sketch below).
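A Monte Carlo sketch of these probabilities (μ and σ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, 1_000_000)

for k in (1, 2, 3):
    # Fraction of samples within k standard deviations of the mean
    print(k, np.mean(np.abs(x - mu) <= k * sigma))  # ~0.683, ~0.955, ~0.997
```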

14
Multivariate Normal densities
  • Suppose that each of the d random variables xi is
    normally distributed, each with its own mean and
    variance: xi ~ N(μi, σi²).
  • If these variables are independent, their joint
    density has the form
    p(x) = the product over i = 1..d of
    (1 / (√(2π) σi)) exp(−(xi − μi)² / (2σi²)).
  • This can be written in a compact matrix form if
    we observe that for this case the covariance
    matrix is diagonal, i.e.,
    Σ = diag(σ1², ..., σd²),

15
Covariance matrices
  • and hence the inverse of the covariance matrix is
    easily written as Σ⁻¹ = diag(1/σ1², ..., 1/σd²),

16
Covariance matrices
  • and
    (x − μ)ᵀ Σ⁻¹ (x − μ) = the sum over i of
    (xi − μi)² / σi².
  • Finally, by noting that the determinant of Σ is
    just the product of the variances, we can write
    the joint density in the form
    p(x) = exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ))
           / ((2π)^(d/2) |Σ|^(1/2)).
  • This is the general form of a multivariate normal
    density function, where the covariance matrix is
    no longer required to be diagonal.
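A direct sketch of this formula for a general (not necessarily diagonal) Σ; the helper name mvn_pdf is ours:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Multivariate normal density with mean mu and covariance Sigma."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x − μ)ᵀ Σ⁻¹ (x − μ)
    norm_const = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
print(mvn_pdf(np.array([0.5, -0.5]), mu, Sigma))
```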

17
Covariance matrices
  • The natural measure of the distance from x to the
    mean μ is provided by the quantity
    r² = (x − μ)ᵀ Σ⁻¹ (x − μ),
    which is the square of the Mahalanobis
    distance from x to μ.
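A sketch of the squared Mahalanobis distance, cross-checked against SciPy's implementation (which takes the inverse covariance as its third argument):

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
mu = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])

r2 = (x - mu) @ np.linalg.solve(Sigma, x - mu)  # r² = (x − μ)ᵀ Σ⁻¹ (x − μ)
print(r2, mahalanobis(x, mu, np.linalg.inv(Sigma)) ** 2)  # should agree
```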

18
Example: Bivariate Normal Density
  • For d = 2, ρ = σxy / (σx σy) is the correlation
    coefficient,
  • thus
    Σ = ( σx²       ρ σx σy )
        ( ρ σx σy   σy²     ),
    |Σ| = σx² σy² (1 − ρ²),
  • and after doing the dot products in
    (x − μ)ᵀ Σ⁻¹ (x − μ) we get the expression for
    the bivariate normal density
    p(x, y) = 1 / (2π σx σy √(1 − ρ²)) ·
      exp( −1 / (2(1 − ρ²)) · [ (x − μx)² / σx²
      − 2ρ (x − μx)(y − μy) / (σx σy)
      + (y − μy)² / σy² ] ).

19
Some Geometric Features
  • The level curves of the 2D Gaussian are ellipses;
    the principal axes are in the directions of the
    eigenvectors of Σ, and the widths along these
    axes correspond to the corresponding eigenvalues
    (see the sketch after this list).
  • For uncorrelated r.v. (ρ = 0) the axes are
    parallel to the coordinate axes.
  • For the extreme cases ρ = ±1 the
    ellipses collapse into straight lines (in fact
    there is only one independent r.v.).
  • Marginal and conditional densities are
    one-dimensional normal.
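A sketch of the eigen-decomposition behind the first bullet (Σ chosen arbitrarily):

```python
import numpy as np

Sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
eigvals, eigvecs = np.linalg.eigh(Sigma)  # Σ is symmetric

# Columns of eigvecs give the principal axes of the level-curve
# ellipses; the half-widths scale with the square roots of the
# corresponding eigenvalues.
print(eigvals)           # all nonnegative: Σ is positive semi-definite
print(np.sqrt(eigvals))  # relative widths along each principal axis
print(eigvecs)
```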

20
Some Geometric Features
  (Figure: illustration of the geometric features
  listed above; the image is not preserved in this
  transcript.)
21
Law of Large Numbers and Central Limit Theorem
  • Law of large numbers: let X1, X2, ... be a
    sequence of i.i.d. (independent and identically
    distributed) random variables with E[Xi] = μ.
  • Then for Sn = X1 + ... + Xn,
    Sn / n → μ as n → ∞ (in probability).
  • Central limit theorem: let X1, X2, ... be a
    sequence of i.i.d. r.v. with E[Xi] = μ and
    variance var(Xi) = σ². Then for
    Sn = X1 + ... + Xn,
    (Sn − nμ) / (σ √n) → N(0, 1) in distribution
    (see the sketch below).
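A simulation sketch of both statements, using Bernoulli(0.5) variables (so μ = 0.5 and σ = 0.5, our choice):

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, n = 0.5, 0.5, 10_000                 # Bernoulli(0.5)

trials = rng.binomial(1, 0.5, size=(5_000, n))  # 5,000 runs of S_n
S_n = trials.sum(axis=1)

print(np.mean(S_n / n))                      # LLN: S_n / n ≈ μ
z = (S_n - n * mu) / (sigma * np.sqrt(n))    # CLT: z ≈ N(0, 1)
print(z.mean(), z.std())                     # ≈ 0 and ≈ 1
```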