Altiok / Melamed Simulation Modeling and Analysis with Arena
1
SIMULATION MODELING AND ANALYSIS WITH ARENA
T. Altiok and B. Melamed
Chapter 10: Correlation Analysis
2
What is Correlation Analysis?
  • Correlation Analysis is a modeling and analysis
    approach that straddles both Input Analysis
    and Output Analysis
  • Correlation Analysis consists of two activities
  • modeling of correlated stochastic processes
  • studying the impact of correlations on
    performance measures of interest via
    Sensitivity Analysis

3
Correlation in Input Analysis
  • Correlation Analysis as part of Input Analysis
    is simply an approach to modeling and data
    fitting that
  • insists on high-quality models incorporating
    temporal dependence
  • strives to fit correlation-related statistics in
    a systematic way
  • To set the scene, consider a stationary time
    series, {X_n}, n = 1, 2, ..., that is, one whose
    statistics remain unchanged under the passage
    of time
  • in particular, all X_n share a common mean, μ,
    and a common variance, σ²
  • to fix the ideas, suppose that {X_n} is to be
    used to model inter-arrival times at a queue
    (in which case the time series is non-negative)
  • What statistical aspects of {X_n} should
    be carefully modeled?

4
Statistical Signatures
  • Let a collection of statistics of a random
    variable or time series be referred to as a
    statistical signature (signature, for short)
  • it is often possible to order signatures by
    strength; for example, signatures obviously
    become stronger under inclusion
  • To clarify the signature strength notion,
    consider a time series of inter-arrival
    times, {X_n}, and the following set of
    signatures in increasing strength
  • the mean, μ, is a minimal signature,
    since its reciprocal, λ = 1/μ, is
    the arrival rate, a key statistic in queueing
    models
  • the mean, μ, and variance, σ², is a
    stronger signature
  • adding higher moments of the inter-arrival
    distribution, such as the skewness and
    kurtosis, yields an even stronger signature
  • the (marginal) distribution, F, determines
    all its moments, and so is stronger than all
    of the above
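These signature statistics are straightforward to estimate from data; a minimal sketch in Python (the sample vector is hypothetical, and skewness/kurtosis are computed as central-moment ratios):

```python
def signature(xs):
    """Estimate an increasingly strong signature of a sample:
    mean, variance, skewness, and kurtosis (central-moment ratios)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # variance (2nd central moment)
    m3 = sum((x - mean) ** 3 for x in xs) / n   # 3rd central moment
    m4 = sum((x - mean) ** 4 for x in xs) / n   # 4th central moment
    skew = m3 / m2 ** 1.5                       # skewness
    kurt = m4 / m2 ** 2                         # kurtosis
    return mean, m2, skew, kurt

mean, var, skew, kurt = signature([1.0, 2.0, 3.0, 4.0])
```

Each successive statistic strengthens the signature, mirroring the ordering listed above.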

5
A Very Strong Signature
  • Given a stationary empirical time series, our
    goal here is to fit a particular very strong
    signature to a time series, {X_n}, which
    includes both of the following statistics
  • the marginal distribution, F
  • the autocorrelation function, ρ(τ), τ = 1, 2, ...
  • The marginal distribution, F,
  • is a first-order statistic of {X_n}, that
    is, it involves only a single random variable
    from {X_n} (by stationarity)
  • is estimated by an empirical histogram
  • The autocorrelation function, ρ(τ),
  • is a second-order statistic of {X_n}, that
    is, it involves pairs of lagged random
    variables from {X_n}, and
  • serves as a statistical proxy for temporal
    dependence in {X_n}, where each correlation
    coefficient, ρ(τ), measures the degree of
    linear dependence between X_n and X_{n+τ}
  • is estimated by the sample autocorrelation
    function

6
Correlation in Output Analysis
  • Correlation Analysis as part of Output Analysis
    is the study of the sensitivity of output
    statistics to correlations in model components
  • autocorrelation can have a major impact on
    performance measures
  • consequently, they cannot always be ignored
    merely for the sake of simplified models
  • however, modelers routinely ignore correlations
    to simplify model construction and its
    analysis
  • A motivating example from the domain of queueing
    systems will illustrate the peril of ignoring
    correlations uncritically

7
Example: Correlation Impact
  • Consider a workstation operating as an M/M/1
    system with job arrival rate λ and
    processing (service) rate μ, such that λ < μ
  • since all job arrivals and processing times are
    mutually independent, all corresponding
    autocorrelations and cross-correlations are
    identically zero
  • the system is stable with utilization
    ρ = λ/μ < 1
  • it is known that the equilibrium mean flow time
    is 1/(μ - λ), and so the mean waiting time in
    the buffer is W = 1/(μ - λ) - 1/μ = ρ/(μ - λ)
  • Next, modify the arrival process from a Poisson
    process to a (possibly autocorrelated) TES
    process, yielding a TES/M/1 system
  • The merit of TES processes is that they
    simultaneously admit arbitrary marginal
    distributions and a variety of autocorrelation
    functions
  • in particular, we can select TES inter-arrival
    processes with the same inter-arrival time
    distribution as in the Poisson process (i.e.,
    exponential with rate parameter λ), but
    with autocorrelated inter-arrival times, yielding
    some TES/M/1 equilibrium mean waiting time in the
    buffer, W_TES
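The M/M/1 baseline above is easy to verify numerically; a sketch with illustrative rates (λ = 0.5 and μ = 1.0 are not from the slides):

```python
lam, mu = 0.5, 1.0                   # illustrative arrival and service rates, lam < mu
rho = lam / mu                       # utilization; stability requires rho < 1
flow_time = 1.0 / (mu - lam)         # M/M/1 equilibrium mean flow time
wait_time = flow_time - 1.0 / mu     # mean waiting time in the buffer
# the two waiting-time forms agree: W = 1/(mu-lam) - 1/mu = rho/(mu-lam)
assert abs(wait_time - rho / (mu - lam)) < 1e-12
```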

8
Example: Correlation Impact (Cont.)
  • We wish to gauge the impact of autocorrelations
    in the job arrival stream on mean waiting
    times via the relative deviation
    (W_TES - W) / W
  • the relative deviation is viewed as a function
    of the lag-1 autocorrelation, ρ(1), in the TES
    arrival process
  • The table below displays the relative deviations
    for two representative cases
  • a light-traffic regime (low utilization)
  • a heavy-traffic regime (high utilization)

9
Introduction to TES Modeling
  • The definition of a TES process involves two
    related stochastic processes
  • an auxiliary process, called the background
    process
  • a target process, called the foreground process
  • The two processes operate in lockstep in the
    sense that they are connected by a
    deterministic transformation
  • more specifically, the state of the background
    process is mapped to a state of the foreground
    process
  • this is done in such a way that the foreground
    process has a prescribed marginal distribution
    and a prescribed autocorrelation function

10
Modulo-1 Arithmetic
  • The definition of TES processes makes use of a
    simple mathematical operation, called
    modulo-1 arithmetic
  • modulo-1 arithmetic is arithmetic restricted to
    the familiar fractions (values in the interval
    [0, 1), with the value 1 excluded)
  • the notation ⟨x⟩ = x - ⌊x⌋ is used to denote the
    fractional value of any number x
  • note that fractional values are defined for any
    real number (positive as well as negative)
  • Examples
  • for zero, we simply have ⟨0⟩ = 0
  • for positive numbers, we have the familiar
    fractional values, for example, ⟨1.75⟩ = 0.75
  • for negative values, the fractional part is the
    complementary value relative to one, for
    example, ⟨-0.3⟩ = 0.7
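In code, modulo-1 reduction is one line; a sketch matching the examples above:

```python
import math

def mod1(x: float) -> float:
    """Fractional part of x in [0, 1): x minus its floor.
    Works for negatives too, e.g. mod1(-0.3) gives 0.7."""
    return x - math.floor(x)
```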

11
Outline of TES Processes Theory
  • The Lemma of Iterated Uniformity is the
    foundation of the theory of TES processes
  • let U be a uniform random variable on [0, 1)
    and let V be any random variable
    independent of U
  • then ⟨U + V⟩ is also uniform on [0, 1),
    regardless of the distribution of V!
  • Define a stochastic process {U_n} by
    U_n = ⟨U_{n-1} + V_n⟩
  • by the Lemma of Iterated Uniformity, each random
    variable above is uniform on [0, 1)
  • furthermore, each U_n could be further transformed
    into a foreground process X_n = F^{-1}(U_n)
  • and by the Inverse Transform Method, each
    random variable X_n above will have the prescribed
    distribution F!
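The lemma is easy to check by simulation: add any independent, non-uniform noise modulo 1 and the result stays uniform. A Monte Carlo sketch (the exponential innovation distribution and its rate are arbitrary choices):

```python
import math
import random

random.seed(42)

def mod1(x):
    return x - math.floor(x)

n = 200_000
# V is deliberately non-uniform (exponential); the lemma says <U + V>
# is still uniform on [0, 1) as long as V is independent of U.
samples = [mod1(random.random() + random.expovariate(0.7)) for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
# a uniform variate on [0, 1) has mean 1/2 and variance 1/12
```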

12
Background TES Processes
  • Define the following random variables
  • let U_0 be a random variable with a uniform
    distribution on [0, 1)
  • let {V_n} be an innovation sequence
    (that is, any iid sequence of random
    variables, independent of U_0)
  • TES background processes come in two flavors
  • a background TES+ process, {U_n^+}, is defined
    by the recursive scheme U_0^+ = U_0,
    U_n^+ = ⟨U_{n-1}^+ + V_n⟩ for n > 0
  • a background TES- process, {U_n^-}, is
    defined by U_n^- = U_n^+ for even n, and
    U_n^- = 1 - U_n^+ for odd n
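The two background flavors can be sketched directly; a minimal Python version (the uniform innovation density on [-0.1, 0.1] is an arbitrary illustrative choice):

```python
import math
import random

random.seed(7)

def mod1(x):
    return x - math.floor(x)

def tes_background(n, innovate):
    """Generate TES+ and TES- background sequences of length n.
    TES+: U+_k = <U+_{k-1} + V_k>; TES-: reflect U+_k at odd k."""
    u = random.random()          # U_0, uniform on [0, 1)
    plus, minus = [u], [u]       # index 0 is even, so U-_0 = U+_0
    for k in range(1, n):
        u = mod1(u + innovate())                     # TES+ recursion
        plus.append(u)
        minus.append(u if k % 2 == 0 else 1.0 - u)   # TES- flip at odd k
    return plus, minus

plus, minus = tes_background(1000, lambda: random.uniform(-0.1, 0.1))
```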

13
Visualizing Background TES Processes
  • Background TES processes can be visualized as a
    random walk on the unit circle
  • Consider a basic TES+ process, where the
    innovation variate is uniform on an interval
    [L, R], so its density is a single step of
    length not exceeding 1

14
Basic TES Processes
  • The following list summarizes qualitatively the
    effect of the parameters L and R on the
    autocorrelation of a basic background TES+
    process
  • the width, R - L, of the innovation-density
    support (the region over which the density is
    positive) has a major effect on the magnitude of
    the autocorrelations: the larger the support,
    the smaller the magnitude (in fact, when
    R - L = 1, the autocorrelations vanish
    altogether)
  • the location of the innovation-density support
    affects the shape of the autocorrelation
    function: when the support is not symmetric
    about the origin, the autocorrelation function
    assumes an oscillating form, and otherwise it
    is monotone decreasing

15
Basic TES Processes (Cont.)
Autocorrelation function of a basic TES+
process (symmetric innovation density and narrow
support)
16
Basic TES Processes (Cont.)

Autocorrelation function of a basic TES+
process (symmetric innovation density and wider
support)
17
Basic TES Processes (Cont.)
Autocorrelation function of a basic TES+
process (non-symmetric innovation density)
18
Basic TES Processes (Cont.)
Autocorrelation function of a basic TES-
process (symmetric innovation density)
19
Basic TES Processes (Cont.)
Autocorrelation function of a basic TES-
process (non-symmetric innovation density)
20
Stitching Transformations
  • A background TES+ process can produce marked
    visual discontinuities in its sample paths,
    which are noticeable when
  • the innovation density has a narrow support
  • successive background variates on the unit
    circle straddle the circle's origin
  • in the figure below we have a sudden drop from
    a relatively high value to a relatively small
    one as the process crosses the origin
    counterclockwise

Sample path of a basic TES+ background process
21
Stitching Transformations (Cont.)
  • For modeling purposes, we would sometimes like
    to smooth (stitch together) such marked
    visual discontinuities
  • To this end, define a family of so-called
    stitching transformations, S_ξ,
  • where ξ is a so-called stitching parameter
    in the interval [0, 1]
  • a stitching transformation preserves uniformity,
    that is, if U is uniform on [0, 1), then S_ξ(U)
    is uniform on [0, 1) for any ξ
  • therefore, any stitched background TES sequence
    is also a TES background sequence (and thus
    uniformly distributed), albeit a smoothed one
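The family S_ξ has a simple piecewise-linear form; a sketch of the transformation as commonly defined in the TES literature:

```python
def stitch(y: float, xi: float) -> float:
    """Stitching transformation S_xi on [0, 1):
    S_xi(y) = y / xi            for 0 <= y < xi,
            = (1 - y) / (1 - xi) for xi <= y < 1.
    Preserves uniformity: if U is uniform on [0, 1), so is S_xi(U)."""
    if xi > 0.0 and y < xi:
        return y / xi
    return (1.0 - y) / (1.0 - xi)
```

Note the limiting cases: ξ = 1 yields the identity and ξ = 0 yields the reversal 1 - y, consistent with the shapes described on the following slide.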

22
Stitching Transformations (Cont.)
  • The graph below displays typical stitching
    transformations for the following stitching
    parameters
  • for ξ = 0, S_0(y) = 1 - y
  • for 0 < ξ < 1, S_ξ has a triangular shape
  • for ξ = 1, S_1 is the identity

Stitching transformations for three values of ξ
(dashed, dotted and solid curves)
23
Stitching Transformations (Cont.)
  • The graphs below illustrate the smoothing effect
    of stitching

Sample path of a basic TES+ background process
without stitching
Sample path of a basic TES+ background process
with stitching
24
Foreground TES Processes
  • A foreground TES process is obtained from a
    background TES process by a deterministic
    transformation, D, called a distortion
  • a foreground TES+ process, {X_n^+}, is of the
    form X_n^+ = D(U_n^+)
  • a foreground TES- process, {X_n^-}, is of the
    form X_n^- = D(U_n^-)
  • In practice, one often applies a stitching
    transformation followed by an application of
    the Inverse Transform method, via a distortion
    of the form D(x) = F^{-1}(S_ξ(x))
  • where
  • S_ξ is a stitching transformation (often
    ξ = 0.5)
  • F is a cdf (typically, F is an empirical
    histogram of a data vector)
  • for example, for the exponential cdf,
    F(x) = 1 - e^{-λx}, the inverse is
    F^{-1}(u) = -ln(1 - u)/λ, and the resulting
    distortion might be D(x) = -ln(1 - S_ξ(x))/λ
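The exponential distortion above can be coded directly; a sketch composing stitching with the exponential inverse cdf (the defaults ξ = 0.5 and λ = 1 are illustrative):

```python
import math

def stitch(y, xi=0.5):
    """Stitching transformation S_xi (xi = 0.5 is a common default)."""
    return y / xi if y < xi else (1.0 - y) / (1.0 - xi)

def exp_inverse(u, lam=1.0):
    """Inverse of the exponential cdf F(x) = 1 - exp(-lam * x)."""
    return -math.log(1.0 - u) / lam

def distortion(x, xi=0.5, lam=1.0):
    """D(x) = F^{-1}(S_xi(x)): maps a background variate in [0, 1)
    to an exponential foreground variate."""
    return exp_inverse(stitch(x, xi), lam)
```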

25
Foreground TES Processes (Cont.)
  • Example: applying the exponential Inverse
    Transform formula above to basic background
    TES+ processes to obtain foreground TES+
    processes
  • two basic TES+ background processes are used,
    with different innovation-density parameters
  • the Inverse Transform applied to these TES+
    background processes uses the same rate
    parameter, λ = 1
  • The results are shown in the next few foils
  • both foreground TES+ processes have the same
    exponential marginal distribution of rate 1,
    as attested by their histograms
  • in contrast, the first foreground process
    exhibits significant autocorrelations, while
    the second has zero autocorrelations, as a
    consequence of its iid property (see the
    corresponding correlograms)

26
Foreground TES Processes (Cont.)

Sample path (top), histogram (middle) and
correlogram (bottom) of an exponential basic
TES+ process
27
Foreground TES Processes (Cont.)

Sample path (top), histogram (middle) and
correlogram (bottom) of an exponential basic
TES+ process
28
Generation of TES Sequences
  • TES processes are readily generated on a
    computer via algorithms that utilize random
    number generators (RNGs)
  • we assume the availability of a function,
    called mod1(x), which implements modulo-1
    reduction of any real number and returns the
    corresponding fraction
  • For convenience, we separate the generation of
    TES+ processes from that of TES- processes
  • the corresponding algorithms have considerable
    overlap

29
Generation of TES+ Sequences
  • Inputs
  • an innovation density, f_V // modeler choice
  • a stitching parameter, ξ // modeler choice
  • an inverse distribution, F^{-1} // often an
    inverse histogram (step) cdf
  • Outputs
  • a background TES+ sequence, {U_n}
  • a foreground TES+ sequence, {X_n}
  • Algorithm
  • 1. sample a value U_0, uniform on [0, 1),
    // initial background variate and set
    n = 0 // more initializations
  • 2. go to Step 6. // go to generate initial
    foreground variate
  • 3. set n = n + 1 // bump up running index for
    next iteration
  • 4. sample a value V_n from f_V // sample an
    innovation variate
  • 5. set U_n = mod1(U_{n-1} + V_n) // compute a
    TES+ background variate
  • 6. set S_n = S_ξ(U_n) // apply a stitching
    transformation
  • 7. set X_n = F^{-1}(S_n) // compute a TES+
    foreground variate
  • 8. go to Step 3. // loop to generate the next
    TES+ variate
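The steps above translate almost line for line into code; a sketch assuming a basic (uniform) innovation density on [L, R] and an exponential inverse cdf (all parameter values illustrative):

```python
import math
import random

def mod1(x):
    return x - math.floor(x)

def stitch(y, xi):
    return y / xi if (xi > 0.0 and y < xi) else (1.0 - y) / (1.0 - xi)

def tes_plus(num, L=-0.05, R=0.05, xi=0.5, lam=1.0, seed=1):
    """Steps 1-8 of the TES+ algorithm: background recursion,
    stitching, then the exponential inverse-transform distortion."""
    rng = random.Random(seed)
    u = rng.random()                          # Step 1: U_0 uniform on [0, 1)
    out = []
    for _ in range(num):                      # Steps 2/8: emit X_n, then loop
        s = min(stitch(u, xi), 1.0 - 1e-12)   # Step 6: stitching (clamped below 1)
        out.append(-math.log(1.0 - s) / lam)  # Step 7: X_n = F^{-1}(S_n)
        v = rng.uniform(L, R)                 # Step 4: innovation variate
        u = mod1(u + v)                       # Step 5: background recursion
    return out

xs = tes_plus(10_000)
```

With the narrow support chosen here, the output is exponential in marginal distribution but strongly autocorrelated.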

30
Generation of TES- Sequences
  • Inputs
  • an innovation density, f_V // modeler choice
  • a stitching parameter, ξ // modeler choice
  • an inverse distribution, F^{-1} // often an
    inverse histogram (step) cdf
  • Outputs
  • a background TES- sequence, {U_n^-}
  • a foreground TES- sequence, {X_n^-}
  • Algorithm
  • 1. sample a value U_0, uniform on [0, 1),
    // initial background variate and set
    n = 0 and U_0^- = U_0 // more
    initializations
  • 2. go to Step 7. // go to generate initial
    foreground variate
  • 3. set n = n + 1 // bump up running index for
    next iteration
  • 4. sample a value V_n from f_V // sample an
    innovation variate
  • 5. set U_n = mod1(U_{n-1} + V_n) // compute a
    TES+ background variate
  • 6. if n is even, then set U_n^- = U_n;
    if n is odd, then set U_n^- = 1 - U_n
    // compute a TES- background variate
  • 7. set S_n = S_ξ(U_n^-) // apply a stitching
    transformation
  • 8. set X_n^- = F^{-1}(S_n) // compute a TES-
    foreground variate
  • 9. go to Step 3. // loop to generate the next
    TES- variate

31
Generation of TES Sequences in Arena
  • Arena implementation of the algorithm to
    generate basic TES+ sequences with an
    exponential distribution (TES- is similar)
    assumes that the following parameters are given
    as inputs
  • a pair of parameters, L and R, such that
    L < R and R - L ≤ 1, which determine a
    basic innovation density (uniform on [L, R])
  • a stitching parameter, ξ (0.5 is
    typical)
  • an inverse of an exponential distribution
    function, F^{-1}(u) = -ln(1 - u)/λ,
  • for some rate λ > 0

32

Arena Model for Basic TES
Arena model implementing the generation of basic
TES sequences with exponential marginal
distribution
33
Arena Model for Basic TES (Cont.)
Arena Variable module for implementing the
generation of a basic TES sequence with an
exponential marginal distribution
34
Arena Model for Basic TES (Cont.)
  • Arena variables in the model
  • the variables L and R hold the parameters of the
    basic innovation density
  • the variable xi holds the stitching parameter
  • the variable lambda holds the rate parameter of
    the requisite exponential distribution
  • the variable N holds the running index in the
    TES sequence (initially 0)
  • the variable V_N holds an innovation variate
  • the variable U_N holds an unstitched TES
    background variate
  • the variable US_N holds a stitched TES variate
  • the variable UP_N holds a stitched TES
    background variate
  • the variable X_N holds a TES foreground
    variate
  • The Arena Variable module
  • lists all model parameters and variables and
    their initial values, if any
  • those requiring initialization are identified by
    a "1 rows" button label under the Initial Values
    heading; for example, UP_N is initialized to a
    value between 0 and 1

35
Correlation Analysis Example
  • Consider a workstation subject to mutually
    independent failures
  • in this case, the workstation can be modeled as
    an M/G/1 queueing system,
  • where the processing time is, in fact, the
    process completion time, which includes all the
    failures experienced by a job on the machine
  • The mean job waiting time is given by the
    modified P-K formula
    W = ρ E[C] (1 + c_C²) / (2 (1 - ρ))
  • where
  • λ is the arrival rate
  • E[C] is the average process
    completion time
  • c_C² = (E[C²] - E[C]²) / E[C]² is the squared
    coefficient of variation of the process
    completion time, where E[C²] is its second
    moment
  • The probability that the machine is occupied
    (processing or down) is given by ρ = λ E[C],
    and for stability we assume ρ < 1
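The modified P-K formula can be evaluated directly; a sketch with illustrative numbers (the arrival rate and completion-time moments are not from the slides):

```python
lam = 0.3      # job arrival rate (illustrative)
mean_c = 2.0   # mean process completion time E[C] (illustrative)
scv_c = 1.5    # squared coefficient of variation c^2 = Var[C] / E[C]^2

rho = lam * mean_c                    # probability the machine is occupied
assert rho < 1.0                      # stability condition

# P-K mean waiting time: W = rho * E[C] * (1 + c^2) / (2 * (1 - rho))
wq = rho * mean_c * (1.0 + scv_c) / (2.0 * (1.0 - rho))

# equivalent second-moment form: W = lam * E[C^2] / (2 * (1 - rho)),
# using E[C^2] = (1 + c^2) * E[C]^2
ec2 = (1.0 + scv_c) * mean_c ** 2
assert abs(wq - lam * ec2 / (2.0 * (1.0 - rho))) < 1e-12
```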

36
Correlation Analysis Example (Cont.)
  • The table below displays the relative deviations
  • as a function of the lag-1 autocorrelation of
    the time-to-failure, where
  • one row is for ρ = 0.66
  • one row is for ρ = 0.81

Lag-1 autocorrelation   0.00  0.14  0.36   0.48   0.60   0.68   0.74   0.99
of time-to-failure
ρ = 0.66                15.8  11.4  51.3   90.1   260    543.7  3,232  4,115
ρ = 0.81                36.3  11    151.3  229.8  564.7  766    3,429  7,800

Relative deviations of mean waiting time in a
workstation with failures/repairs