One Step Ahead Wind Speed Prediction Using Recurrent Neural Network

1
One Step Ahead Wind Speed Prediction Using
Recurrent Neural Network
  • Richard Welch

2
Overview
  • Introduction
  • Data
  • Neural Networks
  • Training
  • GPU Approach
  • Conclusions & Future Work

3
Introduction
  • North American power grid is the largest system
    ever built
  • 10,000 separate generation sources
  • Hundreds of thousands of miles of transmission
    lines
  • Voltage should be within ±5% of nominal
  • Many types of generation devices with different
    operating characteristics
  • Nuclear, natural gas, hydro, thermal, wind,
    solar, etc
  • Many operating constraints exist
  • Above all, generation must equal load at all
    times. If not, then voltage and frequency will
    change.

4
Introduction
  • Penetration rate of wind farms is increasing
  • Many projects currently being planned or
    developed
  • Output of a wind farm depends primarily on the
    cube of the wind speed
  • Other factors include air density, turbine
    design, etc
  • Because of the inconsistent nature of wind, the
    output of the wind farm is also inconsistent
  • This can lead to undesirable voltage and
    frequency variations

5
Introduction
  • What can be done to minimize these disturbances?
  • Wind speed prediction
  • Use of these predictions for better dispatch
    planning
  • The problem is that wind speed prediction isn't
    a trivial task
  • Depending on the prediction horizon and data
    granularity, there may be varying degrees of
    statistical autocorrelation
  • Some studies have looked at predicting wind speed
    24 hours in advance with hourly data
  • For very short term energy dispatch, look at
    minutes (up to 15 minute time horizon)
  • A good method for time series prediction: use
    neural networks
  • They aren't required to know the underlying
    processes
  • Helpful when lots of historical data is available

6
Overview
  • Introduction
  • Data
  • Neural Networks
  • Training
  • GPU Approach
  • Conclusions & Future Work

7
Data
  • The data proposed for this research can be
    obtained from the National Renewable Energy
    Laboratory (NREL)'s National Wind Technology
    Center (NWTC) in Boulder, CO.
  • This data is freely available, and is given in
    minute intervals (2 second average at each
    minute)
  • Data are available going back over a decade
  • NREL maintains (or networks with) other sites
    around the country to provide data for other
    locations (although not as thorough). Another
    site in nearby Golden, CO provides extensive
    solar insolation data.
  • URL: http://www.nrel.gov/midc/nwtc_m2/

8
Image taken from http://www.nrel.gov/midc/nwtc_m2/pictures/m2tower.jpg
Proposed data to be used:
  Inputs: wind speed at 80 m, temperature at 2 m,
    relative humidity at 2 m
  Output: predicted wind speed 1 minute ahead
All data normalized
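The slide says all data are normalized but does not say how. A minimal sketch, assuming min-max scaling to [0, 1] (`normalize` is a hypothetical helper, not from the slides):

```python
import numpy as np

def normalize(x):
    """Hypothetical helper: min-max scale a feature vector to [0, 1]."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

# Example: raw 80 m wind speeds (m/s) scaled before training.
speeds = np.array([3.2, 7.8, 12.4, 5.1])
scaled = normalize(speeds)  # minimum maps to 0.0, maximum to 1.0
```

The same scaling would be applied independently to each input channel (wind speed, temperature, humidity) and to the target.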
9
This graph was developed by the National
Renewable Energy Laboratory for the U.S.
Department of Energy.
10
Overview
  • Introduction
  • Data
  • Neural Networks
  • Training
  • GPU Approach
  • Conclusions & Future Work

11
Neural Networks
Biological Neuron
Artificial Neuron
12
Neural Networks
  • Multi Layer Perceptron (MLP)
  • Simple, no feedback
  • Recurrent Neural Network (RNN)
  • Discrete time delay feedback
  • Simultaneous Recurrent Neural Network (SRN)
  • Feedback, but no time delay
  • Echo State Network (ESN)
  • Multiple discrete time delay feedback
  • Random internal connections
  • Proposed neural network architecture: RNN (a
    good mix of simplicity and feedback possibilities)

13
Neural Networks - RNN
RNN, 2nd order (2 delays)
Input Layer Size: 4
Hidden Layer Size: 10
Context Layer Size (×2): 10
Output Layer Size: 1
Weights = size(W) + size(V)
  W: (Inputs + 2 × Context) × Hidden
  V: Hidden × Output
  → W = 240
  → V = 10
Total Number of Weights: 250
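The layer sizes above fully determine the weight counts. A minimal NumPy sketch of one forward step of such a 2nd-order RNN (the tanh hidden activation and linear output are assumptions; the slide gives only the sizes):

```python
import numpy as np

# Layer sizes from the slide: 4 inputs, 10 hidden units, two
# 10-unit context (delay) layers, 1 output.
N_IN, N_HID, N_CTX, N_OUT = 4, 10, 10, 1

rng = np.random.default_rng(0)
W = rng.standard_normal((N_HID, N_IN + 2 * N_CTX))  # 10 x 24 = 240 weights
V = rng.standard_normal((N_OUT, N_HID))             # 1 x 10  =  10 weights
assert W.size + V.size == 250                       # matches the slide's total

def forward(x, ctx1, ctx2):
    """One step: the hidden state feeds back through two delay lines."""
    z = np.concatenate([x, ctx1, ctx2])  # 24-element combined input
    hidden = np.tanh(W @ z)              # assumed tanh hidden activation
    output = V @ hidden                  # assumed linear output layer
    return output, hidden                # hidden becomes the next ctx1
```

At each time step the returned hidden vector is shifted into the context layers (ctx2 ← ctx1 ← hidden), giving the two discrete time delays.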
14
Overview
  • Introduction
  • Data
  • Neural Networks
  • Training
  • GPU Approach
  • Conclusions & Future Work

15
Training
  • Proposed training method Particle Swarm
    Optimization (PSO)
  • Alternatives for training recurrent networks
    exist (Back Propagation Through Time, or BPTT),
    but are sequential
  • PSO parameters
  • w = 0.8, c1 = 2.0, c2 = 2.0
  • Number of particles = 30

16
Training
  • PSO is a Swarm Intelligence (SI) technique
    modeled after the schooling of fish and the
    flocking of birds. Developed by Dr. Eberhart and
    Dr. Kennedy in 1995.
  • In PSO, there is a swarm that is composed of
    n-dimensional individuals that fly through the
    solution space.

17
Training
  • Each individual represents a solution to the
    problem being addressed.
  • Associated with each position is a fitness value,
    which is a measure of the quality of solution
    (generally, try to minimize).
  • PSO uses the experience of individuals to guide
    the search of the entire swarm.

18
Training
  • In this research, each particle is 250 dimensions
    (one dimension for each network weight).
  • Fitness is a measure of the difference in output
    of the network and expected results, computed
    after one pass of all training data. Usually use
    Mean Squared Error (MSE).
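A minimal sketch of the MSE fitness described above (`fitness` is an illustrative name, not from the slides):

```python
import numpy as np

def fitness(predictions, targets):
    """MSE between network outputs and observed next-minute wind speeds."""
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(np.mean((predictions - targets) ** 2))
```

A particle that reproduces the training targets exactly scores 0; PSO tries to minimize this value.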

19
Training
  • PSO pseudo code
  • Randomize particles and velocities
  • Do
  • Evaluate fitness of each particle
  • Share Pbest and Gbest with all particles
  • Calculate new velocities
  • Calculate new positions
  • Loop until desired fitness found
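The pseudo code above can be sketched in NumPy. This is a minimal sketch under stated assumptions, not the author's implementation: the sphere function stands in for the network-MSE fitness (which the slides leave abstract), the loop runs a fixed number of iterations rather than until a desired fitness, and no velocity clamping is used.

```python
import numpy as np

# PSO parameters from the slides: w = 0.8, c1 = c2 = 2.0,
# 30 particles, 250 dimensions (one per network weight).
W_INERTIA, C1, C2 = 0.8, 2.0, 2.0
N_PARTICLES, N_DIMS = 30, 250

def sphere(x):
    """Placeholder fitness (minimum at the origin); the real fitness
    would be the network's MSE over one pass of the training data."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(1)
pos = rng.uniform(-1.0, 1.0, (N_PARTICLES, N_DIMS))  # randomize particles
vel = np.zeros((N_PARTICLES, N_DIMS))                # and velocities
pbest = pos.copy()
pbest_fit = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()
init_best = pbest_fit.min()

for _ in range(50):                          # fixed iteration budget
    for i in range(N_PARTICLES):
        f = sphere(pos[i])                   # evaluate fitness
        if f < pbest_fit[i]:                 # update personal best
            pbest_fit[i], pbest[i] = f, pos[i].copy()
    gbest = pbest[pbest_fit.argmin()].copy() # share global best
    r1 = rng.random((N_PARTICLES, N_DIMS))
    r2 = rng.random((N_PARTICLES, N_DIMS))
    vel = (W_INERTIA * vel
           + C1 * r1 * (pbest - pos)         # pull toward Pbest
           + C2 * r2 * (gbest - pos))        # pull toward Gbest
    pos = pos + vel                          # calculate new positions
```

The inner fitness loop is exactly the part the GPU approach later parallelizes, one particle per block.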

20
Training
[Figure: PSO update in two dimensions: a particle at position Xk with velocity Vk is pulled toward its personal best (Pbest) and the global best (Gbest), giving the new position Xk+1]
21
Overview
  • Introduction
  • Data
  • Neural Networks
  • Training
  • GPU Approach
  • Conclusions & Future Work

22
GPU Approach
  • Motivation for parallelizing: sufficiently
    training the network over a reasonable amount of
    training data can take a long time.
  • Because of the nature of wind, it is important
    to produce good predictions quickly.
  • Any speedup that yields better predictions
    sooner can be very valuable.

23
GPU Approach
  • Planned approach was to parallelize the PSO
    operations and matrix multiplications.
  • Essentially parallelize each particle (block of
    threads), and handle both matrix multiplies in
    sequence (Hidden = W × Input, then
    Output = V × Hidden).

24
GPU Approach
  • Organize grid into 30 blocks of 256 threads each.
  • This maps to 30 particles, each using 256 threads
    for PSO calculations and matrix multiplies
    (particle size is 250)
  • Use all 7680 threads
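A quick check of the thread budget implied above (the padding rationale in the comment is an assumption; the slide gives only the totals):

```python
# Thread budget for the proposed grid: one CUDA block per particle.
N_PARTICLES = 30          # blocks in the grid
THREADS_PER_BLOCK = 256   # covers the 250 weights, rounded up to
                          # a multiple of the 32-thread warp size
total_threads = N_PARTICLES * THREADS_PER_BLOCK  # 7680, as on the slide
```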

25
GPU Approach
[Diagram: each of the 30 particles holds its own W and V weight matrices; a shared input vector yields per-particle hidden vectors and output vectors]
26
GPU Approach
  • Once all computations are complete, determine
    fitness and share amongst all particles.
  • Determine new Pbest, and Gbest.
  • Allow all particles to iterate again using new
    values.
  • Eventually, will converge upon an optimal
    solution (global or local).

27
Overview
  • Introduction
  • Data
  • Neural Networks
  • Training
  • GPU Approach
  • Conclusions

28
Conclusions
  • Proposed parallel PSO implementation should
    yield close to linear speedup.
  • Performance on the GPU seems to hinge upon
    effective use of fast memory (minimize use of
    global memory, etc.). Lots of information on
    performance is in the programming guide.

29
References
[1] Wang X, Sideratos G, Hatziargyriou N, Tsoukalas LH, "Wind speed forecasting for power system operational planning," 2004 International Conference on Probabilistic Methods Applied to Power Systems, Sept. 12-16, 2004, pp. 470-474.
[2] Palangpour P, Venayagamoorthy GK, Duffy K, "Recurrent Neural Network Based Predictions of Elephant Migration in a South African Game Reserve," 2006 International Joint Conference on Neural Networks, July 16-21, 2006, pp. 4084-4088.
[3] Kiran R, Jetti SR, Venayagamoorthy GK, "Online Training of a Generalized Neuron with Particle Swarm Optimization," 2006 International Joint Conference on Neural Networks, July 16-21, 2006, pp. 5088-5095.
[4] NVIDIA CUDA, http://www.nvidia.com/object/cuda_get.html [online]
[5] Ferreira AA, Ludermir TB, "Using Reservoir Computing for Forecasting Time Series: Brazilian Case Study," Eighth International Conference on Hybrid Intelligent Systems, Sept. 10-12, 2008, pp. 602-607.
[6] National Renewable Energy Laboratory, Golden, CO, http://www.nrel.gov/ [online]
[7] National Wind Technology Center, Boulder, CO, http://www.nrel.gov/midc/nwtc_m2/ [online]
[8] Li S, Wunsch DC, O'Hair E, Giesselmann MG, "Wind turbine power estimation by neural networks with Kalman filter training on a SIMD parallel machine," International Joint Conference on Neural Networks, Vol. 5, July 10-16, 1999, pp. 3430-3434.