Title: Derivation of Recursive Least Squares
Derivation of Recursive Least Squares

Given that the data satisfy the linear model

    y_k = x_k^T \theta + e_k,   k = 1, \dots, n,

and that X_n = [x_1, \dots, x_n]^T, Y_n = [y_1, \dots, y_n]^T is the collection of the first n regressors and outputs, the least squares solution is

    \hat{\theta}_n = (X_n^T X_n)^{-1} X_n^T Y_n.

Now what happens when we increase n by 1, i.e. when a new data point (x_{n+1}, y_{n+1}) comes in? We need to re-estimate \hat{\theta}_{n+1}, and this requires repeating the calculations and recalculating the inverse (expensive in computer time and storage).
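As a concrete illustration of that cost, the sketch below re-solves the batch problem from scratch each time a sample arrives. The synthetic data and names are assumptions for illustration only, not part of the notes.

    # A minimal sketch: batch least squares recomputed from scratch
    # at every step, to show the cost that RLS avoids.
    import numpy as np

    rng = np.random.default_rng(0)
    p = 3
    theta_true = rng.normal(size=p)    # assumed "true" parameters

    X, Y = [], []
    for n in range(1, 50):
        x = rng.normal(size=p)                      # new regressor x_n
        y = x @ theta_true + 0.01 * rng.normal()    # new output y_n
        X.append(x)
        Y.append(y)
        # Re-solving (X^T X)^{-1} X^T Y from scratch costs O(n p^2 + p^3)
        # per step and requires storing the entire data history.
        theta_hat, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)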
Let's look at the expressions X_{n+1}^T X_{n+1} and X_{n+1}^T Y_{n+1}, and define

    P_n = (X_n^T X_n)^{-1},   B_n = X_n^T Y_n.

Appending the new data point (x_{n+1}, y_{n+1}) to X_n and Y_n gives

(1)    P_{n+1}^{-1} = P_n^{-1} + x_{n+1} x_{n+1}^T
(2)    B_{n+1} = B_n + x_{n+1} y_{n+1}

The least squares estimate at data n is

(3)    \hat{\theta}_n = P_n B_n

or, equivalently,

(4)    B_n = P_n^{-1} \hat{\theta}_n

At data n + 1, using (3),

(5)    \hat{\theta}_{n+1} = P_{n+1} B_{n+1}
                          = P_{n+1} (B_n + x_{n+1} y_{n+1})                        ( applying (2) )
                          = P_{n+1} (P_n^{-1} \hat{\theta}_n + x_{n+1} y_{n+1})    ( substituting (4) into (3) )
                          = P_{n+1} ((P_{n+1}^{-1} - x_{n+1} x_{n+1}^T) \hat{\theta}_n + x_{n+1} y_{n+1})    ( applying (1) )
(6)                       = \hat{\theta}_n + P_{n+1} x_{n+1} (y_{n+1} - x_{n+1}^T \hat{\theta}_n)
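A quick numerical spot-check of identities (1) and (2); the random data here are assumed for illustration and are not part of the original notes.

    # Verify (1): X_{n+1}^T X_{n+1} = X_n^T X_n + x_{n+1} x_{n+1}^T
    # and    (2): X_{n+1}^T Y_{n+1} = X_n^T Y_n + x_{n+1} y_{n+1}
    import numpy as np

    rng = np.random.default_rng(1)
    Xn, Yn = rng.normal(size=(10, 3)), rng.normal(size=10)
    x_new, y_new = rng.normal(size=3), rng.normal()

    Xn1 = np.vstack([Xn, x_new])    # X_{n+1}: one extra row
    Yn1 = np.append(Yn, y_new)      # Y_{n+1}: one extra entry

    assert np.allclose(Xn1.T @ Xn1, Xn.T @ Xn + np.outer(x_new, x_new))  # (1)
    assert np.allclose(Xn1.T @ Yn1, Xn.T @ Yn + x_new * y_new)           # (2)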
The RLS equations are

(7)    \hat{\theta}_{n+1} = \hat{\theta}_n + P_{n+1} x_{n+1} (y_{n+1} - x_{n+1}^T \hat{\theta}_n)
(8)    P_{n+1} = (P_n^{-1} + x_{n+1} x_{n+1}^T)^{-1}

But we still require a matrix inverse to be calculated in (8).
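Written directly as code, the update at this stage still inverts a p x p matrix at every step. This is a sketch with an illustrative function name, not from the notes.

    import numpy as np

    def rls_step_naive(theta, P, x, y):
        """One update of (7)-(8), keeping the explicit inverse in (8)."""
        P_new = np.linalg.inv(np.linalg.inv(P) + np.outer(x, x))  # (8): O(p^3)
        theta_new = theta + P_new @ x * (y - x @ theta)           # (7)
        return theta_new, P_new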
Matrix Inversion Lemma: If A, C and (A + BCD) are nonsingular square matrices (i.e. the inverses exist), then

    (A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}

The best way to prove this is to multiply both sides by (A + BCD). Now, in (8), identify

    A = P_n^{-1},   B = x_{n+1},   C = 1,   D = x_{n+1}^T.
The matrix inversion lemma is very important in converting LS into RLS. To prove the above, multiply the claimed inverse by (A + BCD) and check that the product is the identity:

    (A + BCD)[A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}]
        = I + BCDA^{-1} - (B + BCDA^{-1}B)(C^{-1} + DA^{-1}B)^{-1} DA^{-1}
        = I + BCDA^{-1} - BC(C^{-1} + DA^{-1}B)(C^{-1} + DA^{-1}B)^{-1} DA^{-1}
        = I + BCDA^{-1} - BCDA^{-1} = I.
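A numerical spot-check of the lemma with the identification used in (8); the random matrix and names are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    p = 4
    M = rng.normal(size=(p, p))
    A = M @ M.T + p * np.eye(p)    # a nonsingular A, playing the role of P_n^{-1}
    x = rng.normal(size=p)         # B = x, C = 1 (scalar), D = x^T

    lhs = np.linalg.inv(A + np.outer(x, x))
    Ainv = np.linalg.inv(A)
    # With scalar C = 1, the inner inverse (C^{-1} + D A^{-1} B)^{-1}
    # reduces to division by the scalar 1 + x^T A^{-1} x.
    rhs = Ainv - np.outer(Ainv @ x, x @ Ainv) / (1.0 + x @ Ainv @ x)
    assert np.allclose(lhs, rhs)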
Applying the lemma to (8) with this identification, the RLS equations are

    K_{n+1} = \frac{P_n x_{n+1}}{1 + x_{n+1}^T P_n x_{n+1}}
    \hat{\theta}_{n+1} = \hat{\theta}_n + K_{n+1} (y_{n+1} - x_{n+1}^T \hat{\theta}_n)
    P_{n+1} = P_n - \frac{P_n x_{n+1} x_{n+1}^T P_n}{1 + x_{n+1}^T P_n x_{n+1}}

Since 1 + x_{n+1}^T P_n x_{n+1} is a scalar, no matrix inversion is required. In practice, this recursive formula can be initiated by setting P_0 to a large diagonal matrix, and by letting \hat{\theta}_0 be your best first guess.
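Putting the pieces together, here is a minimal runnable sketch of the algorithm; the initialisation constant 1e6 and the synthetic data are assumed values for illustration, not from the notes.

    import numpy as np

    def rls_step(theta, P, x, y):
        """One RLS update: only a scalar division, no matrix inversion."""
        Px = P @ x
        denom = 1.0 + x @ Px              # scalar 1 + x^T P x
        K = Px / denom                    # gain K_{n+1} = P_{n+1} x_{n+1}
        theta = theta + K * (y - x @ theta)
        P = P - np.outer(Px, Px) / denom
        return theta, P

    # Initialisation: P_0 a large diagonal matrix, theta_0 a first guess.
    p = 3
    theta = np.zeros(p)
    P = 1e6 * np.eye(p)

    rng = np.random.default_rng(3)
    theta_true = np.array([0.5, -1.0, 2.0])
    for _ in range(200):
        x = rng.normal(size=p)
        y = x @ theta_true + 0.01 * rng.normal()
        theta, P = rls_step(theta, P, x, y)
    # theta is now close to theta_true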
RLS with forgetting

We would like to modify the recursive least squares algorithm so that older data has less effect on the coefficient estimation. This can be done by biasing the objective function that we are trying to minimise (i.e. the squared error):

    J_n = \sum_{k=1}^{n} \lambda^{n-k} (y_k - x_k^T \theta)^2

This same weighting function, when used on an ARMAX model, can be used to bias the calculation of the P_n matrix, giving more recent values greater prominence, as follows:

    P_{n+1}^{-1} = \lambda P_n^{-1} + x_{n+1} x_{n+1}^T

where \lambda is chosen to be between 0 and 1.
When \lambda is 1, all time steps are of equal importance, but as \lambda becomes smaller, less emphasis is given to older values. We can use this expression to derive a recursive form of the weighted least squares estimate. The matrix inversion lemma will then give a method of calculating P_{n+1} given P_n, to get

    P_{n+1} = \frac{1}{\lambda} \left( P_n - \frac{P_n x_{n+1} x_{n+1}^T P_n}{\lambda + x_{n+1}^T P_n x_{n+1}} \right)
The RLS algorithm with forgetting factor is

    K_{n+1} = \frac{P_n x_{n+1}}{\lambda + x_{n+1}^T P_n x_{n+1}}
    \hat{\theta}_{n+1} = \hat{\theta}_n + K_{n+1} (y_{n+1} - x_{n+1}^T \hat{\theta}_n)
    P_{n+1} = \frac{1}{\lambda} (P_n - K_{n+1} x_{n+1}^T P_n)
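A runnable sketch of the forgetting-factor variant; lam = 0.98 and the drifting-parameter example are assumed for illustration, not from the notes.

    import numpy as np

    def rls_forget_step(theta, P, x, y, lam=0.98):
        """One RLS update with exponential forgetting, 0 < lam <= 1."""
        Px = P @ x
        denom = lam + x @ Px                    # scalar lambda + x^T P x
        K = Px / denom                          # gain K_{n+1}
        theta = theta + K * (y - x @ theta)
        P = (P - np.outer(Px, Px) / denom) / lam
        return theta, P

    # Tracking a slowly drifting parameter: data from n - k steps back
    # is discounted by lambda^{n-k}, so the estimate can follow the drift.
    rng = np.random.default_rng(4)
    p = 2
    theta, P = np.zeros(p), 1e6 * np.eye(p)
    theta_true = np.array([1.0, -1.0])
    for n in range(500):
        theta_true += 0.002 * rng.normal(size=p)   # slow drift
        x = rng.normal(size=p)
        y = x @ theta_true + 0.01 * rng.normal()
        theta, P = rls_forget_step(theta, P, x, y)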