Linear Prediction Algorithms

Mohit Garg (00D07015)
http://www.ee.iitb.ac.in/uma/~mohit
Indian Institute of Technology, Bombay, India

April 24, 2003

Abstract

Estimating a variable given some other variables is one of the most commonly occurring problems in real life. More specifically, estimation or prediction is one of the most celebrated problems in time series analysis. This report is a MATLAB implementation of two algorithms for the same: the Levinson-Durbin Algorithm and the Weighted Least Squares Error Algorithm.

Keywords: Linear Prediction, Levinson-Durbin, Least Squares
1 Introduction
One of the most celebrated problems in time series analysis is that of prediction, i.e., given a series of sample values of a stationary discrete-time process (e.g. a signal), we need to predict its future samples. Specifically, given $x(n-1), x(n-2), \ldots, x(n-M)$, we need to predict the value of $x(n)$. In general, we may express this predicted value as a function of the given $M$ past samples, i.e.

$$\hat{x}(n \mid n-1, n-2, \ldots, n-M) = \Psi\big(x(n-1), x(n-2), \ldots, x(n-M)\big) \quad (1)$$

Now, if this function $\Psi$ is a linear function of the variables $x(n-1), x(n-2), \ldots, x(n-M)$, we say that the prediction is linear. We can visualize this in an $M$-dimensional space spanned by $x(n-1), x(n-2), \ldots, x(n-M)$. Hence, we can write

$$\hat{x}(n \mid n-1, n-2, \ldots, n-M) = \sum_{k=1}^{M} a_k\, x(n-k) \quad (2)$$

where the $a_k$ are constant coefficients. Such a predictor can be realised using a Tapped-Delay Line filter (Fig. 1). The prediction error is defined as

$$f_M(n) = x(n) - \hat{x}(n \mid n-1, n-2, \ldots, n-M) \quad (3)$$

Here, the subscript $M$ in $f_M(n)$ denotes the order of the prediction, i.e., the number of past samples that are used to predict the next sample. Hence, the problem of Linear Prediction that we are interested in reduces to determining these coefficients subject to some condition. Different algorithms and conditions on the $a_k$'s have been proposed and are in use. We will consider two common ones in this report: Minimum Mean Square Error and Weighted Least Square Error.

Figure 1: Tapped-Delay Line realisation of the linear predictor (source [7])
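To make the tapped-delay-line structure concrete, here is a minimal MATLAB sketch of equations (2) and (3). The coefficient values in `a` are placeholders chosen purely for illustration; how to compute them properly is the subject of the remaining sections.

```matlab
% Tapped-delay-line linear predictor (Eq. 2) and prediction error (Eq. 3).
% The coefficients a_1..a_M below are arbitrary placeholders.
M = 3;                         % prediction order
a = [0.5; -0.2; 0.1];          % hypothetical predictor coefficients a_k
x = randn(1000, 1);            % a sample realisation of the process x(n)

% xhat(n) = sum_{k=1}^{M} a_k * x(n-k): an FIR filter whose impulse
% response is [0 a_1 ... a_M] (the leading zero supplies the unit delay).
xhat = filter([0; a], 1, x);

fM = x - xhat;                 % Mth-order prediction error f_M(n)
```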
2 Minimum Mean Square Estimate (MMSE)
Obviously, we would like to minimise the error between the predicted value and the actual value. But since the sequence of variables is probabilistic, we will need to define some measure of error. A commonly used measure in probability theory is the RMS (Root Mean Square) Error¹. Since minimising the RMS error is equivalent to minimising its square, we work directly with the mean-square prediction error, defined as

$$P_M = E\left( |f_M(n)|^2 \right) \quad (4)$$

Minimising $P_M$ with respect to the coefficients (setting $\partial P_M / \partial a_k = 0$ for each $k$), we get the Wiener-Hopf equations (refer [5]), i.e.

$$\mathbf{R}\,\mathbf{a} = \mathbf{r} \quad (5)$$

where

$$\mathbf{a} = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_M \end{bmatrix}^T, \qquad \mathbf{r} = \begin{bmatrix} R_{XX}(1) & R_{XX}(2) & R_{XX}(3) & \cdots & R_{XX}(M) \end{bmatrix}^T$$

$$\mathbf{R} = \begin{bmatrix}
R_{XX}(0)   & R_{XX}(-1)  & R_{XX}(-2)  & \cdots & R_{XX}(1-M) \\
R_{XX}(1)   & R_{XX}(0)   & R_{XX}(-1)  & \cdots & R_{XX}(2-M) \\
R_{XX}(2)   & R_{XX}(1)   & R_{XX}(0)   & \cdots & R_{XX}(3-M) \\
\vdots      & \vdots      & \vdots      & \ddots & \vdots      \\
R_{XX}(M-1) & R_{XX}(M-2) & R_{XX}(M-3) & \cdots & R_{XX}(0)
\end{bmatrix}$$

Here, $R_{XX}(k)$ denotes the autocorrelation function $E(x(n)\,x(n-k))$ of the sequence $x(n)$ for a lag $k$. Note that $R_{XX}(-k) = R_{XX}(k)$ for real signals (which we are interested in), since the process is assumed to be stationary. In order to solve for the coefficients $a_k$, we need to:

- First, determine the autocorrelation function up to order $M$ for the input process $x(n)$.
- Then, solve the above equations (a MATLAB sketch of this step is given below).

Determination of the autocorrelation function is addressed at the end of the section. Here we proceed assuming that the autocorrelation function up to order $M$ is available.
¹ An estimate found by minimising the Root Mean Square Error is often referred to as the MMSE estimate.
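Given the autocorrelation values, the Wiener-Hopf system (5) can be solved directly. Below is a minimal MATLAB sketch; the vector `Rxx` and its numerical values are hypothetical, with `Rxx(k+1)` holding $R_{XX}(k)$.

```matlab
% Direct solution of the Wiener-Hopf equations R a = r (Eq. 5).
M   = 3;
Rxx = [1.0; 0.5; 0.2; 0.05];   % hypothetical values of R_XX(0..M)

% For a real stationary process R_XX(-k) = R_XX(k), so R is the symmetric
% Toeplitz matrix whose first column is [R_XX(0) ... R_XX(M-1)]'.
R = toeplitz(Rxx(1:M));        % M-by-M autocorrelation matrix
r = Rxx(2:M+1);                % right-hand side [R_XX(1) ... R_XX(M)]'

a = R \ r;                     % predictor coefficients a_1..a_M
```

Solving by general Gaussian elimination costs on the order of $M^3$ operations and ignores the structure of $\mathbf{R}$; the recursion described next exploits the Toeplitz property to solve the same system in on the order of $M^2$ operations.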
2.1 The Levinson-Durbin Recursion
Assuming we have already found the autocorrelation function for the input process $x(n)$, we need to solve the $M \times M$ system of equations to get the desired coefficients $a_k$. One can use standard methods for solving linear equations, but the structure of the matrix $\mathbf{R}$ gives us some very unique advantages. The Levinson-Durbin Recursion², named in recognition of its first use by Levinson (1947) and its independent reformulation at a later date by Durbin (1960), is a direct recursive method for solving for the coefficients of the prediction filter. It makes particular use of the Toeplitz structure [3] of the matrix $\mathbf{R}$. A detailed derivation and proof of the method can be found in [2].

Here we outline the basic steps involved in the algorithm. The algorithm uses the prediction filter coefficients of order $m-1$ to compute the coefficients of the filter of order $m$.

For $m = 1$ to $M$:

1. Calculate the $m$th-order reflection coefficient
$$\Gamma_m = -\frac{\Delta_{m-1}}{P_{m-1}}$$

2. Calculate the coefficients of the $m$th-order prediction-error filter, given by
$$\hat{a}_{m,k} = \hat{a}_{m-1,k} + \Gamma_m\, \hat{a}^{*}_{m-1,m-k}, \qquad k = 0, 1, \ldots, m$$
where
$$\hat{a}_{M,k} = \begin{cases} 1 & k = 0 \\ -a_{M,k} & k = 1, 2, \ldots, M \end{cases}$$

3. Calculate the prediction error power for the $m$th-order filter as
$$P_m = P_{m-1}\left(1 - |\Gamma_m|^2\right)$$

4. Calculate $\Delta_m = \mathbf{r}^{BT}_{m+1}\, \hat{\mathbf{a}}_m$, where $\mathbf{r}^{BT}_{m+1} = \begin{bmatrix} R_{XX}(m+1) & R_{XX}(m) & \cdots & R_{XX}(1) \end{bmatrix}$. This quantity is consumed by step 1 of the next iteration, so it is not needed at $m = M$.

The algorithm is initialised by setting $\hat{a}_0 = 1$, $P_0 = R_{XX}(0)$ and $\Delta_0 = R_{XX}(1)$.
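For real-valued signals (so the conjugate in step 2 drops out), the recursion translates almost line for line into MATLAB. The function below is a sketch with names of my own choosing, not the report's implementation; where available, the Signal Processing Toolbox routine `levinson` computes the same prediction-error polynomial and can serve as a cross-check.

```matlab
function [ahat, P] = levdurb(Rxx, M)
%LEVDURB Levinson-Durbin recursion (sketch, real-valued signals).
%   Rxx(k+1) must hold R_XX(k) for k = 0..M.  Returns the prediction-error
%   filter ahat = [1; -a_1; ...; -a_M] and the final error power P.

ahat  = 1;            % initialisation: ahat_0 = 1
P     = Rxx(1);       % P_0     = R_XX(0)
Delta = Rxx(2);       % Delta_0 = R_XX(1)

for m = 1:M
    Gamma = -Delta / P;                              % step 1
    % Step 2: ahat_{m,k} = ahat_{m-1,k} + Gamma * ahat_{m-1,m-k}
    ahat  = [ahat; 0] + Gamma * flipud([ahat; 0]);
    P     = P * (1 - abs(Gamma)^2);                  % step 3
    if m < M
        % Step 4: Delta_m = [R_XX(m+1) ... R_XX(1)] * ahat_m
        Delta = dot(Rxx(m+2:-1:2), ahat);
    end
end
end
```

With the hypothetical `Rxx` of the earlier sketch, `[ahat, P] = levdurb(Rxx, 3)` should satisfy `ahat(2:end) == -a` for the `a` obtained by direct solution of (5).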
2.2 Calculation of the Autocorrelation Coefficients

The autocorrelation function of the input process may not be known a priori. Hence, we will need to estimate it based on the input process itself (a minimal sketch of one common estimator is given below). In
² The Levinson-Durbin algorithm is also applicable when we do not know the autocorrelation function beforehand [2].
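As a concrete illustration of this estimation step, here is a minimal MATLAB sketch using the common biased sample-autocorrelation estimator; this is one standard choice, and the estimator the report itself goes on to use may differ.

```matlab
% Biased sample estimate of R_XX(k) ~ E(x(n)x(n-k)) for lags k = 0..M.
M = 3;
x = randn(1000, 1);            % observed samples of the process x(n)
N = length(x);

Rxx = zeros(M+1, 1);
for k = 0:M
    Rxx(k+1) = sum(x(1+k:N) .* x(1:N-k)) / N;   % divide by N => biased
end

% Equivalent one-liner with the Signal Processing Toolbox:
% c = xcorr(x, M, 'biased');  Rxx = c(M+1:end);
```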