Cooper Extracts

The previously defined correlation was simply a number, since the random variables were not necessarily defined as being associated with time functions. In the following case, however, every pair of random variables can be related by the time separation between them, and the correlation will be a function of this separation. Thus, it becomes appropriate to define a correlation function in which the argument is the time separation of the two random variables. If the two random variables come from the same random process, this function will be known as the autocorrelation function. If they come from different random processes, it will be called the crosscorrelation function. We will consider autocorrelation functions first.

If X(t) is a sample function from a random process, and the random variables are defined to be

X1 = X(t1)
X2 = X(t2)

then the autocorrelation function is defined to be

Rx(t1, t2) = E[X1 X2] = E[X(t1) X(t2)]    (6-1)
This definition is valid for both stationary and nonstationary random processes. However, our interest is primarily in stationary processes, for which further simplification of (6-1) is possible. It may be recalled from the previous chapter that for a wide-sense stationary process all such ensemble averages are independent of the time origin. Accordingly, for a wide-sense stationary process,

Rx(t1, t2) = Rx(t1 + T, t2 + T) = E[X(t1 + T) X(t2 + T)]

Since this expression is independent of the choice of time origin, we can set T = -t1 to give

Rx(t1, t2) = Rx(0, t2 - t1) = E[X(0) X(t2 - t1)]

It is seen that this expression depends only on the time difference t2 - t1. Setting this time difference equal to τ = t2 - t1 and suppressing the zero in the argument of Rx(0, t2 - t1), we can rewrite (6-1) as

Rx(τ) = E[X(t1) X(t1 + τ)]    (6-2)

This is the expression for the autocorrelation function of a stationary process and depends only on τ and not on the value of t1. Because of this lack of dependence on the particular time at which the ensemble averages are taken, it is common practice to write Rx(τ) without the subscript on t; thus,

Rx(τ) = E[X(t) X(t + τ)]
Whenever correlation functions relate to nonstationary processes, they are dependent on the particular time at which the ensemble average is taken as well as on the time difference between samples, and they must be written as Rx(t1, t2) or Rx(t1, τ). In all cases in this and subsequent chapters, unless specifically stated otherwise, it is assumed that all correlation functions relate to wide-sense stationary random processes.
It is also possible to define a time autocorrelation function for a particular sample function as¹

ℛx(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) x(t + τ) dt = ⟨x(t) x(t + τ)⟩    (6-3)

For the special case of an ergodic process, ⟨x(t) x(t + τ)⟩ is the same for every x(t) and equal to Rx(τ). That is,

ℛx(τ) = Rx(τ)    for an ergodic process    (6-4)

The assumption of ergodicity, where it is not obviously invalid, often simplifies the computation of correlation functions.
From (6-2) it is seen readily that for τ = 0, since Rx(0) = E[X(t1) X(t1)], the autocorrelation function is equal to the mean-square value of the process. For values of τ other than τ = 0, the autocorrelation function Rx(τ) can be thought of as a measure of the similarity of the waveform X(t) and the waveform X(t + τ). To illustrate this point further, let X(t) be a sample function from a zero-mean stationary random process and form the new function

Y(t) = X(t) - ρX(t + τ)

By determining the value of ρ that minimizes the mean-square value of Y(t), we will have a measure of how much of the waveform X(t + τ) is contained in the waveform X(t). The determination of ρ is made by computing the variance of Y(t), setting the derivative of the variance with respect to ρ equal to zero, and solving for ρ. The operations are as follows:
E{[Y(t)]²} = E{[X(t) - ρX(t + τ)]²}
           = E{X²(t) - 2ρX(t)X(t + τ) + ρ²X²(t + τ)}

σ_Y² = σ_X² - 2ρRx(τ) + ρ²σ_X²

dσ_Y²/dρ = -2Rx(τ) + 2ρσ_X² = 0

ρ = Rx(τ)/σ_X²    (6-5)
It is seen from (6-5) that ρ is directly related to Rx(τ) and is exactly the correlation coefficient defined in Chapter 3. The coefficient ρ can be thought of as the fraction of the waveshape of X(t) remaining after τ seconds has elapsed. It must be remembered that ρ was calculated on a statistical basis, and that it is the average retention of waveshape over the ensemble, and not this property in any particular sample function, that is important. As shown previously, the correlation coefficient ρ can vary from +1 to -1. For a value of ρ = 1, the waveshapes would be identical, that is, completely correlated. For ρ = 0, the waveforms would be completely uncorrelated; that is, no part of the waveform X(t + τ) would be contained in X(t). For ρ = -1, the waveshapes would be identical, except for opposite signs; that is, the waveform X(t + τ) would be the negative of X(t).

¹ The symbol ⟨ ⟩ is used to denote time averaging.
For an ergodic process or for nonrandom signals, the foregoing interpretation can be made in terms of average power instead of variance and in terms of the time correlation function instead of the ensemble correlation function.

Since Rx(τ) is dependent both on the amount of correlation ρ and the variance of the process, σ_X², it is not possible to estimate the significance of some particular value of Rx(τ) without knowing one or the other of these quantities. For example, if the random process has a zero mean and the autocorrelation function has a positive value, the most that can be said is that the random variables X(t1) and X(t1 + τ) probably have the same sign.² If the autocorrelation function has a negative value, it is likely that the random variables have opposite signs. If it is nearly zero, the random variables are about as likely to have opposite signs as they are to have the same sign.
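The result in (6-5) is easy to verify numerically. The following sketch (not part of the original text; it assumes a simple first-order autoregressive sequence as the random process, with illustrative parameter values) sweeps ρ over a grid and confirms that the minimizing value agrees with Rx(τ)/σ_X².

% Numerical check of (6-5): the rho minimizing E[Y^2] equals Rx(tau)/sigma^2.
N = 100000;                            % number of samples
a = 0.9;                               % AR(1) coefficient (illustrative)
x = filter(1, [1 -a], randn(1, N));    % zero-mean first-order process
k = 3;                                 % lag in samples, playing the role of tau
x1 = x(1:end-k); x2 = x(1+k:end);
p = -1:0.01:1;                         % candidate values of rho
msq = zeros(size(p));
for i = 1:length(p)
    msq(i) = mean((x1 - p(i)*x2).^2);  % mean-square value of Y(t)
end
[m, imin] = min(msq);
rho_min = p(imin)                      % minimizing rho found by search
rho_theory = mean(x1.*x2)/mean(x.^2)   % Rx(tau)/sigma^2, per (6-5)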
Exercise 6-1.1

A random process has sample functions of the form

X(t) = A    0 ≤ t ≤ 1
     = 0    elsewhere

where A is a random variable that is uniformly distributed from 0 to 10. Using the basic definition of the autocorrelation function as given by Equation (6-1), find the autocorrelation function of this process.

Answer:

Rx(t1, t2) = 100/3    0 ≤ t1, t2 ≤ 1
           = 0        elsewhere
² This is strictly true only if f(x1) is symmetrical about the axis x1 = 0.
Exercise 6-1.2

Define a random variable Z(t) as

Z(t) = X(t) + X(t + τ1)

where X(t) is a sample function from a stationary random process whose autocorrelation function is

Rx(τ) = exp(-τ²)

Write an expression for the autocorrelation function of the random process Z(t).

Answer:

Rz(τ) = 2 exp(-τ²) + exp[-(τ + τ1)²] + exp[-(τ - τ1)²]
6-2 Example: Autocorrelation Function of a Binary Process
The above ideas may be made somewhat clearer by considering, as a special example, a random process having a very simple autocorrelation function. Figure 6-1 shows a typical sample function from a discrete, stationary, zero-mean random process in which only two values, ±A, are possible. The sample function either can change from one value to the other every ta seconds or remain the same, with equal probability. The time t0 is a random variable with respect to the ensemble of possible time functions and is uniformly distributed over an interval of length ta. This means, as far as the ensemble is concerned, that changes in value can occur at any time with equal probability. It is also assumed that the value of X(t) in any one interval is statistically independent of its value in any other interval.

Although the random process described in the above paragraph may seem contrived, it actually represents a very practical situation. In modern digital communication systems, the messages to be conveyed are converted into binary symbols. This is done by first sampling the message at periodic time instants and then quantizing the samples into a finite number of amplitude levels, as discussed earlier in connection with the uniform probability density function. Each amplitude level is then represented by a block of binary symbols; for example, 256 amplitude levels can each be uniquely represented by a block of 8 binary symbols. The binary symbols can in turn be represented by a voltage level of +A or -A. Thus, a sequence of binary symbols becomes a waveform of the type shown in Figure 6-1. Similarly, this waveform is typical of those found in digital computers or in communication links connecting computers together. Hence,
Figure 6-1 A discrete, stationary sample function.

Figure 6-2 Autocorrelation function of the process in Figure 6-1.
the random process being considered here is not only one of the simplest ones to analyze, but is also one of the most practical ones in the real world.
The autocorrelation function of this process will be determined by heuristic arguments rather than by rigorous derivation. In the first place, when |τ| is larger than ta, then t1 and t2 = t1 + τ cannot lie in the same interval, and X1 and X2 are statistically independent. Since X1 and X2 have zero mean, the expected value of their product must be zero, as shown by (3-22); that is,

Rx(τ) = E[X1 X2] = E[X1] E[X2] = 0    |τ| > ta

since E[X1] = E[X2] = 0. When |τ| is less than ta, then t1 and t1 + τ may or may not be in the same interval, depending upon the value of t0. Since t0 can be anywhere, with equal probability, the probability that they do lie in the same interval is proportional to the difference between ta and τ. In particular, for τ ≥ 0, it is seen that t1 and t1 + τ lie in the same interval when t1 + τ - ta ≤ t0 < t1. Hence,

Pr (t1 and t1 + τ are in the same interval) = Pr (t1 + τ - ta ≤ t0 < t1)
    = (1/ta)[t1 - (t1 + τ - ta)] = (ta - τ)/ta

since the probability density function for t0 is just 1/ta. When τ < 0, it is seen that t0 ≤ t1 + τ < t1 < t0 + ta, which yields a probability of (ta + τ)/ta. Thus, in general,

Pr (t1 and t1 + τ are in the same interval) = (ta - |τ|)/ta = 1 - |τ|/ta

When they are in the same interval, the product of X1 and X2 is always A²; when they are not, the expected product is zero. Hence,

Rx(τ) = A²(1 - |τ|/ta)    |τ| ≤ ta    (6-6)
      = 0                 |τ| > ta

This function is sketched in Figure 6-2.
It is interesting to consider the physical interpretation of this autocorrelation function in light of the previous discussion. Note that when |τ| is small (less than ta), there is an increased probability that X(t1) and X(t1 + τ) will have the same value, and the autocorrelation function is positive. When |τ| is greater than ta, it is equally probable that X(t1) and X(t1 + τ) will have the same value as that they will have opposite values, and the autocorrelation function is zero. For τ = 0 the autocorrelation function yields the mean-square value of A².
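The triangular result in (6-6) is easy to confirm by simulation. The sketch below (not part of the original text; the parameter values are arbitrary) generates a long sample function of the binary process and compares the estimated autocorrelation with A²[1 - |τ|/ta].

% Simulate the binary process of Figure 6-1 and estimate its autocorrelation.
A = 1; ta = 1;                     % amplitude and interval length
ns = 50;                           % samples per interval
M = 2000;                          % number of intervals
b = A*(2*(rand(1, M) > 0.5) - 1);  % +/-A with equal probability, independent
x = reshape(repmat(b, ns, 1), 1, ns*M);   % hold each value for ta seconds
x = x(ceil(ns*rand):end);          % random starting point plays the role of t0
N = length(x);
maxlag = 2*ns;                     % estimate out to |tau| = 2*ta
R = zeros(1, maxlag+1);
for k = 0:maxlag
    R(k+1) = sum(x(1:N-k).*x(1+k:N))/N;   % biased time-average estimate
end
tau = (0:maxlag)*(ta/ns);
plot(tau, R, tau, max(A^2*(1 - tau/ta), 0), '--')   % estimate vs. (6-6)
xlabel('LAG'); ylabel('Rx')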


Exercise 6-2.1

A speech waveform is sampled 4000 times a second and each sample is quantized into 256 amplitude levels. The resulting amplitude levels are represented by a binary voltage having values of ±5. Assuming that successive binary symbols are statistically independent, write the autocorrelation function of the binary process.

Answer:

Rx(τ) = 25[1 - 32,000|τ|]    |τ| ≤ 1/32,000
      = 0                    elsewhere
Exercise 6-2.2

[Figure: a sample function x(t) consisting of rectangular pulses of width ta/2 occurring in intervals of length ta.]

A sample function from a stationary random process is shown above. The quantity t0 is a random variable that is uniformly distributed from 0 to ta, and the pulse amplitudes are ±A with equal probability and are independent from pulse to pulse. Find the autocorrelation function of this process.

Answer:

Rx(τ) = (A²/2)[1 - 2|τ|/ta]    |τ| ≤ ta/2
      = 0                      elsewhere
6-3 Properties of Autocorrelation Functions
If autocorrelation functions are to play a useful role in representing random processes and in the analysis of systems with random inputs, it is necessary to be able to relate the properties of the autocorrelation function to the properties of the random process it represents. In this section, a number of the properties that are possessed by all autocorrelation functions of stationary and ergodic random processes are summarized. The student should pay particular attention to these properties because they will come up many times in future discussions.
1. Rx(0) = E[X²(t)]. Hence, the mean-square value of the random process can always be obtained simply by setting τ = 0.

It should be emphasized that Rx(0) gives the mean-square value whether the process has a nonzero mean value or not. If the process is zero mean, then the mean-square value is equal to the variance of the process.
2. Rx(τ) = Rx(-τ). The autocorrelation function is an even function of τ.

This is most easily seen, perhaps, by thinking of the time-averaged autocorrelation function, which is the same as the ensemble-averaged autocorrelation function for an ergodic random process. In this case, the time average is taken over exactly the same product function regardless of which direction one of the time functions is shifted. This symmetry property is extremely useful in deriving the autocorrelation function of a random process because it implies that the derivation needs to be carried out only for positive values of τ, and the result for negative τ determined by symmetry. Thus, in the derivation shown in the example in Section 6-2, it would have been necessary to consider only the case for τ ≥ 0. For a nonstationary process, the symmetry property does not necessarily apply.
3. |Rx(τ)| ≤ Rx(0). The largest value of the autocorrelation function always occurs at τ = 0. There may be other values of τ for which it is just as big (for example, see the periodic case below), but it cannot be larger. This is shown easily by considering

E[(X1 ± X2)²] = E[X1²] ± 2E[X1 X2] + E[X2²] ≥ 0

from which

2Rx(0) ± 2Rx(τ) ≥ 0

and thus,

Rx(0) ≥ |Rx(τ)|    (6-7)
4. If X(t) has a dc component or mean value, then Rx(τ) will have a constant component. For example, if X(t) = A, then

Rx(τ) = E[X(t1) X(t1 + τ)] = E[A · A] = A²    (6-8)

More generally, if X(t) has a mean value X̄ and a zero-mean component N(t) so that

X(t) = X̄ + N(t)

then

Rx(τ) = E{[X̄ + N(t1)][X̄ + N(t1 + τ)]}
      = (X̄)² + R_N(τ)    (6-9)

since E[N(t1)] = E[N(t1 + τ)] = 0. Thus, even in this case, Rx(τ) contains a constant component.

For ergodic processes the magnitude of the mean value of the process can be determined by looking at the autocorrelation function as τ approaches infinity, provided that any periodic components in the autocorrelation function are ignored in the limit. Since only the square of the mean value is obtained from this calculation, it is not possible to determine the sign of the mean value. If the process is stationary, but not ergodic, the value of Rx(τ) may not yield any
information regarding the mean value. For example, a random process having sample functions of the form

X(t) = A

where A is a random variable with zero mean and variance σ_A², has an autocorrelation function of

Rx(τ) = σ_A²

for all τ. Thus, the autocorrelation function does not vanish at τ = ∞ even though the process has zero mean. This strange result is a consequence of the process being nonergodic and would not occur for an ergodic process.
5. If X(t) has a periodic component, then Rx(τ) will also have a periodic component, with the same period. For example, let

X(t) = A cos(ωt + θ)

where A and ω are constants and θ is a random variable uniformly distributed over a range of 2π. That is,

f(θ) = 1/(2π)    0 ≤ θ ≤ 2π
     = 0         elsewhere

Then

Rx(τ) = E[A cos(ωt1 + θ) A cos(ωt1 + ωτ + θ)]
      = E[(A²/2) cos(2ωt1 + ωτ + 2θ) + (A²/2) cos ωτ]
      = (A²/2) ∫_{0}^{2π} (1/2π)[cos(2ωt1 + ωτ + 2θ) + cos ωτ] dθ
      = (A²/2) cos ωτ
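This result can be checked numerically. The fragment below (illustrative values, not from the text) uses a time average over one sample function, which is justified here because the random-phase sinusoid is ergodic.

% Check that a random-phase sinusoid has Rx(tau) = (A^2/2) cos(w*tau).
A = 2; w = 2*pi*5;                 % amplitude and angular frequency
dt = 1e-3; t = 0:dt:100;           % long observation interval
theta = 2*pi*rand;                 % the single random phase
x = A*cos(w*t + theta);
tau = 0.05; k = round(tau/dt);     % lag of 50 ms
Rhat = mean(x(1:end-k).*x(1+k:end))    % time-average estimate
Rtheory = (A^2/2)*cos(w*tau)           % ensemble result derived above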
In the more general case, in which

X(t) = A cos(ωt + θ) + N(t)    (6-10)

where θ and N(t1) are statistically independent for all t1, by the method used in obtaining (6-9), it is easy to show that

Rx(τ) = (A²/2) cos ωτ + R_N(τ)    (6-11)

Hence, the autocorrelation function still contains a periodic component.

The above property can be extended to consider random processes that contain any number of periodic components. If the random variables associated with the periodic components are statistically independent, then the autocorrelation function of the sum of the periodic components is simply the sum of the periodic autocorrelation functions of each component. This statement is true regardless of whether the periodic components are harmonically related or not.
If every sample function of the random process is periodic and can be represented by a Fourier series, the resulting autocorrelation function is also periodic and can also be represented by a Fourier series. However, this Fourier series will include more than just the sum of the autocorrelation functions of the individual terms if the random variables associated with the various components of the sample function are not statistically independent. A common situation in which the random variables are not independent is the case in which there is only one random variable for the process, namely a random delay on each sample function that is uniformly distributed over the fundamental period.
6. If X(t) is ergodic and zero mean, and has no periodic components, then

lim_{|τ|→∞} Rx(τ) = 0    (6-12)

For large values of τ, since the effect of past values tends to die out as time progresses, the random variables tend to become statistically independent.
7. Autocorrelation functions cannot have an arbitrary shape. One way of specifying shapes that are permissible is in terms of the Fourier transform of the autocorrelation function. That is, if

ℱ[Rx(τ)] = ∫_{-∞}^{∞} Rx(τ) e^{-jωτ} dτ

then the restriction is

ℱ[Rx(τ)] ≥ 0    for all ω    (6-13)

The reason for this restriction will become apparent after the discussion of spectral density in Chapter 7. Among other things, this restriction precludes the existence of autocorrelation functions with flat tops, vertical sides, or any discontinuity in amplitude.
There is one further point that should be emphasized in connection with autocorrelation functions. Although a knowledge of the joint probability density functions of the random process is sufficient to obtain a unique autocorrelation function, the converse is not true. There may be many different random processes that can yield the same autocorrelation function. Furthermore, as will be shown later, the effect of linear systems on the autocorrelation function of the input can be computed without knowing anything about the probability density functions. Hence, the specification of the correlation function of a random process is not equivalent to the specification of the probability density functions and, in fact, represents a considerably smaller amount of information.
Exercise 6-3.1

a) An ergodic random process has an autocorrelation function of the form

Rx(τ) = 9e^{-4|τ|} + 16 cos 10τ + 16

Find the mean-square value, mean value, and variance of this process.

b) An ergodic random process has an autocorrelation function of the form

Rx(τ) = (25τ² + 36)/(6.25τ² + 6)

Find the mean-square value, mean value, and variance of this process.

Answers: 2, 6, 41, ±2, ±4, 33
Exercise 6-3.2

For each of the following functions of τ, determine the largest value of the constant A for which the function could be a valid autocorrelation function:

a) e^{-4|τ|} - A e^{-2|τ|}

b) e^{-|τ + A|}

c) cos 2τ - A cos 4τ

Answers: 0, 2, 0
6-4 Measurement of Autocorrelation Functions

Since the autocorrelation function plays an important role in the analysis of linear systems with random inputs, an important practical problem is that of determining these functions for experimentally observed random processes. In general, they cannot be calculated from the joint density functions, since these density functions are seldom known. Nor can an ensemble average be made, because there is usually only one sample function from the ensemble available. Under these circumstances, the only available procedure is to calculate a time autocorrelation function for a finite time interval, under the assumption that the process is ergodic.
To illustrate this, assume that a particular voltage or current waveform x(t) has been observed over a time interval from 0 to T seconds. It is then possible to define an estimated correlation function for this particular waveform as

R̂x(τ) = 1/(T - τ) ∫_{0}^{T-τ} x(t) x(t + τ) dt    0 ≤ τ ≪ T    (6-14)

Over the ensemble of sample functions, this estimate is a random variable denoted by R̂x(τ). Note that the averaging time is T - τ rather than T, because this is the only portion of the observed data in which both x(t) and x(t + τ) are available.
In most practical cases it is not possible to carry out the integration called for in (6-14) because a mathematical expression for x(t) is not available. An alternative procedure is to approximate the integral by sampling the continuous time function at discrete instants of time and performing the discrete equivalent to (6-14). Thus, if the samples of a particular sample function are taken at time instants of 0, Δt, 2Δt, ..., NΔt, and if the corresponding values of x(t) are x0, x1, x2, ..., xN, the discrete equivalent to (6-14) is

R̂x(nΔt) = 1/(N - n + 1) Σ_{k=0}^{N-n} x_k x_{k+n}    n = 0, 1, 2, ..., M,  M ≪ N    (6-15)

This estimate is also a random variable over the ensemble and, as such, is denoted by R̂x(nΔt). Since N is quite large (on the order of several thousand), this operation is best performed by a digital computer.
To evaluate the quality of this estimate it is necessary to determine the mean and the variance of R̂x(nΔt), since it is a random variable whose precise value depends upon the particular sample function being used and the particular set of samples taken. The mean is easy to obtain since

E[R̂x(nΔt)] = E[ (1/(N - n + 1)) Σ_{k=0}^{N-n} x_k x_{k+n} ]
            = (1/(N - n + 1)) Σ_{k=0}^{N-n} E[x_k x_{k+n}]
            = (1/(N - n + 1)) Σ_{k=0}^{N-n} Rx(nΔt) = Rx(nΔt)
Thus, the expected value of the estimate is the true value of the autocorrelation function, and this is an unbiased estimate of the autocorrelation function.

Although the estimate described by (6-15) is unbiased, it is not necessarily the best estimate in the mean-square error sense and is not the form that is most commonly used. Instead it is customary to use

R̂x(nΔt) = 1/(N + 1) Σ_{k=0}^{N-n} x_k x_{k+n}    n = 0, 1, 2, ..., M    (6-16)

This is a biased estimate, as can be seen readily from the evaluation of E[R̂x(nΔt)] given above for the estimate of (6-15). Since only the factor by which the sum is divided is different in the present case, the expected value of this new estimate is simply

E[R̂x(nΔt)] = [(N - n + 1)/(N + 1)] Rx(nΔt)

Note that if N ≫ n, the bias is small. Although this estimate is biased, in most cases the total mean-square error is slightly less than for the estimate of (6-15). Furthermore, (6-16) is slightly easier to calculate.
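The two estimates differ only in the divisor, as the following fragment makes explicit (the data record here is hypothetical and used purely for illustration).

% Unbiased (6-15) and biased (6-16) estimates at a single lag n.
x = randn(1, 1001);                % hypothetical data record, N = 1000
N = length(x) - 1;
n = 10;                            % lag index
s = sum(x(1:N-n+1).*x(1+n:N+1));   % sum of x(k)x(k+n) for k = 0,...,N-n
R_unbiased = s/(N - n + 1)         % divisor N - n + 1, per (6-15)
R_biased = s/(N + 1)               % divisor N + 1, per (6-16)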
It is much more difficult to determine the variance of the estimate, and the details of this are beyond the scope of the present discussion. It is possible to show, however, that the variance of the estimate must be smaller than

Var[R̂x(nΔt)] ≤ (2/N) Σ_{k=-M}^{M} Rx²(kΔt)    (6-17)

This expression for the variance assumes that the 2M + 1 estimated values of the autocorrelation function span the region in which the autocorrelation function has a significant amplitude. If the value of (2M + 1)Δt is too small, the variance given by (6-17) may be too small. If the mathematical form of the autocorrelation function is known, or can be deduced from the measurements that are made, a more accurate measure of the variance of the estimate is

Var[R̂x(nΔt)] ≤ (2/T) ∫_{0}^{∞} Rx²(τ) dτ    (6-18)

where T = NΔt is the length of the observed sample.

As an illustration of what this result means in terms of the number of samples required for a given degree of accuracy, suppose that it is desired to estimate a correlation function of the form shown in Figure 6-2 with four points on either side of center (M = 4). If an rms error of 5%³ or less is required, then (6-17) implies that (since ta = 4Δt)
(0.05A²)² ≥ (2/N) Σ_{k=-4}^{4} A⁴ [1 - |k|Δt/(4Δt)]²

This can be solved for N to obtain

N ≥ 2200

It is clear that long samples of data and extensive calculations are necessary if accurate estimates of correlation functions are to be made.

³ This implies that the standard deviation of the estimate should be no greater than 5% of the true maximum value of the autocorrelation function Rx(nΔt).
The Student's Edition of MATLAB does not have a function for computing the autocorrelation function of a vector of data samples. However, there are several ways to readily accomplish the calculation. The one considered here makes use of the convolution function, and a method described in Chapter 7 makes use of the fast Fourier transform. The raw convolution of two vectors of data, a and b, leads to a new vector of data whose elements are of the form

c(k) = Σ_j a(j) b(k - j)

where the summation is taken over all values of j for which a(j) and b(k - j) are valid elements of the vectors of data. The most widely used estimate of the autocorrelation function, i.e., the biased estimate, has elements of the form

R(k) = (1/(N + 1)) Σ_{j=1}^{N-k} a(j) a(j + k)    k = 0, 1, 2, ..., N - 1

Thus the autocorrelation function can be computed by convolution of the data vector with a reversed copy of itself and weighting the result with the factor 1/(N + 1). The following special MATLAB function carries out this calculation.
function [ndt, R] = corb(a, b, f)
% corb.m  biased correlation function
% a, b are equal length sampled time functions
% f is the sampling frequency
% ndt is the lag value for +/- time delays
N = length(a);
R = conv(a, fliplr(b))/(N+1);     % scale of correlation function
ndt = (-(N-1):N-1)*1/f;           % scale of lag values
This function calculates values of R(nΔt) for -(N - 1) ≤ n ≤ (N - 1), for a total of 2N - 1 elements. The maximum value occurs at R(N), corresponding to Rx(0), and the autocorrelation function is symmetrical about this point. As an example of the use of this function, it will be used to estimate the autocorrelation function of a sample of a Gaussian random process. The MATLAB program is straightforward, as follows.
%corxmp1.m  example of autocorrelation calculation
randn('seed', 1000);              % seed the generator to make the result repeatable
x = 10*randn(1, 1001);            % generate random samples
t1 = 0:.001:1;                    % sampling index
[t, R] = corb(x, x, 1000);        % autocorrelation
subplot(2,1,1); plot(t1, x); xlabel('TIME'); ylabel('X')
subplot(2,1,2); plot(t, R); xlabel('LAG'); ylabel('Rx')
The resulting sample function and autocorrelation function are shown in Figure 6-3. It is seen that the autocorrelation function is essentially zero away from the origin, where it is concentrated. This is characteristic of signals whose samples are uncorrelated, as they are in this case. From the program, it is seen that the standard deviation of the random signal is 10 and, therefore, the variance is 100, corresponding to the value at a lag of zero on the graph of the autocorrelation function.
Figure 6-3 Sample function and autocorrelation function of uncorrelated noise.
Consider now an example in which the samples are not uncorrelated. The data vector will be obtained from that used in the previous example by carrying out a running average of the data with the average extending over 51 points. The program that carries out this calculation is as follows.
%corxmp2.m  example 2 of autocorrelation calculation
randn('seed', 1000);
x1 = 10*randn(1, 1001);
h = (1/51)*ones(1, 51);           % 51-point moving-average filter
x2 = conv(x1, h);                 % length of vector is 1001+51-1
x = x2(25:25+1000);               % keep vector length at 1001
t1 = 0:.001:1;                    % sampling index
[t, R] = corb(x, x, 1000);        % autocorrelation
subplot(2,1,1); plot(t1, x); xlabel('TIME'); ylabel('X')
subplot(2,1,2); plot(t, R); xlabel('LAG'); ylabel('Rx')
Figure 6-4 shows the resulting sample function and the autocorrelation function. It is seen that there is considerably more correlation away from the origin and the mean-square value is reduced. The reduction in mean-square value occurs because the convolution with the rectangular function is a type of low-pass filtering operation that eliminates energy from the high-frequency components in the waveform, as can be seen in the upper part of Figure 6-4.

The standard deviation of the autocorrelation estimate in the example of Figure 6-4 can be found using (6-17). The MATLAB program for this is as follows.
%corxmp3.m  calc. of standard deviation of correlation estimate
M = length(R);
V = (2/M)*sum(R.^2);              % variance bound based on (6-17)
S = sqrt(V)

It is evident from the resulting value of S that a much longer sample would be required if a high degree of accuracy were desired.
Figure 6-4 Sample function and autocorrelation function of partially correlated noise.

Exercise 6-4.1

An ergodic random process has an autocorrelation function of the form

Rx(τ) = 10 exp(-2|τ|)

a) Over what range of τ values must the autocorrelation function of this process be estimated in order to include all values of Rx(τ) greater than 1% of the maximum?

b) If 23 estimates (M = 22) of the autocorrelation function are to be made in the interval specified in (a), what should the sampling interval be?

c) How many sample values of the random process are required so that the rms error of the estimate is less than 5% of the true maximum value of the autocorrelation function?

Answers: 0.1, 2.3, 4053
Exercise 6-4.2

Using the variance bound given by the integral of (6-18), find the number of sample points required for the autocorrelation function estimate of Exercise 6-4.1.

Answer: 2000
6-5 Examples of Autocorrelation Functions
Before going on to consider crosscorrelation functions, it is worthwhile to look at some typical autocorrelation functions, suggest the circumstances under which they might arise, and list possible applications. This discussion is not intended to be exhaustive, but is intended primarily to introduce some ideas.

The triangular correlation function shown in Figure 6-2 is typical of random binary signals in which the switching must occur at uniformly spaced time intervals. Such a signal arises in many types of communication and control systems in which the continuous signals are sampled at periodic instants of time and the resulting sample amplitudes converted to binary numbers. The correlation function shown in Figure 6-2 assumes that the random process has a mean value of zero, but this is not always the case. If, for example, the random signal could assume values of A and 0 (rather than ±A), then the process has a mean value of A/2 and a mean-square value of A²/2. The resulting autocorrelation function, shown in Figure 6-5, follows from an application of (6-9).

Not all binary time functions have triangular autocorrelation functions, however. For example, another common type of binary signal is one in which the switching occurs at randomly spaced instants of time. If all times are equally probable, then the probability density function associated with the duration of each interval is exponential, as discussed earlier. The resulting autocorrelation function is also exponential, as shown in Figure 6-6. The usual mathematical representation of such an autocorrelation function is

Rx(τ) = A² e^{-2a|τ|}    (6-19)

where a is the average number of intervals per second.

Binary signals and correlation functions of the type shown in Figure 6-6 frequently arise in connection with radioactive monitoring devices. The randomly occurring pulses at the output of a particle detector are used to trigger a flip-flop circuit that generates the binary signal. This type of signal is a convenient one for measuring either the average time interval between particles or the average rate of occurrence. It is usually referred to in the literature as the random telegraph wave.
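The exponential form of (6-19) can be illustrated by a simple simulation (not from the text; the Poisson switching is approximated on a fine time grid and the parameter values are arbitrary).

% Simulate a random telegraph wave and estimate its autocorrelation.
a = 5;                             % average number of intervals per second
A = 1; dt = 1e-3; N = 200000;      % amplitude, time step, record length
flips = rand(1, N) < a*dt;         % switching instants (Poisson approximation)
x = A*(2*mod(cumsum(flips), 2) - 1);   % waveform toggles between +A and -A
maxlag = 500;                      % lags out to 0.5 second
R = zeros(1, maxlag+1);
for k = 0:maxlag
    R(k+1) = mean(x(1:N-k).*x(1+k:N));   % time-average estimate
end
tau = (0:maxlag)*dt;
plot(tau, R, tau, A^2*exp(-2*a*tau), '--')   % compare with (6-19)
xlabel('LAG'); ylabel('Rx')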
Figure 6-5 Autocorrelation function of a binary process with a nonzero mean value.
Figure 6-6 (a) A binary signal with randomly spaced switching times and (b) the corresponding autocorrelation function.
Nonbinary signals can also have exponential correlation functions. For example, if very wideband noise (having almost any probability density function) is passed through a low-pass RC filter, the signal appearing at the output of the filter will have a nearly exponential autocorrelation function. This result is shown in detail in Chapter 8.

Both the triangular autocorrelation function and the exponential autocorrelation function share one feature that is worth noting. That is, in both cases the autocorrelation function has a discontinuous derivative at the origin. Random processes whose autocorrelation functions have this property are said to be nondifferentiable. A nondifferentiable process is one whose derivative has an infinite variance. For example, if a random voltage having an exponential autocorrelation function is applied to a capacitor, the resulting current is proportional to the derivative of the voltage, and this current would have an infinite variance. Since this does not make sense on a physical basis, the implication is that random processes having truly triangular or truly exponential autocorrelation functions cannot exist in the real world. In spite of this conclusion, which is indeed true, both the triangular and exponential autocorrelation functions provide useful models in many situations. One must be careful, however, not to use these models in any situation in which the derivative of the random process is needed, because the resulting calculation is almost certain to be wrong.
All of the correlation functions discussed so far have been positive for all values of τ. This is not necessary, however, and two common types of autocorrelation functions that have negative regions are given by

Rx(τ) = A² [sin ω1τ/(ω1τ)] cos ω0τ    (6-20)

and

Rx(τ) = A² sin ω1τ/(ω1τ)    (6-21)
and are illustrated in Figure 6-7. The autocorrelation function of (6-20) arises at the output of a narrowband bandpass filter whose input is very wideband noise, while that of (6-21) is typical of the autocorrelation at the output of an ideal low-pass filter. Both of these results will be derived in Chapters 7 and 8.

Figure 6-7 The autocorrelation functions arising at the outputs of (a) a bandpass filter and (b) an ideal low-pass filter.
Although there are many other types of autocorrelation functions that arise in connection with signal and system analysis, the few discussed here are the ones most commonly encountered. The student should refer to the properties of autocorrelation functions discussed in Section 6-3 and verify that all these correlation functions possess those properties.
Exercise 6-5.1

a) Determine whether each of the random processes described by the autocorrelation functions of (6-20) and (6-21) is differentiable.

b) Indicate whether the following statement is true or false: The product of a function that is differentiable at the origin and a function that is nondifferentiable at the origin is always differentiable. Test your conclusion on the autocorrelation function of (6-20).

Answers: Yes, yes, true
Exercise 6-5.2

Which of the following functions of τ cannot be valid mathematical models for autocorrelation functions? Explain why.

[Sketches (a) through (e) of candidate functions of τ.]

Answers: b, c, e are not valid models.
6-6 Crosscorrelation Functions
It is also possible to consider the correlation between two random variables from different random processes. This situation arises when there is more than one random signal being applied to a system or when one wishes to compare random voltages or currents occurring at different points in the system. If the random processes are jointly stationary in the wide sense, and if sample functions from these processes are designated as x(t) and y(t), then for two random variables

X1 = X(t1)
Y2 = Y(t1 + τ)

it is possible to define the crosscorrelation function

Rxy(τ) = E[X1 Y2] = E[X(t1) Y(t1 + τ)]    (6-22)

The order of subscripts is significant; the second subscript refers to the random variable taken at the later time instant, t1 + τ.⁴

There is also another crosscorrelation function that can be defined for the same two time instants. Thus, let

Y1 = Y(t1)
X2 = X(t1 + τ)

and define

Ryx(τ) = E[Y1 X2] = E[Y(t1) X(t1 + τ)]    (6-23)

⁴ This is an arbitrary convention, which is by no means universal with all authors. The definitions should be checked in every case.
Note that because both random processes are assumed to be jointly stationary, these crosscorrelation functions depend only upon the time difference τ.

It is important that the processes be jointly stationary and not just individually stationary. It is quite possible to have two individually stationary random processes that are not jointly stationary. In such a case, the crosscorrelation function depends upon time, as well as the time difference τ.
The time crosscorrelation functions may be defined as before for a particular pair of sample functions as

ℛxy(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) y(t + τ) dt    (6-24)

and

ℛyx(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} y(t) x(t + τ) dt    (6-25)
If the random processes are jointly ergodic, then (6-24) and (6-25) yield the same value for every pair of sample functions. Hence, for ergodic processes,

ℛxy(τ) = Rxy(τ)    (6-26)

ℛyx(τ) = Ryx(τ)    (6-27)

In general, the physical interpretation of crosscorrelation functions is no more concrete than that of autocorrelation functions. It is simply a measure of how much these two random variables depend upon one another. In the later study of system analysis, however, the specific crosscorrelation function between system input and output will take on a very definite and important physical significance.
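As a concrete illustration, the corb function of Section 6-4 also produces crosscorrelation estimates when its two arguments are different waveforms. The fragment below (an illustrative sketch, not from the text) correlates a noise record with a delayed copy of itself; the peak of the estimate occurs at a lag whose magnitude equals the delay, with the sign fixed by corb's lag convention.

% Estimate the crosscorrelation of a noise signal and a delayed copy of it.
fs = 1000; N = 1001;
x = randn(1, N);                   % wideband noise record
k = 100;                           % delay of 0.1 second (100 samples)
y = [zeros(1, k) x(1:N-k)];        % delayed copy of x
[t, Rxy] = corb(x, y, fs);         % crosscorrelation estimate
plot(t, Rxy); xlabel('LAG'); ylabel('Rxy')
[m, im] = max(Rxy);
t(im)                              % magnitude of this lag equals the delay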
Exercise 6-6.1

Two jointly stationary random processes have sample functions of the form

X(t) = 2 cos(5t + θ)

and

Y(t) = 10 sin(5t + θ)

where θ is a random variable that is uniformly distributed from 0 to 2π. Find the crosscorrelation function Rxy(τ) for these two processes.

Answer: 10 sin(5τ)
Exercise 6-6.2

Two sample functions from two random processes have the form

x(t) = 2 cos 5t

and

y(t) = 10 sin 5t

Find the time crosscorrelation function for x(t) and y(t + τ).

Answer: 10 sin(5τ)
6-7 Properties of Crosscorrelation Functions
The general properties of all crosscorrelation functions are quite different from those of autocorrelation functions. They may be summarized as follows:

1. The quantities Rxy(0) and Ryx(0) have no particular physical significance and do not represent mean-square values. It is true, however, that Rxy(0) = Ryx(0).

2. Crosscorrelation functions are not generally even functions of τ. There is a type of symmetry, however, as indicated by the relation

Ryx(τ) = Rxy(-τ)    (6-28)

This result follows from the fact that a shift of y(t) in one direction (in time) is equivalent to a shift of x(t) in the other direction.

3. The crosscorrelation function does not necessarily have its maximum value at τ = 0. It can be shown, however, that

|Rxy(τ)| ≤ [Rx(0) Ry(0)]^{1/2}    (6-29)

with a similar relationship for Ryx(τ). The maximum of the crosscorrelation function can occur anywhere, but it cannot exceed the above value. Furthermore, it may not achieve this value anywhere.
4. If the two random processes are statistically independent, then

Rxy(τ) = E[X1 Y2] = E[X1] E[Y2] = X̄ Ȳ = Ryx(τ)    (6-30)

If, in addition, either process has zero mean, then the crosscorrelation function vanishes for all τ. The converse of this is not necessarily true, however. The fact that the crosscorrelation function is zero and that one process has zero mean does not imply that the random processes are statistically independent, except for jointly Gaussian random variables.
5. If X(t) is a stationary random process and Ẋ(t) is its derivative with respect to time, the crosscorrelation function of X(t) and Ẋ(t) is given by

R_XẊ(τ) = dRx(τ)/dτ    (6-31)

in which the right side of (6-31) is the derivative of the autocorrelation function with respect to τ. This is easily shown by employing the fundamental definition of a derivative,

Ẋ(t) = lim_{ε→0} [X(t + ε) - X(t)]/ε

Hence,

R_XẊ(τ) = E[X(t) Ẋ(t + τ)]
        = E[ lim_{ε→0} (X(t)X(t + τ + ε) - X(t)X(t + τ))/ε ]
        = lim_{ε→0} [Rx(τ + ε) - Rx(τ)]/ε = dRx(τ)/dτ

The interchange of the limit operation and the expectation is permissible whenever Ẋ(t) exists. If the above process is repeated, it is also possible to show that the autocorrelation function of Ẋ(t) is

R_Ẋ(τ) = -d²Rx(τ)/dτ²    (6-32)

where the right side is the second derivative of the basic autocorrelation function with respect to τ.
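A numerical check of (6-31) is easy to make for the random-phase sinusoid, for which Rx(τ) = (A²/2) cos ωτ and hence dRx(τ)/dτ = -(A²ω/2) sin ωτ. The fragment below (illustrative values only, not from the text) estimates R_XẊ(τ) by time averaging.

% Check (6-31) for X(t) = A cos(w t + theta) using time averages.
A = 1; w = 2*pi*2;
dt = 1e-4; t = 0:dt:200;
theta = 2*pi*rand;
x = A*cos(w*t + theta);
xd = -A*w*sin(w*t + theta);        % exact derivative of this sample function
tau = 0.07; k = round(tau/dt);
Rxxd = mean(x(1:end-k).*xd(1+k:end))   % time-average estimate of Rxxd(tau)
Rtheory = -(A^2*w/2)*sin(w*tau)        % derivative of (A^2/2) cos(w*tau)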
It is worth noting that the requirements for the existence of crosscorrelation functions are more relaxed than those for the existence of autocorrelation functions. Crosscorrelation functions are generally not even functions of τ, their Fourier transforms do not have to be positive for all values of ω, and it is not even necessary that the Fourier transforms be real. These latter two points are discussed in more detail in the next chapter.
Exercise 6-7.1

Prove the inequality shown in Equation (6-29). This is most easily done by evaluating the expected value of the quantity

[X1/√Rx(0) ± Y2/√Ry(0)]²

Exercise 6-7.2

Two random processes have sample functions of the form

X(t) = A cos(ω0t + θ)    and    Y(t) = B sin(ω0t + θ)

where θ is a random variable that is uniformly distributed between 0 and 2π, and A and B are constants.

a) Find the crosscorrelation functions Rxy(τ) and Ryx(τ).

b) What is the significance of the values of these crosscorrelation functions at τ = 0?

Answer: (AB/2) sin ω0τ
6-8 Examples and Applications of Crosscorrelation Functions
It is noted previously that one of the applications of crosscorrelation functions is in connection with systems with two or more random inputs. To explore this in more detail, consider a random process whose sample functions are of the form

Z(t) = X(t) ± Y(t)

in which X(t) and Y(t) are also sample functions of random processes. Then defining the random variables as

Z1 = X1 ± Y1 = X(t1) ± Y(t1)
Z2 = X2 ± Y2 = X(t1 + τ) ± Y(t1 + τ)

the autocorrelation function of Z(t) is

Rz(τ) = E[Z1 Z2]
      = E[(X1 ± Y1)(X2 ± Y2)]
      = E[X1X2 + Y1Y2 ± X1Y2 ± Y1X2]
      = Rx(τ) + Ry(τ) ± Rxy(τ) ± Ryx(τ)    (6-33)
This result is easily extended to the sum of any number of random variables. In general, the autocorrelation function of such a sum will be the sum of all the autocorrelation functions plus the sum of all the crosscorrelation functions.

If the two random processes being considered are statistically independent and one of them has zero mean, then both of the crosscorrelation functions in (6-33) vanish and the autocorrelation function of the sum is just the sum of the autocorrelation functions. An example of the importance of this result arises in connection with the extraction of periodic signals from random noise. Let X(t) be a desired signal sample function of the form

X(t) = A cos(ωt + θ)    (6-34)

where θ is a random variable uniformly distributed over (0, 2π). It is shown previously that the autocorrelation function of this process is

Rx(τ) = (1/2)A² cos ωτ

Next, let Y(t) be a sample function of zero-mean random noise that is statistically independent of the signal and specify that it has an autocorrelation function of the form

Ry(τ) = B² e^{-α|τ|}

The observed quantity is Z(t), which from (6-33) has an autocorrelation function of

Rz(τ) = Rx(τ) + Ry(τ)
      = (1/2)A² cos ωτ + B² e^{-α|τ|}    (6-35)

This function is sketched in Figure 6-8 for a case in which the average noise power, B², is much larger than the average signal power, (1/2)A². It is clear from the sketch that for large values of τ, the autocorrelation function depends mostly upon the signal, since the noise autocorrelation function tends to zero as τ tends to infinity. Thus, it should be possible to extract tiny amounts of sinusoidal signal from large amounts of noise by using an appropriate method for measuring the autocorrelation function of the received signal plus noise.
Figure 6-8 Autocorrelation function of sinusoidal signal plus noise.

Another method of extracting a small known signal from a combination of signal and noise is to perform a crosscorrelation operation. A typical example of this might be a radar system that is transmitting a signal X(t). The signal that is returned from any target is a very much smaller version of X(t) and has been delayed in time by the propagation time to the target and back. Since noise is always present at the input to the radar receiver, the total received signal Y(t) may be represented as

Y(t) = aX(t - τ1) + N(t)    (6-36)

where a is a number very much smaller than 1, τ1 is the round-trip delay time of the signal, and N(t) is the receiver noise. In a typical situation the average power of the returned signal, aX(t - τ1), is very much smaller than the average power of the noise, N(t).
The crosscorrelation function of the transmitted signal and the total receiver input is

Rxy(τ) = E[X(t) Y(t + τ)]
       = E[aX(t) X(t + τ - τ1) + X(t) N(t + τ)]
       = aRx(τ - τ1) + R_XN(τ)    (6-37)

Since the signal and noise are statistically independent and have zero mean (because they are RF bandpass signals), the crosscorrelation function between X(t) and N(t) is zero for all values of τ. Thus, (6-37) becomes

Rxy(τ) = aRx(τ - τ1)    (6-38)

Remembering that autocorrelation functions have their maximum values at the origin, it is clear that if τ is adjusted so that the measured value of Rxy(τ) is a maximum, then τ = τ1 and this value indicates the distance to the target.
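A minimal simulation of this idea (an illustrative sketch, not from the text) crosscorrelates a wideband transmitted waveform with a weak, delayed, noisy return and locates the peak of the estimate, in the spirit of (6-38).

% Delay estimation by crosscorrelation, as in the radar example.
fs = 1000; N = 5000;
x = randn(1, N);                   % wideband transmitted signal
k = 250;                           % true round-trip delay: 0.25 second
a = 0.1;                           % attenuation of the returned signal
y = a*[zeros(1, k) x(1:N-k)] + randn(1, N);   % weak delayed return plus noise
maxlag = 500;
Rxy = zeros(1, maxlag+1);
for m = 0:maxlag                   % estimate Rxy at positive lags only
    Rxy(m+1) = sum(x(1:N-m).*y(1+m:N))/N;
end
[pk, im] = max(Rxy);
tau1_hat = (im - 1)/fs             % estimated delay, close to 0.25 second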
In some situations involving two random processes it is possible to observe both the sum and the difference of the two processes, but not each one individually. In this case, one may be interested in the crosscorrelation between the sum and difference as a means of learning something about them. Suppose, for example, that we have available two processes described by

U(t) = X(t) + Y(t)    (6-39)

V(t) = X(t) - Y(t)    (6-40)

in which X(t) and Y(t) are not necessarily zero mean nor statistically independent. The crosscorrelation function between U(t) and V(t) is

R_UV(τ) = E[U(t) V(t + τ)]
        = E{[X(t) + Y(t)][X(t + τ) - Y(t + τ)]}    (6-41)
        = E[X(t)X(t + τ) + Y(t)X(t + τ) - X(t)Y(t + τ) - Y(t)Y(t + τ)]

Each of the expected values in (6-41) may be identified as an autocorrelation function or a crosscorrelation function. Thus,

R_UV(τ) = Rx(τ) + Ryx(τ) - Rxy(τ) - Ry(τ)    (6-42)

In a similar way, the reader may verify easily that the other crosscorrelation function is

R_VU(τ) = Rx(τ) - Ryx(τ) + Rxy(τ) - Ry(τ)    (6-43)

If both X(t) and Y(t) are zero mean and statistically independent, both crosscorrelation functions reduce to the same function, namely

R_UV(τ) = R_VU(τ) = Rx(τ) - Ry(τ)    (6-44)
The actual measurement of crosscorrelation functions can be carried out in much the same way as that suggested for measuring autocorrelation functions in Section 6-4. This type of measurement is still unbiased when crosscorrelation functions are being considered, but the result given in (6-17) for the variance of the estimate is no longer strictly true, particularly if one of the signals contains additive uncorrelated noise, as in the radar example just discussed. Generally speaking, the number of samples required to obtain a given variance in the estimate of a crosscorrelation function is much greater than that required for an autocorrelation function.
To illustrate crosscorrelation computations using the computer, consider the following example. A signal x(t) = 2 sin(100πt + θ) is measured in the presence of Gaussian noise having a bandwidth of 50 Hz and a standard deviation of 5. This corresponds to a signal-to-noise (power) ratio of 0.5 × 2²/5² = 0.08, or -11 dB. This signal is sampled at a rate of 1000 samples per second for 0.5 second, giving 501 samples. These samples are processed in two ways: by computing the autocorrelation function of the signal and by computing the crosscorrelation function of the signal and another deterministic signal, sin(100πt). For purposes of this example it will be assumed that the random variable θ takes on the value of π/4. The following MATLAB program generates the signals, carries out the processing, and plots the results.
% corxmp4.m  crosscorrelation example
T = 0.5; fs = 1000; dt = 1/fs; fo = 50; N = T/dt;
t1 = 0:.001:.5;
x = 2*sin(2*fo*pi*t1 + .25*pi*ones(size(t1)));
randn('seed', 1000);               % seed the generator for repeatability
y1 = randn(1, N+1);
[b, a] = butter(2, 50/500);        % 2nd order 50 Hz LP filter
y = filter(b, a, y1);              % filter noise
y = 5*y/std(y);                    % set the noise standard deviation to 5
z = x + y;
[t2, u] = corb(z, z, fs);          % autocorrelation of signal plus noise
x1 = sin(2*fo*pi*t1);
[t3, v] = corb(z, x1, fs);         % crosscorrelation with the reference sinusoid
subplot(3,1,1); plot(t1, z); xlabel('TIME'); ylabel('z(t)');
subplot(3,1,2); plot(t2, u); xlabel('LAG'); ylabel('Rzz');
subplot(3,1,3); plot(t3, v); xlabel('LAG'); ylabel('Rxz');
The results are shown in Figure 6-9. The autocorrelation function of the signal indicates the possibility of a sinusoidal signal being present, but not distinctly. However, the crosscorrelation function clearly shows the presence of the signal. It would be possible to determine the phase of the sinusoid by measuring the time lag of the peak of the crosscorrelation function from the origin and multiplying by 2π/T, where T is the period of the sinusoid.

Figure 6-9 Signal, autocorrelation function, and crosscorrelation function from corxmp4.
Exercise 6-8.1

A random process has sample functions of the form X(t) = A, in which A is a random variable that has a mean value of 5 and a variance of 10. Sample functions from this process can be observed only in the presence of independent noise having an autocorrelation function of

R_N(τ) = 10 exp(-2|τ|)

a) Find the autocorrelation function of the sum of these two processes.

b) If the autocorrelation function of the sum is observed, find the value of τ at which this autocorrelation function is within 1% of its value at τ = ∞.

Answers: 1.68, 35 + 10e^{-2|τ|}
Exercise 6-8.2

A random binary process such as that described in Section 6-2 has sample functions with amplitudes of ±12 and ta = 0.01. It is applied to the half-wave rectifier circuit shown below.

[Circuit: X(t) drives a series resistance R1 followed by an ideal diode; the output Y(t) is taken across the diode.]

a) Find the autocorrelation function of the output, Ry(τ).

b) Find the crosscorrelation function Rxy(τ).

c) Find the crosscorrelation function Ryx(τ).

Answers: 9 + 9[1 - |τ|/ta], 36[1 - |τ|/ta]
6-9 Correlation Matrices for Sampled Functions
The discussion of correlation thus far has concentrated on only two random variables. Thus, for stationary processes the correlation functions can be expressed as a function of the single variable τ. There are many practical situations, however, in which there may be many random variables and it is necessary to develop some convenient method for representing the many autocorrelations and crosscorrelations that arise. The use of vector notation provides a convenient way of representing a set of random variables, and the product of vectors that is necessary to obtain correlations results in a matrix. It is important, therefore, to discuss some situations in which the vector representation is useful and to describe some of the properties of the resulting correlation matrices.

A situation in which vector notation is useful in representing a signal arises in the case of a single time function that is sampled at periodic time instants. If only a finite number of such samples are to be considered, say N, then each sample value can become a component of an (N × 1) vector. Thus, if the sampling times are t1, t2, ..., tN, the vector representing the time function X(t) may be expressed as

X = [X(t1), X(t2), ..., X(tN)]ᵀ

If X(t) is a sample function from a random process, then each of the components of the vector X is a random variable.
It is now possible to define a correlation matrix that is (N × N) and gives the correlation between every pair of random variables. Thus,

Rx = E[X Xᵀ] = E [ X(t1)X(t1)  X(t1)X(t2)  ...  X(t1)X(tN) ]
                 [ X(t2)X(t1)  X(t2)X(t2)  ...  X(t2)X(tN) ]
                 [     ...                          ...    ]
                 [ X(tN)X(t1)  X(tN)X(t2)  ...  X(tN)X(tN) ]

where Xᵀ is the transpose of X. When the expected value of each element of the matrix is taken, that element becomes a particular value of the autocorrelation function of the random process from which X(t) came. Thus,

Rx = [ Rx(t1, t1)  Rx(t1, t2)  ...  Rx(t1, tN) ]
     [ Rx(t2, t1)  Rx(t2, t2)  ...  Rx(t2, tN) ]    (6-45)
     [     ...                          ...    ]
     [ Rx(tN, t1)  Rx(tN, t2)  ...  Rx(tN, tN) ]
When the random process from which X(t) came is wide-sense stationary, then all the components of Rx become functions of time difference only. If the interval between sample values is Δt, then

t2 = t1 + Δt
t3 = t1 + 2Δt
...
tN = t1 + (N - 1)Δt

and

Rx = [ Rx[0]         Rx[Δt]        ...  Rx[(N-1)Δt] ]
     [ Rx[Δt]        Rx[0]         ...  Rx[(N-2)Δt] ]    (6-46)
     [     ...                              ...     ]
     [ Rx[(N-1)Δt]   Rx[(N-2)Δt]   ...  Rx[0]       ]

where use has been made of the symmetry of the autocorrelation function; that is, Rx[iΔt] = Rx[-iΔt]. Note that as a consequence of the symmetry, Rx is a symmetric matrix (even in the nonstationary case), and that as a consequence of stationarity, the major diagonal (and all diagonals parallel to it) have identical elements.

Although the Rx just defined is a logical consequence of previous definitions, it is not the most customary way of designating the correlation matrix of a random vector consisting of sample values. A more common procedure is to define a covariance matrix, which contains the variances and covariances of the random variables. The general covariance between two random variables is defined as

E{[X(ti) - X̄(ti)][X(tj) - X̄(tj)]} = σi σj ρij    (6-47)

where X̄(ti) = mean value of X(ti)
      X̄(tj) = mean value of X(tj)
      σi² = variance of X(ti)
      σj² = variance of X(tj)
      ρij = normalized covariance coefficient of X(ti) and X(tj)
          = 1, when i = j

The covariance matrix is defined as

Λx = E[(X - X̄)(X - X̄)ᵀ]    (6-48)

where X̄ is the mean value of X. Using the covariance definitions leads immediately to
Λx = [ σ1²        σ1σ2ρ12   ...  σ1σNρ1N ]
     [ σ2σ1ρ21    σ2²       ...  σ2σNρ2N ]    (6-49)
     [    ...                      ...   ]
     [ σNσ1ρN1    σNσ2ρN2   ...  σN²     ]

since ρii = 1 for i = 1, 2, ..., N. By expanding (6-48) it is easy to show that Λx is related to Rx by

Λx = Rx - X̄ X̄ᵀ    (6-50)

If the random process has a zero mean, then Λx = Rx.

The above representation for the covariance matrix is valid for both stationary and nonstationary processes. In the case of a wide-sense stationary process, however, all the variances are the same and the correlation coefficients in a given diagonal are the same. Thus,

σi² = σ²        i = 1, 2, ..., N
ρij = ρ|i-j|    i, j = 1, 2, ..., N

and

Λx = σ² [ 1     ρ1    ρ2   ...  ρN-1 ]
        [ ρ1    1     ρ1   ...  ρN-2 ]    (6-51)
        [ ρ2    ρ1    1    ...  ...  ]
        [ ...                   ρ1   ]
        [ ρN-1  ρN-2  ...  ρ1   1    ]

Such a matrix is said to be Toeplitz.
As an illustration of some of the above concepts, suppose we have a stationary random process whose autocorrelation function is given by

Rx(τ) = 10e^{-|τ|} + 9    (6-52)

To keep the example simple, assume that three random variables separated by Δt = 1 second are to be considered. Thus, N = 3 and Δt = 1. Evaluating (6-52) for τ = 0, 1, 2 yields the values that are needed for the correlation matrix. Thus, the correlation matrix becomes

Rx = [ 19     12.68  10.35 ]
     [ 12.68  19     12.68 ]
     [ 10.35  12.68  19    ]

Since the variance of this process is 10 and its mean value is ±3, the covariance matrix is
Λx = 10 [ 1      0.368  0.135 ]
        [ 0.368  1      0.368 ]
        [ 0.135  0.368  1     ]
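Matrices of this kind are easy to generate numerically. The fragment below (a sketch using the values of this example) builds Rx and Λx with MATLAB's toeplitz function.

% Correlation and covariance matrices for Rx(tau) = 10*exp(-|tau|) + 9,
% with three samples spaced 1 second apart.
tau = 0:2;                         % lags of 0, 1, and 2 seconds
r = 10*exp(-tau) + 9;              % first row of the correlation matrix
Rx = toeplitz(r)                   % symmetric Toeplitz correlation matrix
Lx = Rx - 9                        % subtract (mean)^2 = 9, per (6-50)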
Another situation in which the use of vector notation is convenient arises when the random variables come from different random processes. In this case, the vector representing all the random variables may be written as

X(t) = [X1(t), X2(t), ..., XN(t)]ᵀ

The correlation matrix is now defined as

Rx(τ) = E[X(t) Xᵀ(t + τ)]
      = [ R1(τ)    R12(τ)  ...  R1N(τ) ]
        [ R21(τ)   R2(τ)   ...  R2N(τ) ]    (6-53)
        [   ...                   ...  ]
        [ RN1(τ)   RN2(τ)  ...  RN(τ)  ]

in which

Ri(τ) = E[Xi(t) Xi(t + τ)]
Rij(τ) = E[Xi(t) Xj(t + τ)]

Note that in this case, the elements of the correlation matrix are functions of τ rather than numbers, as they were in the case of the correlation matrix associated with samples taken from a single random process. Situations in which such a correlation matrix might arise occur in connection with antenna arrays or arrays of seismic detectors. In such systems, the noise signals at each antenna element, or each seismic detector, may be from different, but correlated, random processes.

Before we leave the subject of covariance matrices, it is worth noting the important role that these matrices play in connection with the joint probability density function for N random variables from a Gaussian process. It was noted earlier that the Gaussian process was one of the few for which it is possible to write a joint probability density function for any number of random variables. The derivation of this joint density function is beyond the scope of this discussion, but it can be shown that it becomes

f(x) = f[x(t1), x(t2), ..., x(tN)]
     = 1/[(2π)^{N/2} |Λx|^{1/2}] exp[-(1/2)(x - x̄)ᵀ Λx⁻¹ (x - x̄)]    (6-54)

where |Λx| is the determinant of Λx and Λx⁻¹ is its inverse.
The concept of correlation matrices can also be extended to represent crosscorrelation functions. Suppose we have two random vectors X(t) and Y(t), where each vector contains N random variables. Thus, let

X(t) = [ X_1(t)
         X_2(t)
         ⋮
         X_N(t) ]

and

Y(t) = [ Y_1(t)
         Y_2(t)
         ⋮
         Y_N(t) ]
By analogy to (6-53) the crosscorrelation matrix can be defined as

R_xy(τ) = E[X(t) Y^T(t + τ)]   (6-55)

where now

R_ij(τ) = E[X_i(t) Y_j(t + τ)]
In many situations the vector of random processes Y(t) is the sum of the vector X(t) and a statistically independent noise vector N(t) that has zero mean. In this case, (6-55) reduces to the autocorrelation matrix of (6-53) because the crosscorrelation between X(t) and N(t) is identically zero. There are other situations in which the elements of the vector Y(t) are time-delayed versions of a single random process X(t). Also, unlike the autocorrelation matrix of (6-53), it is not necessary that X(t) and Y(t) have the same number of dimensions. If X(t) is a column vector of size M and Y(t) is a column vector of size N, the crosscorrelation matrix will be an M × N matrix instead of a square matrix. This type of matrix may arise if X(t) is the single wideband random input to a system and the vector Y(t) is composed of responses at various points in the system. As discussed further in a subsequent chapter, the crosscorrelation matrix, which is now a 1 × N row vector, can be interpreted as the set of impulse responses at these various points.
Exercise 6-9.1

A random process has an autocorrelation function of the form

R_X(τ) = 10 e^{-|τ|} cos 2πτ

Write the correlation matrix associated with four random variables defined for time instants separated by 0.5 second.

Answers: Elements in the first row include 3.677, 2.228, 10.0, 6.064
Exercise 6-9.2

A covariance matrix for a stationary random process has the form

[ 1     0.6   0.4   _
  _     1     0.6   0.4
  0.4   0.6   _     0.6
  0.2   _     0.6   1   ]

Fill in the blank spaces in this matrix.

Answers: 1, 0.6, 0.2, 0.4
PROBLEMS
6-1.1 A stationary random process having sample functions of X(t) has an autocorrelation function of
R_X(τ) = 5 e^{-5|τ|}
Another random process has sample functions of

Y(t) = X(t) + b X(t - 0.1)

a) Find the value of b that minimizes the mean-square value of Y(t).

b) Find the value of the minimum mean-square value of Y(t).

c) If |b| ≤ 1, find the maximum mean-square value of Y(t).
6-1.2 For each of the autocorrelation functions given below, state whether the process it represents might be wide-sense stationary or cannot be wide-sense stationary.
a) R_X(t_1, t_2) = sin(t_1 - t_2)

b) R_X(t_1, t_2) = sin t_1 sin t_2

c) R_X(t_1, t_2) = cos t_1 cos t_2 + sin t_1 sin t_2

d) R_X(t_1, t_2) = cos t_1 cos t_2 - sin t_1 sin t_2
6-2.1 Consider a stationary random process having sample functions of the form shown below:
[Figure: a sample function X(t) made up of unit-height rectangular pulses that may occur in successive intervals beginning at t_0 + T, t_0 + 2T, …]
At periodic time instants t_0 + nT, a rectangular pulse of unit height and width t_1 may appear, or not appear, with equal probability and independently from interval to interval. The time t_0 is a random variable that is uniformly distributed over the period T, and t_1 ≤ T/2.
a) Find the mean value and the mean-square value of this process.
b) Find the autocorrelation function of this process.
6-2.2 Find the time autocorrelation function of the sample function in Problem 6-2.1.
6-2.3 Consider a stationary random process having sample functions of the form

X(t) = Σ_{k=-∞}^{∞} A_k f(t - kT - t_0)

in which the A_k are independent random variables that are +1 or -1 with equal probability and t_0 is a random variable that is uniformly distributed over the period T. Define a function

G(τ) = ∫_{-∞}^{∞} f(t) f(t + τ) dt

and express the autocorrelation function of the process in terms of G(τ).
6-3.1 Which of the functions shown below cannot be valid autocorrelation functions? For each case explain why it is not an autocorrelation function.
[Figure: four candidate functions g(τ), panels (a) through (d), each plotted over the interval -2 ≤ τ ≤ 2]
6-3.2 A random process has sample functions of the form
X(t) = A cos(ωt + θ)
in which A, ω, and θ are statistically independent random variables. Assume that A has a mean value of 3 and a variance of 9, that θ is uniformly distributed from -π to π, and that ω is uniformly distributed from -6 to +6.
a) Is this process stationary? Is it ergodic?
b) Find the mean and mean-square value of the process.
c) Find the autocorrelation function of the process.
6-3.3 A stationary random process has an autocorrelation function of the form
R_X(τ) = 100 e^{-τ²} cos 2πτ + 10 cos 6πτ + 36
a) Find the mean value, mean-square value, and the variance of this process.
b) What discrete frequency components are present?
c) Find the smallest value of τ for which the random variables X(t) and X(t + τ) are uncorrelated.
6-3.4 Consider a function of τ of the form
w(τ) = 1 - |τ|/T,   |τ| ≤ T
     = 0,           |τ| > T
Take the Fourier transform of this function and show that it is a valid autocorrelation function.
6-4.1 A stationary random process is sampled at time instants separated by Δt seconds. The sample values are
[Table: 21 tabulated sample values X_k for k = 0, 1, …, 20; the numerical entries are not recoverable in this copy.]
a) Find the sample mean.
b) Find the estimated autocorrelation function R̂(nΔt) for n = 0, 1, 2, …, 20 using equation (6-16).

c) Repeat (b) using equation (6-17).
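One way such estimates might be computed is sketched below (this code is not from the original text; x is a placeholder for the tabulated data, and the two normalizations shown are the usual ones):

% Sample autocorrelation estimates from N data points in x
x = randn(1, 21);                % placeholder for the tabulated values
N = length(x); nmax = 20;
Rhat1 = zeros(1, nmax+1);
Rhat2 = zeros(1, nmax+1);
for n = 0:nmax
    p = x(1:N-n).*x(1+n:N);      % lag-n products x(k)x(k+n)
    Rhat1(n+1) = sum(p)/(N - n); % normalized by the number of products
    Rhat2(n+1) = sum(p)/N;       % normalized by the total number of samples
end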
6-4.2 a) For the data of Problem 6-4.1, find an upper bound on the variance of the estimated autocorrelation function using the estimated values of part (b).

b) Repeat (a) using the estimated values of part (c).
6-4.3 An ergodic random process has an autocorrelation function of the form

R_X(τ) = 10 sinc²(τ)
a) Over what range of τ values must the autocorrelation function of this process be estimated in order to include the first two zeros of the autocorrelation function?

b) If 21 estimates (M = 20) of the autocorrelation are to be made in the interval specified in (a), what should the sampling interval be?
c) How many sample values of the random process are required so that the rms error of the estimate is less than 5 percent of the true maximum value of the autocorrelation function?
6-4.4 Assume that the true autocorrelation function of the random process from which the data of Problem 6-4.1 comes has the form

R_X(τ) = A(1 - |τ|/T),   |τ| ≤ T

and is zero elsewhere.
a) Find the values of A and T that provide the best fit to the estimated autocorrelation function values of Problem 6-4.1(b) in the least-mean-square sense. (See Sec. 4.)

b) Using the results of part (a) and equation (6-18), find another upper bound on the variance of the estimate of the autocorrelation function. Compare with the result of Problem 6-4.2(a).
6-4.5 A random process has an autocorrelation function of the form
R_X(τ) = 10 e^{-5|τ|} cos 20τ
If this process is sampled every 0.01 second, find the number of samples required to estimate the autocorrelation function with a standard deviation that is no more than 1% of the variance of the process.
6-4.6 The following MATLAB program generates 2000 samples of a bandlimited noise process. Make a plot of the sample function and the time autocorrelation function of the process. Make an expanded plot of the autocorrelation function for lag values of ±0.1 second around the origin. The sampling rate is 1000 Hz.
x = randn(1, 2000);
[b, a] = butter(4, 20/500);
y = filter(b, a, x);
y = y/std(y);
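The required time autocorrelation can be formed without any toolbox functions by convolving y with its time reverse; the following sketch (not part of the original problem) also produces the expanded plot:

% Time autocorrelation of y via convolution with its time reverse
fs = 1000;                                % sampling rate in Hz
Ry = conv(y, fliplr(y))/length(y);        % lag products normalized by N
lags = (-(length(y)-1):length(y)-1)/fs;   % lag axis in seconds
plot(lags, Ry)                            % full autocorrelation estimate
k = abs(lags) <= 0.1;
figure; plot(lags(k), Ry(k))              % expanded view near the origin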
6-4.7 Use the computer to make plots of the time autocorrelation functions of the following deterministic signals:

a) rect(40t)

b) sin(20πt) rect(40t)
c) cos(20πt) rect(40t)
6-5.1 Consider a random process having sample functions of the form shown in Figure 6-4(a) and assume that the time intervals between switching times are independent, exponentially distributed random variables. (See Sec. 2-7.) Show that the autocorrelation function of this process is a two-sided exponential as shown in Figure 6-4(b).
6-5.2 Suppose that each sample function of the random process in Problem 6-5.1 is switching between 0 and 2A instead of between ±A. Find the autocorrelation function of the process now.
6-5.3 Determine the mean value and the variance of each of the random processes having the following autocorrelation functions:
a) 10 e^{-τ²}

b) 10 e^{-τ²} cos 2πτ

c) (τ² + 8)/(τ² + 4)
6-5.4 Consider a random process having an autocorrelation function of

[autocorrelation function not legible in this copy]

a) Find the mean and variance of this process.

b) Is this process differentiable? Why?
6-7.1 Two independent stationary random processes having sample functions of X(t) and Y(t) have autocorrelation functions of

R_X(τ) = 25 e^{-10|τ|} cos 100πτ

and

R_Y(τ) = 16 (sin 50πτ)/(50πτ)

a) Find the autocorrelation function of X(t) + Y(t).

b) Find the autocorrelation function of X(t) - Y(t).

c) Find both crosscorrelation functions of the two processes defined by (a) and (b).
d) Find the autocorrelation function of X(t)Y(t).
6-7.2 For the two processes of Problem 6-7.1(c) find the maximum value that the crosscorrelation functions can have, using the bound of equation (6-29). Compare this bound with the actual maximum values that these crosscorrelation functions have.
6-7.3 A stationary random process has an autocorrelation function of

R_X(τ) = (sin τ)/τ

a) Find R_XẊ(τ).

b) Find R_Ẋ(τ).
6-7.4 Two stationary random processes have a crosscorrelation function of

R_XY(τ) = 16 e^{-(τ-1)²}

Find the crosscorrelation function of the derivative of X(t) and Y(t). That is, find R_ẊY(τ).
6-8.1 A sinusoidal signal has the form

X(t) = 0.01 sin(100t + θ)

in which θ is a random variable that is uniformly distributed between -π and π. This signal is observed in the presence of independent noise whose autocorrelation function is

[noise autocorrelation function not legible in this copy]
a) Find the value of the autocorrelation function of the sum of signal and noise at τ = 0.

b) Find the smallest value of τ for which the peak value of the autocorrelation function of the signal is 10 times larger than the autocorrelation function of the noise.
6-8.2 One way of detecting a sinusoidal signal in noise is to use a correlator. In this device, the incoming signal plus noise is multiplied by a locally generated reference signal having the same form as the signal to be detected, and the average value of the product is extracted with a low-pass filter. Suppose the signal and noise of Problem 6-8.1 are multiplied by a reference signal of the form
r(t) = 10 cos(100t + φ)

The product is

Z(t) = r(t)[X(t) + N(t)]

a) Find the expected value of Z(t), where the expectation is taken with respect to the noise and φ is assumed to be a fixed, but unknown, value.

b) For what value of φ is the expected value of Z(t) the greatest?
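The behavior asked for in part (b) can be sketched numerically as follows (the phase value and record length are assumed, and a long time average stands in for the low-pass filter):

% Correlator sketch: average of Z(t) = r(t)[X(t) + N(t)] versus phi
fs = 1000; T = 100; t = (0:T*fs-1)/fs;    % long record to suppress noise
th = 0.7;                                 % assumed (unknown) signal phase
x = 0.01*sin(100*t + th);                 % signal of Problem 6-8.1
n = randn(size(t));                       % unit-variance noise
phi = linspace(-pi, pi, 25);
zbar = zeros(size(phi));
for k = 1:length(phi)
    r = 10*cos(100*t + phi(k));           % local reference
    zbar(k) = mean(r.*(x + n));           % low-pass extraction of the mean
end
plot(phi, zbar)   % follows 0.05 sin(th - phi), peaking at phi = th - pi/2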
6-8.3 Detection of a pulse of sinusoidal oscillation in the presence of noise can be accomplished by crosscorrelating the signal plus noise with a pulse signal at the same frequency as the sinusoid. The following MATLAB program generates 1000 samples of a random process with such a pulse. The sampling frequency is 1000 Hz. Compute the crosscorrelation function of this signal and a sinusoidal pulse, sin(100πt), that is 100 ms long. (Hint: Convolve a 100-ms reversed sinusoidal pulse with the signal using the MATLAB commands fliplr and conv.)
%P6_8_3
t1 = 0.0:0.001:0.099;
s1 = sin(100*pi*t1);
s = zeros(1, 1000);
s(700:799) = s1;
randn('seed', 1000)
n1 = randn(1, 1000);
x = s + n1;
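Following the hint, one possible sketch of the computation (using the variables created by the program above) is

% Crosscorrelate x with the 100-ms sinusoidal pulse s1
c = conv(x, fliplr(s1))/length(s1);   % crosscorrelation via conv and fliplr
plot(c)                               % the peak marks the pulse location
[cmax, imax] = max(c)                 % index of the correlation peak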
6-8.4 Use the computer to make plots of the time crosscorrelation functions of the following pairs of signals.

a) x(t) = rect(400t)   y(t) = sin(2000πt) rect(400t)

b) x(t) = sin(2000πt) rect(400t)   y(t) = cos(2000πt) rect(400t)
6-8.5 Assume X(t) is a zero mean, stationary Gaussian random process. Let X_1 = X(t_1) and X_2 = X(t_2) be samples of the process at t_1 and t_2 having a correlation coefficient of

ρ = E[X_1 X_2]/R_X(0)

Further let Y_1 = g_1(X_1) and Y_2 = g_2(X_2) be random variables obtained from X_1 and X_2 by deterministic (not necessarily linear) functions g_1(·) and g_2(·). Then an important result from probability theory called Price's Theorem relates the correlation function of Y_1 and Y_2 to ρ in the following manner:

d^n R_Y/dρ^n = R_X^n(0) E[(d^n g_1(X_1)/dX_1^n)(d^n g_2(X_2)/dX_2^n)]

This theorem can be used to evaluate readily the correlation function of Gaussian random processes after certain nonlinear operations. Consider the case of hard limiting such that

g_1(X) = g_2(X) = +1,   X > 0
                = -1,   X < 0

a) Using n = 1 in Price's Theorem show that

R_Y(t_1, t_2) = (2/π) sin^{-1}(ρ)   or   ρ = sin[(π/2) R_Y(t_1, t_2)]

b) Show how R_Y(t_1, t_2) can be computed without carrying out multiplication by using an "exclusive or" circuit. This procedure is called polarity coincidence correlation.
6-8.6 It is desired to estimate the time delay between the occurrence of a zero mean, stationary Gaussian random process and an echo of that process. The problem is complicated by the presence of an additive noise that is also a zero mean, stationary Gaussian random process. Let X(t) be the original process and Y(t) = aX(t - τ) + N(t) be the echo with relative amplitude a, time delay τ, and noise N(t). The following MATLAB M-file generates samples of the signals X(t) and Y(t). It can be assumed that the signals are white, bandlimited signals sampled at a 1 MHz rate.
%P6_8_6.m
clear w; clear y
randn('seed', 2000)
g = round(200*sqrt(pi));
z = randn(1, 10000 + g);
y = sqrt(0.1)*z(g:10000+g-1) + randn(1, 10000);  % -10 dB SNR
x = z(1:10000);
a) Write a program to find τ using the peak of the correlation function found using polarity coincidence correlation. (Hint: use the sign function and the == operator to make a polarity coincidence correlator.)

b) Estimate the value of a given that the variance of X(t) is unity.
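One possible sketch for part (a), using sign and the == operator as the hint suggests (the search range of lags is an assumed value):

% Polarity coincidence correlator: count sign agreements at each lag
xs = sign(x); ys = sign(y);
maxlag = 500;                              % assumed search range in samples
pcc = zeros(1, maxlag+1);
for m = 0:maxlag
    agree = (xs(1+m:end) == ys(1:end-m));  % coincidences with x shifted by m
    pcc(m+1) = 2*mean(agree) - 1;          % agreement fraction as correlation
end
[pmax, imax] = max(pcc);
tau_hat = (imax - 1)/1e6                   % delay estimate in seconds (1 MHz rate)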
6-8.7 Vibration sensors are mounted on the front and rear axles of a moving vehicle to pick up the random vibrations due to the roughness of the road surface. The signal from the front sensor may be modeled as

X(t) = S(t) + N_1(t)

where the signal S(t) and the noise N_1(t) are from independent random processes. The signal from the rear sensor is modeled as

Y(t) = S(t - τ_1) + N_2(t)

where N_2(t) is noise that is independent of both S(t) and N_1(t). All processes have zero mean. The delay τ_1 depends upon the spacing of the sensors and the speed of the vehicle.
a) If the sensors are placed 5 m apart, derive a relationship between τ_1 and the vehicle speed V.

b) Sketch a block diagram of a system that can be used to measure vehicle speed over a range of 5 m per second to 50 m per second. Specify the maximum and minimum delay values that are required if an analog correlator is used.

c) Why is there a minimum speed that can be measured this way?

d) If a digital correlator is used, and the signals are each sampled at a rate of 12 samples per second, what is the maximum vehicle speed that can be measured?
6-8.8 The angle to distant stars can be measured by crosscorrelating the outputs of two widely separated antennas and measuring the delay required to maximize the crosscorrelation function. The geometry to be considered is shown below. In this system, the distance between antennas is nominally 500 m, but has a standard deviation of 0.01 m. It is desired to measure the angle θ with a standard deviation of no more than 1 milliradian for any θ between 0 and 1.4 radians. Find an upper bound on the standard deviation of the delay measurement in order to accomplish this. (Hint: Use the total differential to linearize the relation.)
[Figure: two antennas separated by distance d receive a plane wave arriving at angle θ; the antenna outputs are s(t) + n_1(t) and s(t - τ_1) + n_2(t)]
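A sketch of the required linearization, assuming the delay relation τ = (d/c) sin θ with c = 3 × 10^8 m/s: the total differential of cτ = d sin θ gives σ_θ² = [cσ_τ/(d cos θ)]² + [tan θ σ_d/d]², which can be solved for the allowable σ_τ at each θ:

% Total-differential error budget for tau = (d/c)*sin(theta)
c = 3e8; d = 500; sig_d = 0.01;   % assumed wave speed; given geometry
sig_th = 1e-3;                    % allowed angle standard deviation in radians
theta = linspace(0, 1.4, 200);
sig_tau = (d*cos(theta)/c).*sqrt(sig_th^2 - (tan(theta)*sig_d/d).^2);
bound = min(sig_tau)              % the bound must hold for every theta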
6-9.1 A stationary random process having an autocorrelation function of

R_X(τ) = 36 e^{-2|τ|} cos 2πτ

is sampled at periodic time instants separated by 0.5 second. Write the covariance matrix for four consecutive samples taken from this process.
6-9.2 A Gaussian random vector X has a covariance matrix of

Λ = [ 1     0.5   0
      0.5   1     0.5
      0     0.5   1   ]

Find the expected value, E[X^T Λ^{-1} X].
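A quick Monte Carlo sketch (it assumes the vector has zero mean and uses the covariance matrix as written above) indicates the value to expect:

% Estimate E[X' inv(Lambda) X] for a zero-mean Gaussian vector
Lx = [1 0.5 0; 0.5 1 0.5; 0 0.5 1];
L = chol(Lx, 'lower');            % factor so that L*L' = Lx
M = 1e5; acc = 0;
for k = 1:M
    X = L*randn(3, 1);            % sample vector with covariance Lx
    acc = acc + X'*(Lx\X);        % quadratic form; backslash avoids inv
end
acc/M                             % approaches the trace of the 3 x 3 identity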
6-9.3 A transversal filter is a tapped delay line with the outputs from the various taps weighted and summed as shown below.

[Figure: tapped delay line driven by X(t), with the weighted tap outputs summed to form Y(t)]
If the delay between taps is Δ, the outputs from the taps can be expressed as a vector by

X(t) = [ X(t)
         X(t - Δ)
         ⋮
         X(t - NΔ) ]
Likewise, the weighting factors on the various taps can be written as a vector

a = [ a_0
      a_1
      ⋮
      a_N ]
a) Write an expression for the output of the transversal filter, Y(t), in terms of the vectors X(t) and a.

b) If X(t) is from a stationary random process with an autocorrelation function of R_X(τ), write an expression for the autocorrelation function R_Y(τ).
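As a sketch of this vector notation (the tap weights and input are assumed; four taps, so N = 3), the output at each instant is the inner product of a with the tap vector:

% Transversal filter output formed as an inner product at each instant
a = [1; 1; 1; 1];                 % assumed tap weights
x = randn(1, 1000);               % input samples spaced Delta apart
y = zeros(1, 1000);
for k = 4:1000
    Xt = x(k:-1:k-3)';            % [X(t); X(t-Delta); ...; X(t-3*Delta)]
    y(k) = a'*Xt;                 % Y(t) = a'X(t)
end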
6-9.4 Let the input to the transversal filter of Problem 6-9.3 have an autocorrelation function of

R_X(τ) = 1 - |τ|/Δ,   |τ| ≤ Δ

and zero elsewhere.
a) If the transversal filter has 4 taps (i.e., N = 3) and the weighting factor for each tap is a_i = 1 for all i, determine and sketch the autocorrelation function of the output.

b) Repeat part (a) if the weighting factors are a_i = 4 - i, i = 0, 1, 2, 3.
References

See the references for Chapter 1. Of particular interest for the material of this chapter are the books by Davenport and Root, Helstrom, and Papoulis.