Least Squares

Published on December 2016

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. When the problem has substantial uncertainties in the independent variable (the x variable), simple regression and least squares methods run into problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares.


1 Least Square Estimate

The least squares estimation method finds the estimate that minimizes the
following cost function (error). A major reason least squares is so popular
is its ease of computation (low computational complexity). The method of
least squares estimates parameters by minimizing the squared difference
between the observed data and their expected values.
In least squares, the parameters to be estimated must appear in the
expression for the mean of the observations. For example,

E(y) = θ_1 x_1 + θ_2 x_2 + ... + θ_p x_p,

where x_1, x_2, ..., x_p are known variables and θ_1, θ_2, ..., θ_p are
unknown parameters. The observation model is

y = θ_1 x_1 + θ_2 x_2 + ... + θ_p x_p + ε,

where ε is the error. Now suppose we measure y a total of N times, obtaining
y_i for i = 1, 2, ..., N with corresponding known values x_{i1}, x_{i2}, ..., x_{ip},
so that

y_i = θ_1 x_{i1} + θ_2 x_{i2} + ... + θ_p x_{ip} + ε_i.

Using these N equations, we want to estimate θ_1, θ_2, ..., θ_p so as to
minimize the error function. The principle of least squares chooses as
estimates of θ_1, θ_2, ..., θ_p those values which minimize
S(θ_1, θ_2, ..., θ_p) = Σ_{i=1}^{N} (y_i − θ_1 x_{i1} − θ_2 x_{i2} − ... − θ_p x_{ip})^2.

Each term of S is simply the squared deviation of an observation from its expected value, (y_i − E(y_i))^2.
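As a sketch of the computation, the minimizing θ can be obtained in closed form from the normal equations. The data, dimensions, and parameter values below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: N = 50 observations of p = 3 known variables
N, p = 50, 3
X = rng.standard_normal((N, p))              # rows are (x_i1, ..., x_ip)
theta_true = np.array([2.0, -1.0, 0.5])      # unknown parameters to recover
y = X @ theta_true + 0.1 * rng.standard_normal(N)   # y_i = sum_k theta_k x_ik + eps_i

# Minimize S(theta) = sum_i (y_i - theta_1 x_i1 - ... - theta_p x_ip)^2
# via the normal equations: (X^T X) theta = X^T y
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(theta_hat)   # close to theta_true
```

Up to numerical conditioning, `np.linalg.lstsq(X, y)` returns the same estimate without forming X^T X explicitly.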

2 An LS Frequency Offset Estimator

In our case we estimate the frequency offset f and the phase parameters
φ_1, ..., φ_{K−1} by minimizing the mean square error between the phase
angles of the received symbols. The phase angle of a received symbol is
obtained by applying the argument function to it. The received symbol is
given as

r_{i+nK} = t_i e^{j(φ_i + 2πf nK)} + w_{i+nK}.
The argument of a complex number z is written arg(z). For example, for
z = 2 + 2√3 i,

tan θ = 2√3 / 2 = √3,

θ = arctan(√3) = π/3.

Hence arg(2 + 2√3 i) = π/3.
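This worked example can be checked directly with Python's `cmath.phase`, which returns the argument of a complex number in (−π, π]:

```python
import cmath
import math

z = 2 + 2 * math.sqrt(3) * 1j   # the example number 2 + 2*sqrt(3)*i
theta = cmath.phase(z)          # arg(z) = arctan(sqrt(3))
print(theta)                    # approximately pi/3 (about 1.0472)
```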
So the phase of a received symbol can be given as

arg(r_{i+nK}) = arg(t_i e^{j(φ_i + 2πf nK)} + w_{i+nK}).

For simplicity we denote this phase angle by α_{i+nK}:

α_{i+nK} = arg(t_i e^{j(φ_i + 2πf nK)} + w_{i+nK}).

Without the noise term, the angle of a received symbol is

arg(t_i e^{j(φ_i + 2πf nK)}) = φ_i + 2πf nK.
These angles are illustrated in figure 2. Since the phase of w_{i+nK} is
uniformly distributed in the range (−π, π), the angle

β_{i+nK} = arg(w_{i+nK}) − arg(t_i e^{j(φ_i + 2πf nK)})   (1)

is also distributed in the same range (−π, π). From this we can define the
phase error, which we will try to minimize, as

δ_{i+nK} = α_{i+nK} − (φ_i + 2πf nK).   (2)

This is just the angle between the received symbol with and without noise.
It is caused by the additive noise random variables, which are symmetric
about zero, hence its expectation is zero. This implies

E(α_{i+nK}) = E(φ_i + 2πf nK + δ_{i+nK}) = φ_i + 2πf nK.   (3)
The estimates of f̂ obtained from the individual sequences of α_{i+nK} will
be combined according to the variances of the corresponding errors δ_{i+nK}.
So we need the variances of δ_{i+nK} in order to calculate the estimates of
the frequency offset f and the phase parameters φ_1, ..., φ_{K−1}. Now,

δ_{i+nK} = arctan( |w_{i+nK}| sin β_{i+nK} / (t_i + |w_{i+nK}| cos β_{i+nK}) ).   (4)

This can be obtained from the trigonometric rules of the tangent. At high
SNR, |w_{i+nK}| << t_i, so

δ_{i+nK} ≈ arctan( |w_{i+nK}| sin β_{i+nK} / t_i ) ≈ |w_{i+nK}| sin β_{i+nK} / t_i.   (5)

Moreover, E(sin^2 β_{i+nK}) = 1/2, which is obtained by taking the
expectation of the squared sine over the range (−π, π). Hence,

E(|δ_{i+nK}|^2) = |w_{i+nK}|^2 / (2 t_i^2).   (6)
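The variance formula (6) can be checked by Monte Carlo simulation of the phase error in (4) with β drawn uniformly on (−π, π); the amplitude and noise magnitude below are illustrative values chosen to sit in the high-SNR regime:

```python
import numpy as np

rng = np.random.default_rng(2)

t_i = 1.0       # illustrative symbol amplitude
w_mag = 0.01    # illustrative |w_{i+nK}|, high SNR: |w| << t_i
beta = rng.uniform(-np.pi, np.pi, 200_000)   # beta uniform on (-pi, pi)

# Phase error from (4): delta = arctan(|w| sin(beta) / (t_i + |w| cos(beta)))
delta = np.arctan(w_mag * np.sin(beta) / (t_i + w_mag * np.cos(beta)))

empirical = np.mean(delta ** 2)              # Monte Carlo E(|delta|^2)
predicted = w_mag ** 2 / (2 * t_i ** 2)      # formula (6)
print(empirical, predicted)                  # agree to within sampling error
```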
This error analysis motivates the weighted least squares estimator described
below. Consider weight parameters u_i, each the inverse of Var(δ_{i+nK}).
The frequency offset f and the phase parameters φ_1, ..., φ_{K−1} are then
estimated by minimizing the error function

ε = Σ_{i=0}^{K−1} u_i Σ_{n=0}^{N/K−1} (α_{i+nK} − φ_i − 2πf nK)^2.   (7)

Minimizing this error function over f and φ_1, ..., φ_{K−1} gives the
estimates of f and φ_1, ..., φ_{K−1}.
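A minimal sketch of this weighted least squares estimator follows. All numeric values (amplitudes, offset, noise level) are illustrative, chosen so the phase angles stay within (−π, π) and no phase unwrapping is needed; since the model is linear in (f, φ_i), weighting each row by the square root of u_i and solving an ordinary least squares problem is equivalent to minimizing (7) directly.

```python
import numpy as np

rng = np.random.default_rng(3)

K, M = 4, 8                                    # K sequences, M = N/K symbols each
f_true = 0.002                                 # illustrative frequency offset
phi_true = np.array([0.0, 0.1, -0.2, 0.15])    # illustrative phase parameters
t = np.array([1.0, 0.8, 1.2, 0.9])             # known amplitudes t_i (illustrative)
sigma = 0.01                                   # noise std per component (high SNR)

# Received symbols r_{i+nK} = t_i e^{j(phi_i + 2 pi f nK)} + w_{i+nK}
i_idx, n_idx = np.meshgrid(np.arange(K), np.arange(M), indexing="ij")
true_phase = phi_true[i_idx] + 2 * np.pi * f_true * n_idx * K
w = sigma * (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M)))
r = t[:, None] * np.exp(1j * true_phase) + w
alpha = np.angle(r)                            # measured phase angles alpha_{i+nK}

# Weighted LS for (7): alpha = 2 pi nK * f + phi_i + delta,
# with weights u_i = 1/Var(delta_{i+nK}), i.e. u_i proportional to t_i^2
A = np.zeros((K * M, K + 1))
y = np.zeros(K * M)
wts = np.zeros(K * M)
for i in range(K):
    for n in range(M):
        row = i * M + n
        A[row, 0] = 2 * np.pi * n * K          # coefficient of f
        A[row, 1 + i] = 1.0                    # coefficient of phi_i
        y[row] = alpha[i, n]
        wts[row] = t[i]                        # sqrt of weight u_i ~ t_i^2
theta, *_ = np.linalg.lstsq(A * wts[:, None], y * wts, rcond=None)
f_hat, phi_hat = theta[0], theta[1:]
print(f_hat)                                   # close to f_true
```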
