Testing of Survey Network Adjustments


Least squares adjustments of survey networks rarely give acceptable adjustment results immediately, due to a number of different factors. One or more of these factors is likely to be present unless a standard adjustment technique is applied to measurements acquired by well-tried procedures and experienced observers. Therefore adjustment results must be tested to detect and/or eliminate any extraneous influences. The testing of adjustments may be applied to any least squares solution of any type(s) of measurements, but finds its most frequent use in testing the adjustment of survey networks.

Factors Affecting Adjustments
The Mathematical Model
The general term "mathematical model" refers to the equations used to emulate the physical situation, or in other words, the mathematical relationship between the measurements and the unknown parameters. It also refers to the number and type of unknowns carried in the adjustment to describe the physical aspects of the measurements. Mathematical models may be:
• Inaccurate, inappropriate or just plain wrong.
• Under-parameterised: there are insufficient unknown parameters to account for the physical aspects of measurement. Such a mathematical model is said to have unmodelled systematic errors. A good example is an additive constant or scale error in an EDM which has not been calibrated, or whose effects have not been removed prior to the adjustment.
• Over-parameterised: there are too many unknown parameters in the adjustment and some have no physical reality or function. An example is carrying EDM errors in a survey adjustment where none exist or where they have been removed prior to the adjustment.

The Stochastic Model
The general term "stochastic model" refers to the statistical properties of the measurements, as described by the weight coefficient matrix used in the adjustment. This includes the relationships between the measurement types and their precision (constant, linear, ...), the magnitudes of the precisions, the relative weights for different measurement types, and the presence or absence of correlation terms. Stochastic models may be incorrect in any of the above aspects, a typical example being the neglect of correlations between horizontal angles. Common problems with untested measurement techniques, inexperienced observers or simply unfamiliar situations are:

• Under-estimation of precisions: the measurements are made to a better precision than that expected.
• Over-estimation of precisions: the measurements are made to a worse precision than that expected.

Gross Errors
Gross errors, blunders or outliers are caused by human mistakes in measurement, reduction or transcription, by equipment errors, or by anomalous physical circumstances. Gross errors are an integral part of survey adjustments and are a statistical certainty! This is because measurements are assumed to follow a normal distribution, which implies that there is no theoretical limit to how far an individual value may depart from the mean.

Confidence Levels
The basis for all testing of adjustments is the specification of confidence levels or limits. Levels are specified in terms of probabilities, which can be related to the departure of a value from the mean by the distribution function. A confidence level of 99.9% implies that 999 times out of 1000 an acceptable measurement is made, whilst 1 time in 1000 an unacceptable measurement or gross error is made, which is rejected. If the probability function of the measurements is known, their limits can be set based on the departure from the mean, by "cutting off" the area under the probability density curve. The area inside the cut-offs or limits is set by the probability level. The limits are conveniently expressed in terms of standard deviations of the measurement. The "rule of thumb" of three standard deviations (3σ) for measurement rejection corresponds to a confidence level of 99.75% for a normally distributed measurement. In effect the rule of thumb says that 25 times in 10000 measurements a gross error is expected. (μ - 3σ) and (μ + 3σ) are said to be "critical values" or CVs.

Redundant Measurements
Increasing the number of redundancies in an adjustment (by taking more measurements) increases the effectiveness of testing. The more measurements there are, the more likely it is that tests will correctly eliminate problems in the adjustment or reject outliers. Statistically, as the sample size of the measurements approaches the population of all possible measurements (an infinite sample), the results of a precision analysis approach the "truth". An infinite sample is not possible, but certainly 1000 horizontal angle measurements give a much better estimate of the precision of each individual angle than 10 measurements. In practice, a measurement scientist gains a knowledge of measurement systems from experience under a variety of circumstances, and can eventually estimate the expected performance for average and unusual conditions from that experience.

Specific Tests
Estimate of the Variance Factor (Global Test)
The first test which should be applied to any least squares adjustment is the test of the estimate of the variance factor. This test determines whether the residuals of the adjustment are in accord with the precision of measurement obtained from an "infinite sample"; it is also known as the statistical analysis of variance (ANOVA) test. Practically, the test determines whether the residuals are those expected from the precisions of measurement used in the adjustment. The quantity

χr² = v' Q⁻¹ v

is a test statistic which follows a Chi-squared distribution with r degrees of freedom, where r is the number of redundancies. The expectation of χr² is the number of redundancies:

E(χr²) = r,   hence   E(v' Q⁻¹ v) = r,   or   E(v' Q⁻¹ v / r) = 1,   therefore   E(σo²) = 1

The estimate of the variance factor is therefore often said to have a Chi-squared-divided-by-degrees-of-freedom distribution, where

σo² = χr² / r

The estimate of the variance factor can be tested by specifying a confidence limit, usually 95%, or probability level, α = 0.05, and determining critical values from statistical tables or computation of the probability density function. The χr²/r distribution is shown below:

A table of critical values is shown below.

Redundancies r    Lower Critical Value (α = 0.025)    Upper Critical Value (α = 0.975)
10                0.33                                2.05
30                0.56                                1.57
60                0.68                                1.39
120               0.76                                1.27
∞                 1.00                                1.00

Examples:
σo² = 1.75, r = 30, CV = 1.57, therefore reject
σo² = 0.86, r = 120, CV = 0.76, therefore accept

If σo² falls below the lower critical value then either the mathematical model is over-parameterised or the measurement precisions have been under-estimated. If σo² falls above the upper critical value then either:
• the mathematical model is under-parameterised
• the measurement precisions have been over-estimated, or
• there are gross errors in the measurements.
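The global test reduces to a few lines of arithmetic once the residuals and their cofactor matrix are available. The sketch below is a minimal illustration only, assuming Python with numpy and scipy (neither is part of the original notes); the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def global_test(v, Q, r, alpha=0.05):
    """Global (variance factor) test of a least squares adjustment.

    v : residual vector, Q : weight coefficient (cofactor) matrix of the
    observations, r : number of redundancies. Returns the variance factor
    estimate, a pass flag and the two critical values."""
    v = np.asarray(v, dtype=float)
    chi2_r = float(v @ np.linalg.solve(Q, v))      # chi_r^2 = v' Q^-1 v
    so2 = chi2_r / r                               # estimate of the variance factor

    lower = stats.chi2.ppf(alpha / 2, r) / r       # lower critical value
    upper = stats.chi2.ppf(1 - alpha / 2, r) / r   # upper critical value
    return so2, lower <= so2 <= upper, (lower, upper)

# Reproducing the tabulated limits for r = 30 gives approximately 0.56 and 1.57
print(stats.chi2.ppf(0.025, 30) / 30, stats.chi2.ppf(0.975, 30) / 30)
```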

Residuals (Local Test)
Once the variance factor test has been carried out, individual residuals can be tested to determine whether they are gross errors. Although such testing may only be strictly necessary when σo² fails the global test, it is generally always carried out. Failure of the global test may imply that there are gross errors, but a pass of the global test does not guarantee that there are no gross errors.

If σo² passes the global test, the quantity

nᵢ = uᵢ / qᵢ

is the test statistic and has a N(0, 1) distribution, where uᵢ is the residual and qᵢ is the weight coefficient of the residual. If σo² fails the global test, the Student-t distribution must be used. The quantity

tᵢ = uᵢ / (σo qᵢ)

becomes the test statistic and has a T(0, σo, r) distribution, where r is again the number of redundancies.

Notes:
1. As r → ∞ then σo → 1, so T(0, 1, ∞) is equivalent to N(0, 1). The interpretation of this is that σo² passing the global test implies that the sample is representative of the population.
2. qᵢ ≈ sᵢ r/n is a common approximation.

The Student-t distribution is shown below:

A table of critical values is shown below.

Redundancies r    Lower Critical Value (α = 0.025)    Upper Critical Value (α = 0.975)
10                -2.23                               2.23
30                -2.04                               2.04
60                -2.00                               2.00
120               -1.98                               1.98
∞ (Normal)        -1.96                               1.96

Because the distribution is symmetric, the test usually carried out is either

| nᵢ | = | uᵢ / qᵢ | > CVN    or    | tᵢ | = | uᵢ / (σo qᵢ) | > CVT

depending on whether the global test passes or fails. Examples:

σo² passes, CVN = 1.96, uᵢ = -3.24, qᵢ = 2.31, |nᵢ| = 1.40, therefore accept
σo² = 1.73 and fails at r = 30, CVT = 2.04, uᵢ = 10.35, qᵢ = 2.50, |tᵢ| = 3.14, therefore reject

Because all residuals in an adjustment are generally correlated, the rejection of measurements must proceed in a step by step fashion, eliminating the largest residual one at a time. The removal of one measurement from an adjustment may significantly affect the results and change the pattern of errors and the associated test statistics. Hence, the removal of multiple measurements at once may result in good data being incorrectly discarded.
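The local test follows the same pattern. The sketch below mirrors the two worked examples above; it again assumes scipy for the Normal and Student-t critical values, and the quantities uᵢ, qᵢ, σo² and r are passed in from a hypothetical adjustment.

```python
from scipy import stats

def local_test(u_i, q_i, so2, r, global_passed, alpha=0.05):
    """Local (residual) test; returns True if the residual should be rejected."""
    if global_passed:
        statistic = abs(u_i / q_i)                 # n_i, tested against N(0, 1)
        cv = stats.norm.ppf(1 - alpha / 2)         # 1.96 for a 95% confidence level
    else:
        statistic = abs(u_i / (so2 ** 0.5 * q_i))  # t_i, tested against Student-t
        cv = stats.t.ppf(1 - alpha / 2, r)         # 2.04 for r = 30
    return statistic > cv

print(local_test(-3.24, 2.31, 1.00, 120, True))    # False -> accept
print(local_test(10.35, 2.50, 1.73, 30, False))    # True  -> reject
```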

Testing Procedure

Accuracy
• accuracy can only be checked absolutely by comparisons with previously established information
• typically this checking involves computing root mean square errors of differences at check points, that is survey stations with known coordinates which are deliberately omitted from the fixed stations
• another method is to initially adjust the network using minimal constraints or a free network approach; in either case there are no external influences on the shape of the network
• a 7 parameter (or 4 for a 2D network) transformation is then used to "fit" the free network to all fixed points via post-processing - the residuals from the transformation are then effectively the same as an RMS error at check points

• the problems which are commonly detected by accuracy checking are scale errors and/or reference station coordinate errors
• scale errors are sometimes detected in old surveys because of errors propagated from baselines or from older EDM traverse surveys where calibration or velocity corrections were inaccurate
• the scale error can be modelled by introducing an additional parameter into the network adjustment or by estimation from the post-processing approach
• errors in reference station coordinates (fixed stations used to define the datum of a new survey) are commonly caused because the stations were determined during a previous survey which used older, less accurate equipment
• the errors may be systematic, but in all likelihood will be random within the statistical variations (precisions) of the derived coordinates predicted by the previous adjustment
• this problem is particularly relevant to regional surveys or national geodetic networks covering large areas, which typically realised station coordinate precisions of ±0.1 m or poorer, whereas a local, small area survey with modern equipment may be good to ±0.01 m





• sequential or phased adjustment (constrained station coordinates) is the answer to this problem, as this technique allows the statistical variations of the "fixed" stations to be accommodated in the adjustment of the new survey - without this sequential adjustment process the new survey will be distorted in shape or scale, and accuracy checks will indicate a poor match to the previous survey
• however sequential adjustment often raises as many problems as it solves:
  o previous survey adjustment data/results may be difficult to obtain
  o was the previous survey adequately tested?
  o are the precisions of the "fixed" station coordinates appropriate?
  o are the previous stations original and stable?
  o the new coordinates of the "fixed" stations are commonly ignored

Reliability
• reliability can be defined as the ability of a survey network to detect errors in the measurements
• reliability is directly related to redundancy: the more measurements that are available to define the coordinates of a station, the greater the possibility of detecting an error in an individual measurement
• to allow a relative gauge of reliability, various types of reliability factors or redundancy numbers are used as indicators
• reliability factors can be determined for each measurement or for the network as a whole
• factors for each measurement readily show those measurements which have poor reliability
• factors for the network as a whole can only be assessed by experience with many such network adjustments

• one common reliability indicator is the Pelzer criterion, which is computed by:

t = sm / sv    or    t = sm / (sm - sl)

where sm, sl, sv are the precisions of the measurement, the adjusted measurement and the residual respectively
• the Pelzer factor varies between unity and infinity; the larger the number, the poorer the reliability
• the factor is effectively based on the variation of the precision of the adjusted measurement, which will be zero in the best circumstances and equal to the precision of the measurement in the worst circumstances
• a practical limit on this factor must be imposed to avoid division by zero; a measurement with no reliability (for example an unchecked radiation) will have the maximum factor
• the global factor is computed as:

T² = (1/m) Σ (t² - 1)

and will vary between zero and infinity (m is the number of measurements)
• T values for networks (not traverses) would normally be in the range 0.5-2.0
• there are a number of other reliability indicators with different ranges, but all attempt to give a relative measure of the reliability of measurements or networks
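A small sketch of these reliability indicators is given below. It is plain Python with invented precision values, purely to show how the per-measurement Pelzer factors and the global factor are combined.

```python
import math

def pelzer_factor(s_m, s_v):
    """Pelzer reliability factor t = s_m / s_v for a single measurement.
    A floor on s_v imposes the practical limit that avoids division by zero
    for a measurement with no reliability (e.g. an unchecked radiation)."""
    return s_m / max(s_v, 1e-12)

def global_factor(t_values):
    """Global reliability factor T, from T^2 = (1/m) * sum(t^2 - 1)."""
    m = len(t_values)
    return math.sqrt(sum(t * t - 1.0 for t in t_values) / m)

# Invented example: four measurements, each with precision 3.0 and assorted
# residual precisions
t = [pelzer_factor(3.0, s_v) for s_v in (2.5, 2.0, 1.5, 1.0)]
print([round(x, 2) for x in t], round(global_factor(t), 2))
```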


An Introduction To Survey Networks

A Few Words On Precision And Accuracy
Networks
Redundancy & Reliability
Statistical Testing
Survey Network Uses
Least Squares
Network Design
The Shape Of The Earth
Vertical And Horizontal Observations And Networks
References

A Few Words On Precision And Accuracy
In this subject, and others that deal with least squares and other aspects of surveying, precision and accuracy will often be mentioned. A distinction between the two terms will be useful: a general rule of thumb is that precision refers to repeatability, whilst accuracy refers to the "closeness to the truth". If I measured the distance from Melbourne to Sydney 25 times, and the spread of my measurements was 5mm, I may conclude that my measurements were precise (repeatable). However if the mean of these precise measurements was 32.231128 kilometres my answer, whilst being precise, would not be very accurate.

Consider Figure 1 - here the target shooter has been fairly precise (repeatably hitting roughly the same spot), but assuming the shooter was aiming for the bull's eye they have not been very accurate. In Figure 2 the shooter has been less precise (the spread is larger), but has been more accurate - that is closer to the bull's eye, than in figure 1. Figure 3 represents precise and accurate shooting. Note that quite often in surveying accuracy, or a closeness to the truth, is hard to quantify. This issue will become especially apparent with datums.

Networks
What is a (good) network? One in which as many connections between stations as possible are measured, with the goal of introducing redundancy. For our purposes a traverse will not count as a network. The following diagram represents a survey network with good redundancy:

Using geodetic networks (networks covering the continent providing first order control) as an example, the evolution of survey networks can be followed. Before EDM most geodetic surveys used triangulation. In triangulation only angles are measured, with very accurate baselines providing scale, and Laplace stations providing azimuth:

In such networks scale and orientation errors accumulate with distance away from the baseline(s) and Laplace station(s). In these types of networks plan and height coordinates were treated as separate problems. The invention of EDM allowed the length of all lines in the network to be easily measured, thus alleviating the scale error problems. Most of Australia's geodetic network was surveyed with this technique. Plan and height were still treated as independent problems. More recently, GPS baselines added to such networks have also improved scale as well as orientation (the GPS baseline is basically a 3D vector observation). GPS also has the advantage of not requiring a line of sight. Today, with GPS and 3D traversing, most survey networks integrate both plan and height. With each new measurement technology the achievable measurement precision has increased. This presents a problem for modern surveys, where the level of precision obtained can sometimes be several orders of magnitude better than the geodetic framework. The aim of virtually all surveys is to produce coordinates - the use of a well designed network (as opposed to a traverse or radiations, for example) provides several advantages, described in the following sections.

Redundancy & Reliability
Reliability is the ability of the network to detect errors in the observations. Generally, the greater the redundancy (ie. number of observations) the greater the network's reliability. As networks increase in "importance", for example deformation monitoring, good reliability becomes crucial.

Statistical Testing
Adjusting a network (with least squares) provides a means for statistically testing the observations to detect gross errors, provides an estimate of the precision of the network's coordinates, and allows the reliability of the network and individual measurements to be estimated.

Survey Network Uses


• Geodesy - 1st order control. The geodetic network provides coordination for virtually all other surveys, especially the cadastral framework and mapping.
• Control surveys - engineering works, roads, subdivisions.
• Deformation - dams, bridges, production components. Often in cases of deformation monitoring, failure to detect the deformation can be costly (in human life), for example dam failure.

In general a network can be used to solve any surveying problem, and networks tend to be used where reliability is important.

Least Squares
When a survey is over-determined - that is there are more measurements than the minimum required to compute the coordinates - the least squares algorithm is commonly used to compute the coordinates, making use of the redundant observations. The least squares algorithm was proposed independently by both Gauss and Legendre in 1795, with Legendre being the first to actually publish its description. Some features of least squares that make it useful for determining the coordinates in a survey network include:


• Least squares allows all observations to be combined using observation weights (and correlations between observations, especially horizontal angles, phased adjustments and GPS baseline components).
• The least squares estimate is an unbiased estimate: "…on average the least squares solution is equal to the true solution" (Cross, 1983, p. 98).
• The least squares parameter estimates will be the maximum likelihood estimates.

As mentioned previously, least squares also provides an estimate of the parameter (coordinates in this case) precision, and allows for statistical testing of the observations for error detection. In this course, only the least squares case of observation equations will be used.

Network Design
The design of a network can often be as much an art as a science. Generally a required coordinate precision must be reached given certain restrictions such as measurement time, available instrumentation and physical constraints. Often knowing what can be achieved under such conditions is a matter of experience. Usually the geometric configuration of a network is designed, and a simulation using certain measurements and precisions is performed to estimate the achievable coordinate precisions and network reliability. If the initial design is not suitable, a new configuration of geometry/observations and precisions is tried until a suitable configuration is found. (This process can be carried out in TDVC and is part of the major prac assignment.) Achieving a certain network criterion can be broken into two parts: instrumentation and field techniques (the measuring of the network), and the network design. The precision and accuracy of a network can be influenced by such things as:


• Instrument choice - use a 20" theodolite or a 1" theodolite, a chain or EDM, a builder's level or a precise level? Selecting the appropriate instruments for the task.
• Instrument calibration - ignorance of a prism constant may give a suitable precision, but poor accuracy.
• Field techniques - specific observation time, reciprocal observations, repeating measurements, forced centring.
• Modelling external factors - atmospheric effects, earth curvature.

Issues concerning network design include (most of these issues are merely mentioned here, and will be taken up in detail over the next few weeks):
• Control - how much and whereabouts in the network; is the control likely to distort the network, and can I tell if it does?
• Physical constraints - lines of sight, access.
• Intersection geometry - angle intersections, distance intersections, both.
• Propagation of variance.
• Network simulation - knowing the rough coordinates of network stations, measurements and precision of measurements between stations can be varied, and the network simulated to predict the eventual coordinate precision.
• Placement of survey points - static and dynamic considerations.

The Shape Of The Earth
All survey network adjustments are based on some model of the shape of the earth:


• Plane: used where the ellipsoidal nature of the earth can be ignored. The area over which a flat earth assumption remains valid largely depends on the required precision of the network. Observation equations are based on plane trigonometry.
• Map grid: network adjustment on the map grid takes some account of the curved earth. It is a step between a planar system and the rigorous solutions (which follow). A map grid approach remains valid for networks up to approximately 20 km by 20 km. One problem with the map grid approach is surveys that cross zone boundaries. Observation equations are based on plane trigonometry with corrections derived from the map projection.
• Ellipsoid: considers the shape of the earth. The adjustment is performed on an ellipsoid that is a mathematical approximation of the shape of the earth. Observations are usually reduced to the ellipsoid (eg. a slope distance is reduced to an ellipsoidal distance). Observation equations are based on spherical trigonometry.
• Ellipsoid centred cartesian coordinates: again considers the shape of the earth. This model has come into favour recently since it is suited to the use of GPS observations, and GPS baselines are easily implemented as observation equations. Observation equations are based on vector geometry.

We will concentrate on planar network adjustment since it provides the least complicated introduction to survey network adjustment, however all principles remain valid, and are easily extended into the other models listed above.

Vertical And Horizontal Observations And Networks
Traditionally vertical and horizontal networks were treated separately, especially in adjustments performed on an ellipsoid datum. One major reason for this separation was lack of computational power and the independence of vertical and horizontal observations. Older networks tended to consist of horizontal angles and distances reduced to the ellipsoid to define plan coordinates, and levelling to define height. This approach also reduced the effects of unknown geoid/ellipsoid separation. Today networks tend to be treated as a 3D problem for several reasons:
• Computation power permits larger adjustments
• Adjustments based on ellipsoid centred cartesian coordinate systems
• 3D measurement techniques such as 3D traversing and especially GPS make separating surveys into plan and vertical components difficult
• Better geoid models (again GPS)

References
Cross, P. A., 1983. Advances in Least Squares Applied to Position Fixing. Department of Land Surveying Working Paper No. 6, North East London Polytechnic, Essex, England.


Solutions By Variation

Table Of Contents
Solutions By Variation - Solving Non-Linear Equations
• A lead into non-linear least squares (network adjustment)
• Linearisation
  o By calculus
  o By numerical methods
• Some simple examples

Variance/Covariance Matrices
• The notion of
  o Expected value
  o Mean
  o Variance
  o Covariance
  o Standard error / standard deviation
• Rules for expected values
• Definition of a covariance matrix
• Correlation coefficient
• Weight matrices
  o With uncorrelated observations

Propagation Of Variance
• Derivation of equation for propagating variance
• Applications to simple problems:
  o Correlation between directions and angles (other implications)
  o Distance intersections
• Propagation of variance applied to least squares
  o Example with intersection geometry

Solutions By Variation
Non-Linear Equations:
• Example: f(x) = (x - 3)³ + eˣ is non-linear in x
• Observation equations for survey network adjustment are generally non-linear in terms of the coordinates.

Linearising Non-Linear Equations


The Taylor expansion: for f(x) = l
l = f(x0) + (df/dx)|x0 Δx + ½ (d²f/dx²)|x0 Δx² + higher order terms

Ignoring second and higher order terms gives an equation that is linear in Δx:

l - f(x0) ≈ (df/dx)|x0 Δx



To solve this equation several things are required:
o f(x) must be differentiated
o the value of x that satisfies l, (x0), must be reasonably estimated
o the solution updates x: x0 = x0 + Δx, and must be iterated to account for the neglect of higher order terms from the Taylor series
o Note the similarities with TDVC - initial coordinate estimates & iteration

Example
f(x) = (x - 3)³ + eˣ = 15
Estimate: x0 = 2, f(x0) = 6.389
Differentiate: f'(x) = 3(x - 3)² + eˣ
Solve:

Δx = (15 - f(x0)) / f'(x0) = (15 - ((x0 - 3)³ + e^x0)) / (3(x0 - 3)² + e^x0) = 0.829

Next estimate: x0 = x0 + Δx = 2.829
Now f(x0) = 16.923; continue the procedure until f(x0) is sufficiently close to 15.
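The iteration in the example above can be scripted directly. The following sketch uses only the Python standard library and reproduces the first correction of 0.829:

```python
import math

def f(x):
    return (x - 3.0) ** 3 + math.exp(x)

def f_dash(x):
    return 3.0 * (x - 3.0) ** 2 + math.exp(x)

target = 15.0
x0 = 2.0                                   # initial estimate
for i in range(10):
    dx = (target - f(x0)) / f_dash(x0)     # linearised correction (0.829 first time)
    x0 = x0 + dx                           # update the estimate and iterate
    if abs(f(x0) - target) < 1e-6:         # stop when f(x0) is sufficiently close to 15
        break
print(x0, f(x0))                           # converges to about x = 2.71
```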


• Importance of the initial estimate.

Note

Because the solution must be iterated due to ignorance of higher order terms from the Taylor expansion, the computation of the derivatives does not need to be all that accurate - this allows some cheating to take place. Note that the reduced observation l - f(x) must be accurately computed. This applies to Least Squares adjustment as well.

Variance / Covariance matrices
• The notation E(x) means the expected value of x
• The mean of a variate is its expected value: μx = E(x)
• Some handy rules for expected values:
  o E(kx) = kE(x), where k is non-stochastic (does not depend on chance)
  o E(x) + E(y) + E(z) + … = E(x + y + z + …)



• If x is a vector x = [x1 x2 … xn]ᵀ, its covariance matrix Cx is defined as:

Cx = E[(x - μ)(x - μ)ᵀ]

        | x1 - μ1 |
   = E( | x2 - μ2 |  [x1 - μ1,  x2 - μ2,  …,  xn - μn] )
        |    ⋮    |
        | xn - μn |

     | E{(x1 - μ1)²}          E{(x1 - μ1)(x2 - μ2)}   …   E{(x1 - μ1)(xn - μn)} |
   = | E{(x2 - μ2)(x1 - μ1)}  E{(x2 - μ2)²}           …   E{(x2 - μ2)(xn - μn)} |
     |          ⋮                                                               |
     | E{(xn - μn)(x1 - μ1)}  E{(xn - μn)(x2 - μ2)}   …   E{(xn - μn)²}         |

• The definition of the variance of xi: σxi² = E[(xi - μi)²]
• The square root of the variance of xi (= σxi) is the standard error or standard deviation of xi
• The definition of the covariance of xi and xj: σxixj = E[(xi - μi)(xj - μj)]
• Using these definitions:

     | σx1²    σx1x2   …   σx1xn |
Cx = | σx2x1   σx2²    …   σx2xn |
     |   ⋮                       |
     | σxnx1   σxnx2   …   σxn²  |



• The coefficient of correlation is used to express the strength of dependence of one variable on another:

ρij = σxixj / (σxi σxj)

ρ has a range [-1, 1], where +1 indicates total correlation and 0 indicates no correlation at all.
• As part of a least squares solution a weight matrix of observations can be used to indicate that some observations are more or less precise than others, and that some observations are correlated with others. A weight matrix of observations (Pl) is the inverse of the covariance matrix of the observations:

Pl = Cl⁻¹

• In some circumstances Cl is diagonal (or assumed to be diagonal), since this makes the formation of the least squares solution simpler - e.g. forming Pl is simple.

Propagation Of Variance
The Law Of Propagation Of Variance
• Let y = Ax, where y and x are vectors and A is a non-stochastic matrix (we know or measure x, and know the relationship between x and y)
• Cx is the covariance matrix of x (known from the measurement process, to stick to the surveying emphasis).





• Using the relationship between mean and expected value (shown previously):

μy = E(y) = E(Ax) = A E(x) = A μx

• Using the definition of the covariance matrix (also shown previously):

Cy = E{(y - μy)(y - μy)ᵀ} = E{(Ax - Aμx)(Ax - Aμx)ᵀ} = A E{(x - μx)(x - μx)ᵀ} Aᵀ = A Cx Aᵀ

• Thus if A, x and Cx are known, y and Cy can be computed:

y = Ax

Cy = A Cx Aᵀ   (propagation of variance)
Example 1 - Angles From Measured Directions

Measured: 3 directions measured with equal precision sd (not correlated) to give 2 angles.
Equation: a1 = d2 - d1 and a2 = d3 - d2, or in matrix form

| a1 |   | -1  1  0 | | d1 |
| a2 | = |  0 -1  1 | | d2 |   ,   say y = Ax
                      | d3 |

Precisions: Cx = σd² I, so by propagation of variance

Cy = A Cx Aᵀ = σd² A Aᵀ = σd² |  2  -1 |  =  |  2σd²   -σd² |
                              | -1   2 |     | -σd²    2σd² |

Notes:
• The standard deviation of the angles is √2 σd (root 2 worse than the directions)
• The correlation coefficient (see previous) is -σd² / 2σd² = -1/2, quite large.



Some survey network adjustment programs use angles and ignore the correlation, the general claim being that in a network with strong geometry the ignorance of the correlation has negligible effect on the end result. Other programs use either angles with correlations, or directions (no reduction to angles at all), their claim being that the correlation between angles is significant and should not be ignored.
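The propagation of variance above can be verified numerically. A minimal numpy sketch (σd set to 1 for convenience; numpy is not part of the original notes) is:

```python
import numpy as np

sd = 1.0                                   # direction precision (arbitrary units)
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 1.0]])          # a1 = d2 - d1, a2 = d3 - d2
Cx = sd ** 2 * np.eye(3)                   # equal precision, uncorrelated directions

Cy = A @ Cx @ A.T                          # propagation of variance: Cy = A Cx A'
print(Cy)                                  # [[ 2. -1.] [-1.  2.]] times sd^2

rho = Cy[0, 1] / np.sqrt(Cy[0, 0] * Cy[1, 1])
print(rho)                                 # -0.5, the correlation between the angles
```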

Example 2 - Distance Intersections

Propagation Of Variance And Least Squares
• The equation for observations:

Bx = l + v

where:
B - design matrix
x - unknown parameters (increments to the assumed coordinates)
l - observations
v - residuals
Cl = Pl⁻¹ - covariance matrix of the observations
Cx - covariance matrix of the parameters
^ - least squares estimate of the parameter beneath


• Without too much regard for detail at this stage, the solution via least squares is:

x̂ = (Bᵀ Pl B)⁻¹ Bᵀ Pl l



• Cx̂ by propagation of variance is:

Cx̂ = [(Bᵀ Pl B)⁻¹ Bᵀ Pl] Cl [(Bᵀ Pl B)⁻¹ Bᵀ Pl]ᵀ
    = [(Bᵀ Pl B)⁻¹ Bᵀ] [Pl B (Bᵀ Pl B)⁻¹]          (since Cl = Pl⁻¹)
    = (Bᵀ Pl B)⁻¹ [Bᵀ Pl B] [Bᵀ Pl B]⁻¹
    = (Bᵀ Pl B)⁻¹


• The precision of the parameter estimates is the inverse of the normal matrix. The formation and inversion of the normal matrix (Bᵀ Pl B) is all that is required to perform network simulations (as in TDVC); note that the actual observations are not required, only the approximate coordinates of the network stations.

Example - Intersection Geometry


Least Squares Adjustment and Survey Networks

Mathematical justification for the use of least squares will be given in the maths course, however least squares:
• Easy to apply since the normal equations are linear
• Gives a unique solution
• Provides a covariance matrix of parameters, allowing statistical testing
• Can be applied to a wide variety of problems
• The LS estimate is unbiased (on average equal to the true solution)

Observation Equations
Bx = m + v

where:
B - design matrix
x - unknown parameters (increments to the assumed coordinates)
m - observations
v - residuals
Cm = Pm⁻¹ - covariance matrix of the observations
Cx - covariance matrix of the parameters
^ - least squares estimate of the parameter beneath

Generally Pm is diagonal, which makes computation simpler, except for:
• GPS baselines
• Previously adjusted coordinates with a full covariance matrix
• Consideration of correlated angles (previous notes)

Because virtually all observation equations - equations that express the observations in terms of the unknown coordinates - are non-linear, these equations are linearised using the first order terms of a Taylor expansion, ignoring second and higher order terms (see previous notes on linearisation and numerical linearisation). The least squares solution then involves:
• Estimating the coordinates of unknown stations
• Solving for x - a set of increments to the estimated coordinates
• Iteration

The linearised observation equations are set up in matrix form as follows:

| ∂f1/∂x1  ∂f1/∂y1  ∂f1/∂z1  …  ∂f1/∂xp  ∂f1/∂yp  ∂f1/∂zp |  | Δx1 |   | m1 - c1 |   | v1 |
| ∂f2/∂x1  ∂f2/∂y1  ∂f2/∂z1  …  ∂f2/∂xp  ∂f2/∂yp  ∂f2/∂zp |  | Δy1 | = | m2 - c2 | + | v2 |
|    ⋮                                                     |  |  ⋮  |   |    ⋮    |   |  ⋮ |
| ∂fn/∂x1  ∂fn/∂y1  ∂fn/∂z1  …  ∂fn/∂xp  ∂fn/∂yp  ∂fn/∂zp |  | Δzp |   | mn - cn |   | vn |

where:
p - the number of unknown (x, y, z) points
n - the number of observations
mi - observation i
ci - value of observation i computed with the assumed coordinates
vi - residual for observation i
f - function that expresses the observation as a function of the unknowns (coordinates); a different f is required for each different observation type
Δ - increments to the assumed coordinate values

Again without mathematical justification, the least squares solution is based on minimising the weighted sum of squares of the residuals:

vᵀ Pm v = vᵀ Cm⁻¹ v = minimum

This gives the prescription:

Bᵀ Pm v = 0

which leads to the normal equations:

Bᵀ Pm B x̂ = Bᵀ Pm m,   or   N x̂ = l

with N being the normal matrix. The solution is:

x̂ = N⁻¹ l = (Bᵀ Pm B)⁻¹ Bᵀ Pm m
• Note the use of the ^ notation to denote a least squares estimate.
• Pm allows for observations to be of differing weights and to be correlated.

The covariance of the parameter estimates is given by the inverted normal matrix:

Cx̂ = N⁻¹

Covariance matrices for the adjusted measurements and the residuals are computed as (these matrices are required for statistical analysis of the adjustment results):

Cm̂ = B N⁻¹ Bᵀ                      (adjusted measurements)
Cv̂ = Cm - B N⁻¹ Bᵀ = Cm - Cm̂       (residuals)
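The whole recipe - normal equations, solution and the covariance matrices - fits in a few lines of matrix code. The sketch below is illustrative only: the design matrix, observations and weights are invented (a three-observation, two-parameter problem), and numpy is assumed.

```python
import numpy as np

B  = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])                # invented design matrix (3 obs, 2 unknowns)
m  = np.array([1.02, 1.98, 3.05])          # invented (reduced) observations
Cm = np.diag([0.01, 0.01, 0.04])           # invented observation covariance matrix
Pm = np.linalg.inv(Cm)                     # weight matrix Pm = Cm^-1

N      = B.T @ Pm @ B                      # normal matrix
x_hat  = np.linalg.solve(N, B.T @ Pm @ m)  # least squares estimate of the parameters
v      = B @ x_hat - m                     # residuals (Bx = m + v)

Cx_hat = np.linalg.inv(N)                  # covariance of the parameter estimates
Cm_hat = B @ Cx_hat @ B.T                  # covariance of the adjusted measurements
Cv_hat = Cm - Cm_hat                       # covariance of the residuals

print(x_hat, np.diag(Cx_hat))
```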


More Directions, Correlated Angles And Propagation Of Variance

Other Observation Types: Coordinates
Other Observation Types: GPS Baselines

Assume all directions are of equal precision (σd) and uncorrelated, Cd = σd² I


• Consider only three angles (similar to last week's example):

| a1 |   | -1  1  0  0 | | d1 |
| a2 | = |  0 -1  1  0 | | d2 |
| a3 |   |  0  0 -1  1 | | d3 |
                         | d4 |

Ca = σd² |  2 -1  0 |
         | -1  2 -1 |
         |  0 -1  2 |

• Now assume all 4 angles, with d1 being used twice (it appears in both the first and the closing angle):

| a1 |   | -1  1  0  0 | | d1 |
| a2 | = |  0 -1  1  0 | | d2 |
| a3 |   |  0  0 -1  1 | | d3 |
| a4 |   |  1  0  0 -1 | | d4 |

Ca = σd² |  2 -1  0 -1 |
         | -1  2 -1  0 |
         |  0 -1  2 -1 |
         | -1  0 -1  2 |

Other Observation Types
Coordinates
These can be coordinates from a previous adjustment with a covariance matrix, or may be "dummy" observations with a large observation weight to fix the datum, orientation and scale of the network. The observation equation is simple. The observation may be for x, y, or z - but will tend to be for a coordinate, or a group of coordinates where correlation is a concern. Considering a single point, let the measurement and its covariance matrix be:

[xm  ym  zm]   and   Cm (3 by 3 matrix)

Sticking with the notation in the hand-written notes:

xm = x'   or, linearised,   xm - x' = Δx
ym = y'   or, linearised,   ym - y' = Δy
zm = z'   or, linearised,   zm - z' = Δz

GPS baselines
Generally speaking GPS baselines would not be used in an adjustment on a local coordinate system (as we are considering). The baseline components will be known in an ellipsoid centred cartesian coordinate system (as will their covariance matrix) and the relationship between

this frame of reference and the local frame of reference is not always known. With reference to the figure: the GPS baseline is known in the [X, Y, Z] coordinate system. The local system used for our adjustment is [x, y, z]. If the latitude and longitude of the origin of the local system (point P) are known, then a rotational relationship between [X, Y, Z] and [x, y, z] can be established in the form of a rotation matrix R = f(φ, λ) (a 3 by 3 orthogonal matrix):

R = | -sin λ          cos λ          0     |
    | -sin φ cos λ   -sin φ sin λ    cos φ |
    |  cos φ cos λ    cos φ sin λ    sin φ |

R is constructed so that a GPS baseline [ΔX ΔY ΔZ] can be expressed in the local frame of reference as [Δx Δy Δz]:

| Δx |       | ΔX |
| Δy | = R   | ΔY |   ,   say   x = RX
| Δz |       | ΔZ |

where x is the local frame of reference representation of the ellipsoid centred cartesian X. Note this equation is useful for propagation of variance. The GPS baseline covariance matrix (CX) is also in the ellipsoid centred cartesian frame of reference and must be transformed (Cx in the local frame):

Cx = R CX Rᵀ

Now we have x and Cx; the observation equations are linear, and similar to the previous point observations:

Δx = x'2 - x'1   which linearises to   Δx - (x'2 - x'1) = (∂Δx/∂x'1) Δx1 + (∂Δx/∂x'2) Δx2
Δy = y'2 - y'1   which linearises to   Δy - (y'2 - y'1) = (∂Δy/∂y'1) Δy1 + (∂Δy/∂y'2) Δy2
Δz = z'2 - z'1   which linearises to   Δz - (z'2 - z'1) = (∂Δz/∂z'1) Δz1 + (∂Δz/∂z'2) Δz2

Because of the linearity all the derivatives evaluate to either +1 or -1. Notice how all references to the original baseline X and CX disappear after the rotation to a local frame of reference. Again note that if GPS baselines are to be used, the adjustment would usually be performed on the ellipsoid or in ellipsoid centred cartesian coordinates.
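A numerical sketch of the rotation and of the covariance transformation is given below. The latitude, longitude, baseline components and covariance values are all invented for illustration, and numpy is assumed.

```python
import numpy as np

def local_rotation(phi, lam):
    """Rotation matrix R = f(phi, lambda) taking ellipsoid centred cartesian
    [X, Y, Z] differences into the local [x, y, z] frame."""
    sp, cp = np.sin(phi), np.cos(phi)
    sl, cl = np.sin(lam), np.cos(lam)
    return np.array([[-sl,       cl,      0.0],
                     [-sp * cl, -sp * sl, cp ],
                     [ cp * cl,  cp * sl, sp ]])

phi = np.radians(-37.8)                    # invented latitude of the local origin P
lam = np.radians(144.9)                    # invented longitude of P

dX = np.array([1200.0, -350.0, 900.0])     # invented GPS baseline [dX, dY, dZ]
CX = np.diag([0.0004, 0.0004, 0.0009])     # invented baseline covariance (m^2)

R  = local_rotation(phi, lam)
dx = R @ dX                                # baseline expressed in the local frame
Cx = R @ CX @ R.T                          # covariance in the local frame: Cx = R CX R'
print(dx, np.diag(Cx))
```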



Error Ellipses

Error Ellipses
Derivation From Rotations
Derivation From Eigen Values And Vectors
Relative Error Ellipses
Standard Deviation From A Defined Direction
Network Adjustment Software: Datum
Network Adjustment Software: Data Input
Network Adjustment Software: Adjustment Algorithms
Network Adjustment Software: Provision For Statistical Testing
Network Adjustment Software: Results
Network Adjustment Software: Tools

Error Ellipses


• Error ellipses are derived from the covariance matrix, and provide a graphical means of viewing the results of a network adjustment. Error ellipses can show:
  o Orientation weakness (minor axes pointing to the datum)
  o Scale weakness (major axes pointing to the datum)
  o etc.
• Generally the standard deviations in x, y, and z are used as a guide to the precision of a point; the error ellipse gives more detailed information - the maximum and minimum standard deviations, and their associated directions. The orientation of the ellipse is basically given by the correlation term.
• The error ellipse is an approximation of the pedal curve, which is the true shape of the standard error in all directions.
• A standard error ellipse shows the region, if centred at the true point position, where the least squares estimate falls with a confidence of 39.4%. Since the truth is rarely known, the error ellipse is taken to represent the 39.4% confidence region, and when drawn is centred at the least squares estimate of the position (rather than the true position).
• To obtain different confidence regions the lengths of the ellipse axes are just multiplied by an appropriate factor:

Confidence region    Factor
39.4 %               1.000
86.5 %               2.000
95.0 %               2.447
98.9 %               3.000

These multiplicative factors are determined from the χ² distribution. Generally the 95% confidence region ellipses are plotted (ie. TDVC's error ellipses).
• Absolute error ellipses show the effects of the datum; for example, points further away from the datum point in a survey network generally have larger error ellipses than points close to the datum point.
• Relative error ellipses are not influenced by the choice of datum. Relative error ellipses are still derived from the covariance matrix, only they involve 2 points, and an exercise in propagation of variance.
• 2 methods for computing the error ellipse are introduced below, from a theoretical point of view and with an example:







Derivation From Rotations
We start with the covariance matrix of a 2D point based on the local [x, y] coordinate system:

Cxy = | σx²   σxy |
      | σxy   σy² |

There is another coordinate system [u, v] that we can rotate into using:

| u |   |  sin θ   cos θ | | x |
| v | = | -cos θ   sin θ | | y |

The corresponding covariance matrix for the [u, v] system is obtained via propagation of variance:

Cuv = | σu²   σuv |   |  sin θ   cos θ |       | sin θ   -cos θ |
      | σuv   σv² | = | -cos θ   sin θ |  Cxy  | cos θ    sin θ |

We are only really interested in σu and σv, which evaluate to:

σu² = σx² sin²θ + 2σxy sinθ cosθ + σy² cos²θ
σv² = σx² cos²θ - 2σxy sinθ cosθ + σy² sin²θ

If these equations are plotted for 0 < θ < 360° the pedal curve mentioned previously will be obtained. The maximum and minimum values of σu and σv can be found by setting the derivative (with respect to θ) of either of the above equations to zero and solving for θ (these maxima and minima correspond to the directions of the major and minor axes of the error ellipse):

∂(σu²)/∂θ = 2σx² sinθ cosθ - 2σy² sinθ cosθ - 2σxy sin²θ + 2σxy cos²θ
          = 2 sinθ cosθ (σx² - σy²) + 2σxy (cos²θ - sin²θ)
          = sin 2θ (σx² - σy²) + 2σxy cos 2θ = 0

θ = ½ arctan( -2σxy / (σx² - σy²) )

to which there are 2 solutions, 90 degrees apart. The selection of rotation matrix means that θ is computed as a bearing. The values of θ are then substituted back into the equation for σu² to determine the maximum and minimum values, and which axis they correspond to. As an example take:

Cxy = | 6.822    5.315 |
      | 5.315   12.921 |

Find θ: θ = ½ arctan( -2σxy / (σx² - σy²) ) ≈ 30 and 120 degrees (as bearings)

Evaluating these angles gives σu(30°) = 4.00 and σu(120°) = 1.93, which are the major and minor axis lengths and orientations - all that is required to plot the error ellipse. Note that these axis lengths correspond to the standard error ellipse. To plot the 95% confidence region the axis lengths are increased by a factor of 2.447: σu,95%(30°) = 9.79 and σu,95%(120°) = 4.72.

Derivation From Eigen Values And Vectors
It can be shown that the square roots of the eigenvalues of Cxy correspond to the error ellipse axis lengths, and that the corresponding eigenvectors define the error ellipse axis directions. Start by finding the eigenvalues:

$$|C_{xy} - \lambda I| = \begin{vmatrix} \sigma_x^2 - \lambda & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 - \lambda \end{vmatrix} = 0$$

$$(\sigma_x^2 - \lambda)(\sigma_y^2 - \lambda) - \sigma_{xy}^2 = 0$$

$$\lambda^2 - \lambda(\sigma_x^2 + \sigma_y^2) + \sigma_x^2\sigma_y^2 - \sigma_{xy}^2 = 0$$

Solving the quadratic in λ gives the two eigenvalues:

$$\lambda = \frac{(\sigma_x^2 + \sigma_y^2) \pm \sqrt{(\sigma_x^2 + \sigma_y^2)^2 - 4(\sigma_x^2\sigma_y^2 - \sigma_{xy}^2)}}{2} = \frac{(\sigma_x^2 + \sigma_y^2) \pm \sqrt{(\sigma_x^2 - \sigma_y^2)^2 + 4\sigma_{xy}^2}}{2}$$

The two corresponding eigenvectors e1 = [x1 y1]ᵀ and e2 = [x2 y2]ᵀ can then be found using non-trivial solutions of:

$$\begin{bmatrix} \sigma_x^2 - \lambda_1 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 - \lambda_1 \end{bmatrix}\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad
\begin{bmatrix} \sigma_x^2 - \lambda_2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 - \lambda_2 \end{bmatrix}\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

These eigenvectors specify the directions of the ellipse axes (and should be at right angles). The lengths of the axes of the standard ellipse are given by √λ1 and √λ2 respectively.

Using the previous example:

$$C_{xy} = \begin{bmatrix} 6.822 & 5.315 \\ 5.315 & 12.921 \end{bmatrix}$$

$$\lambda = \frac{(\sigma_x^2 + \sigma_y^2) \pm \sqrt{(\sigma_x^2 - \sigma_y^2)^2 + 4\sigma_{xy}^2}}{2} = \frac{19.743 \pm 12.255}{2}$$

giving λ1 = 16.0 and λ2 = 3.744 (standard axis lengths √λ1 = 4.00 and √λ2 = 1.93).

Eigenvector for λ1:

$$\begin{bmatrix} 6.822 - 16.0 & 5.315 \\ 5.315 & 12.921 - 16.0 \end{bmatrix}\begin{bmatrix} x_1 \\ y_1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

giving y1 = 1.727 x1, ie. x1/y1 = 1/1.727, a direction (bearing) of 30.0°.

Eigenvector for λ2:

$$\begin{bmatrix} 6.822 - 3.744 & 5.315 \\ 5.315 & 12.921 - 3.744 \end{bmatrix}\begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

giving x2 = -1.727 y2, a direction (bearing) of 120.0° (equivalently -60.0°), perpendicular to the major axis.

All of which agrees with the previous method. The same algorithm can be used to obtain a 3D error ellipsoid for a point. In fact the approach extends to an n-dimensional hyper-ellipsoid, although these may be a little difficult to conceptualise, let alone draw.
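A numerical eigen decomposition gives the same result directly. The sketch below (not from the original notes; numpy is assumed, as is the convention that x is east and y is north so that bearings are measured from the y axis) is one way this could be coded:

```python
# Sketch: the eigenvalue route to the error ellipse using numpy.
import numpy as np

Cxy = np.array([[6.822, 5.315],
                [5.315, 12.921]])

vals, vecs = np.linalg.eigh(Cxy)                 # eigenvalues ascending, eigenvectors in columns
for lam, v in zip(vals[::-1], vecs[:, ::-1].T):  # major axis first
    length = np.sqrt(lam)                        # standard error ellipse semi-axis
    bearing = np.degrees(np.arctan2(v[0], v[1])) % 180   # bearing of the axis (x east, y north)
    print(f"semi-axis {length:.2f} at bearing {bearing:.1f} deg (95%: {2.447 * length:.2f})")
# Prints approximately: 4.00 at 30 deg, 1.93 at 120 deg
```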


To extract covariance data from TDVC (for the major prac), use the -lv flag.


Dealing With Larger Networks

• The Design Matrix
• Dealing With Sparse Design Matrices
• Solving The Normal Equations
• Don't Invert N
• Re-Order The Parameters In X
• Phased And Sequential Solutions

The Design Matrix


Consider the observation equations model and its solution:

$$Bx = m + v \qquad \text{with covariance matrix } C_m$$

$$B^T C_m^{-1} B x = B^T C_m^{-1} m \qquad \text{or} \qquad Nx = l \text{ (normal equations)}, \qquad x = N^{-1} l$$



For survey networks, the design matrix B is:
  o Sparse - horizontal angle and slope distance observations have 6 differential coefficients; most other observations have only 4 (horizontal distance, bearing), 2 (level difference), or 1 (direct observation of a coordinate).
  o Potentially very large in dimension. If there are 100 unknowns and 500 observations, B will have dimension 500 × 100, whereas the normal matrix will only be 100 × 100! This can become an issue in a computer implementation of network adjustment.



Fortunately the full design matrix does not need to be formed. Observations can be added directly to the normal equations one at a time if they are not correlated with other observations. Correlated observations must be added to the normal equations as a group.

Let

$$B = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \\ a_3 & b_3 \end{bmatrix} \quad \text{(3 observations solving 2 unknowns)}, \qquad
C_m = \begin{bmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_3^2 \end{bmatrix}$$

Forming the normal matrix BᵀCm⁻¹B gives:

$$B^T C_m^{-1} B = \begin{bmatrix}
\dfrac{a_1^2}{\sigma_1^2} + \dfrac{a_2^2}{\sigma_2^2} + \dfrac{a_3^2}{\sigma_3^2} &
\dfrac{a_1 b_1}{\sigma_1^2} + \dfrac{a_2 b_2}{\sigma_2^2} + \dfrac{a_3 b_3}{\sigma_3^2} \\
\dfrac{a_1 b_1}{\sigma_1^2} + \dfrac{a_2 b_2}{\sigma_2^2} + \dfrac{a_3 b_3}{\sigma_3^2} &
\dfrac{b_1^2}{\sigma_1^2} + \dfrac{b_2^2}{\sigma_2^2} + \dfrac{b_3^2}{\sigma_3^2}
\end{bmatrix}$$

Notice how the normal matrix is a sum of components unique to each observation, ie:

$$B^T C_m^{-1} B = \frac{1}{\sigma_1^2}\begin{bmatrix} a_1 \\ b_1 \end{bmatrix}\begin{bmatrix} a_1 & b_1 \end{bmatrix}
+ \frac{1}{\sigma_2^2}\begin{bmatrix} a_2 \\ b_2 \end{bmatrix}\begin{bmatrix} a_2 & b_2 \end{bmatrix}
+ \frac{1}{\sigma_3^2}\begin{bmatrix} a_3 \\ b_3 \end{bmatrix}\begin{bmatrix} a_3 & b_3 \end{bmatrix}$$

Thus the normal matrix can be formed one observation at a time, provided the observations are not correlated with each other (that is, Cm is diagonal). If observations are correlated they can be added to the normal equations as a group; for example, the 3 components of a GPS baseline would be added to the normal equations together. The right hand side of the normal equations (BᵀCm⁻¹m) can be formed in exactly the same manner, observation by observation (again with the correlation-free stipulation).
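As an illustration of forming the normals one observation at a time, the following sketch (not from the original notes; the design rows, misclosures and precisions are hypothetical, and Python with numpy is assumed) accumulates the per-observation outer products shown above:

```python
# Sketch: accumulating N = B^T Cm^-1 B and l = B^T Cm^-1 m one uncorrelated observation at a time.
import numpy as np

# Hypothetical design rows [a_i, b_i], misclosures m_i and standard deviations s_i
rows   = [np.array([0.6, 0.8]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
misc   = [0.012, -0.005, 0.009]
sigmas = [0.010, 0.015, 0.015]

N = np.zeros((2, 2))
l = np.zeros(2)
for row, m, s in zip(rows, misc, sigmas):
    w = 1.0 / s**2                  # weight = 1 / sigma^2 for an uncorrelated observation
    N += w * np.outer(row, row)     # this observation's contribution to B^T Cm^-1 B
    l += w * row * m                # this observation's contribution to B^T Cm^-1 m

x = np.linalg.solve(N, l)           # solve the accumulated normal equations
print(x)
```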

Dealing With Sparse Design Matrices


Example: consider a network with 4 unknown stations in 3D, and one observation of horizontal distance l, with precision σ, between stations 2 and 4. The observation equation is:

$$l = \sqrt{(x_4 - x_2)^2 + (y_4 - y_2)^2}$$

Linearised:

$$l_m - l_c = \frac{\partial l}{\partial x_2}\Delta x_2 + \frac{\partial l}{\partial y_2}\Delta y_2 + \frac{\partial l}{\partial x_4}\Delta x_4 + \frac{\partial l}{\partial y_4}\Delta y_4 + v$$

where lm is the measured value and lc is the length computed with the assumed coordinates. The derivatives (with reference to previous notes on planar observation equations) are:

$$a = \frac{\partial l}{\partial x_2} = -\frac{x_4 - x_2}{l_c} \qquad
b = \frac{\partial l}{\partial y_2} = -\frac{y_4 - y_2}{l_c} \qquad
c = \frac{\partial l}{\partial x_4} = \frac{x_4 - x_2}{l_c} \qquad
d = \frac{\partial l}{\partial y_4} = \frac{y_4 - y_2}{l_c}$$

For this single observation, with the parameter vector ordered Δx1, Δy1, Δz1, ..., Δx4, Δy4, Δz4, the observation equation is:

$$\begin{bmatrix} 0 & 0 & 0 & a & b & 0 & 0 & 0 & 0 & c & d & 0 \end{bmatrix}
\begin{bmatrix} \Delta x_1 \\ \vdots \\ \Delta z_4 \end{bmatrix} = [\,l_m - l_c\,] + [\,v\,]$$

Note the number of zeroes in the design matrix, and imagine the case if there were 100 unknowns rather than 12. When the normal equations are formed there will be no entries for coefficients not involved in the observation - stations 1 and 3, and the heights of stations 2 and 4:

$$\frac{1}{\sigma^2}\begin{bmatrix}
0&0&0&0&0&0&0&0&0&0&0&0\\
&0&0&0&0&0&0&0&0&0&0&0\\
&&0&0&0&0&0&0&0&0&0&0\\
&&&a^2&ab&0&0&0&0&ac&ad&0\\
&&&&b^2&0&0&0&0&bc&bd&0\\
&&&&&0&0&0&0&0&0&0\\
&&&&&&0&0&0&0&0&0\\
&&&&&&&0&0&0&0&0\\
&&\text{Sym}&&&&&&0&0&0&0\\
&&&&&&&&&c^2&cd&0\\
&&&&&&&&&&d^2&0\\
&&&&&&&&&&&0
\end{bmatrix}
\begin{bmatrix}\Delta x_1\\ \Delta y_1\\ \Delta z_1\\ \Delta x_2\\ \Delta y_2\\ \Delta z_2\\ \Delta x_3\\ \Delta y_3\\ \Delta z_3\\ \Delta x_4\\ \Delta y_4\\ \Delta z_4\end{bmatrix}
= \frac{1}{\sigma^2}\begin{bmatrix}0\\0\\0\\a(l_m-l_c)\\b(l_m-l_c)\\0\\0\\0\\0\\c(l_m-l_c)\\d(l_m-l_c)\\0\end{bmatrix}$$

The pattern of where entries in the normal equations will appear, established in the above matrix, is usually used by adjustment software to reduce the size of the design matrix. All that really needs to be computed is the observation equation:

$$\begin{bmatrix} a & b & c & d \end{bmatrix}
\begin{bmatrix} \Delta x_2 \\ \Delta y_2 \\ \Delta x_4 \\ \Delta y_4 \end{bmatrix} = [\,l_m - l_c\,] + [\,v\,]$$

and the corresponding minimal normal equations:

$$\frac{1}{\sigma^2}\begin{bmatrix}
a^2 & ab & ac & ad \\
ab & b^2 & bc & bd \\
ac & bc & c^2 & cd \\
ad & bd & cd & d^2
\end{bmatrix}
\begin{bmatrix} \Delta x_2 \\ \Delta y_2 \\ \Delta x_4 \\ \Delta y_4 \end{bmatrix}
= \frac{1}{\sigma^2}\begin{bmatrix} a(l_m - l_c) \\ b(l_m - l_c) \\ c(l_m - l_c) \\ d(l_m - l_c) \end{bmatrix}$$

The elements of this minimal representation of the normal equations are then added (as previously demonstrated) to the appropriate places in the full normal equations. Note that the design matrix is also required to compute the covariance matrices of the adjusted measurements and the residuals (see previous notes). The above method of forming only the required part of the design matrix can also be applied in this situation.
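The scatter of the minimal contribution into the full normal equations can be done with a simple list of parameter indices. The following is a minimal sketch (not from the original notes; the coordinates and measurement values are hypothetical, and Python with numpy is assumed):

```python
# Sketch: forming only the non-zero coefficients of a horizontal distance observation
# between stations 2 and 4, and scattering its contribution into the full normals.
import numpy as np

n_par = 12                                      # 4 stations x (x, y, z)
N = np.zeros((n_par, n_par))
rhs = np.zeros(n_par)

# Hypothetical approximate coordinates and a hypothetical measurement
x2, y2 = 1000.0, 2000.0
x4, y4 = 1400.0, 2300.0
l_meas, sigma = 500.05, 0.005

l_comp = np.hypot(x4 - x2, y4 - y2)                 # computed distance (500.0 here)
a, b = -(x4 - x2) / l_comp, -(y4 - y2) / l_comp     # dl/dx2, dl/dy2
c, d =  (x4 - x2) / l_comp,  (y4 - y2) / l_comp     # dl/dx4, dl/dy4

row = np.array([a, b, c, d])
idx = [3, 4, 9, 10]                                 # positions of dx2, dy2, dx4, dy4
w = 1.0 / sigma**2

N[np.ix_(idx, idx)] += w * np.outer(row, row)       # scatter the 4x4 contribution
rhs[idx] += w * row * (l_meas - l_comp)             # scatter the right hand side terms
```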

Solving The Normal Equations
• Solving the normal equations is the most computationally expensive part of the least squares adjustment. The set of linear equations Nx = l must be solved, either by inverting N or by solving a set of n linear equations, where n is the size (rows and columns) of N.
• The time (computer instructions required) taken to invert a matrix is a function of the size of the matrix cubed. For example, if the size of the normal matrix is doubled, the time taken to solve the normal equations is increased by a factor of 8. For large networks the solution time for the normal equations can become excessive.
• There are a few techniques that can be applied:
  o Don't invert N
  o Re-order the parameters in x, to reduce the bandwidth of the matrix
  o Phased and sequential solutions

Don't Invert N




• Nx = l can be solved without finding N⁻¹ (like solving a set of linear equations by repeatedly eliminating unknowns and then back-substituting, e.g. Gaussian elimination). Solving a set of linear equations in this manner is approximately 3 times faster than solving the equations by computing a matrix inverse.
• The problem with not inverting N is that you do not get the covariance matrix for the adjusted parameters (and residuals and adjusted measurements) - and the whole basis of our approach has been to use these covariance matrices to statistically evaluate the network!
• One approach is to solve Nx = l without inversion until the network converges, and only invert N on the final iteration of the solution, as sketched below.
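A minimal sketch of this strategy (not from the original notes; Python with numpy and scipy is assumed, and build_normals is a hypothetical user-supplied function that re-forms N and l at the current estimates):

```python
# Sketch: solve Nx = l by Cholesky factorisation each iteration,
# and only form N^-1 once, when the covariance matrix is actually needed.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def iterate(build_normals, x0, tol=1e-6, max_iter=10):
    x = x0.copy()
    for _ in range(max_iter):
        N, l = build_normals(x)        # re-linearise at the current estimates
        factor = cho_factor(N)         # N is symmetric positive definite
        dx = cho_solve(factor, l)      # roughly 3x cheaper than inverting N
        x += dx
        if np.max(np.abs(dx)) < tol:
            break
    Qx = np.linalg.inv(N)              # invert once, on the final iteration only
    return x, Qx
```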



Re-Order The Parameters In X, To Reduce The Bandwidth Of The Matrix
• Depending on the network geometry and the measurement configuration, N can be sparse (have a lot of zero elements). There are certain algorithms for factorising or inverting a matrix (for example the Cholesky decomposition) that allow a large number of the zero terms to be ignored in the computation.
• Generally the order of the parameters in x is re-organised. This re-organisation does not increase or decrease the number of non-zero terms in N, but moves the zero terms into a favourable position for a more efficient matrix inversion. The aim is to minimise the bandwidth (the maximum number of non-zero elements from the leading diagonal before the rest of the elements are zero). A sketch of one common re-ordering is given below.
• This technique is not always worth applying. The matrix may not be sparse enough to be worth the computational effort of re-organising the order of the parameters - there is some cost in optimising the order of the parameters in x. In some network situations, such as linear features, N is sparse and naturally structured in a minimal bandwidth format.
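One widely used re-ordering of this kind is the reverse Cuthill-McKee algorithm. A minimal sketch (not from the original notes) using the implementation in scipy; the matrix and function names here are illustrative:

```python
# Sketch: reducing the bandwidth of a sparse normal matrix with a
# reverse Cuthill-McKee re-ordering before factorisation.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def reorder_normals(N_dense):
    N = csr_matrix(N_dense)
    perm = reverse_cuthill_mckee(N, symmetric_mode=True)   # new parameter order
    N_reordered = N[perm, :][:, perm]                      # apply to rows and columns
    return perm, N_reordered
```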








Comparison Of Matrix Solution And Inversion Times
Parametric   Number of     Sparse Matrix            Full Matrix
Sections     Unknowns      Solution   Inverse       Solution   Inverse
                           (Secs.)    (Mins.)       (Secs.)    (Mins.)
    10          122           0.55       0.23          5.17       0.24
    20          242           0.99       0.46         38.33       1.87
    30          362           1.63       1.34        132.70       6.62
    40          482           2.13       2.16        318.02      15.84
    50          602           2.75       3.96        622.97      31.07
    60          722           3.30       5.31       1074.72      53.78
    70          842           3.79       6.20       1704.06      85.41
    80          962           4.39       8.83       2541.90     128.43
    90         1082           5.00      11.06       3778.21        -
   100         1202           5.54      14.17       5179.31        -
   110         1322           6.10      15.93       6871.57        -
   120         1442           6.64      19.81           -          -
   130         1562           7.20      23.09           -          -
   140         1682           7.69      24.86           -          -
   150         1802           8.30      29.92           -          -
   160         1922           8.89      34.92           -          -
   170         2042           9.51      39.49           -          -
   180         2162           9.99      43.78           -          -
   190         2282          10.60      49.78           -          -
   200         2402          11.15      55.21           -          -
   210         2522          11.75      60.28           -          -
   220         2642          12.36      65.35           -          -
   230         2762          12.90      70.62           -          -
   240         2882          13.57      81.86           -          -
   250         3002          14.12      85.76           -          -
   260         3122          14.78      97.51           -          -
   270         3242          15.38      98.02           -          -
   280         3362          15.98     110.62           -          -
   290         3482          16.58     114.57           -          -
   300         3602          17.19     125.13           -          -
   310         3722          17.85     136.75           -          -
   320         3842          18.40         -            -          -
   330         3962          19.01         -            -          -
   340         4082          19.67         -            -          -
   350         4202          20.21         -            -          -
   360         4322          20.88         -            -          -
   370         4442          21.42         -            -          -
   380         4562          22.08         -            -          -
   390         4682          22.79         -            -          -
   400         4802          23.29         -            -          -

Table 4. Solution and inversion times for sparse and full matrix storage; a dash indicates that no value was available.


Deformation Surveys and Analysis

• Introduction
• Measurements
• Processing
• Testing
• Error Ellipses And Residuals
• References

Introduction


• Applications of deformation surveys include:
  o Dam walls - earth and concrete
  o Bridges
  o Buildings
  o Earth movements, for example continental drift, or subsidence as a result of mining



• There can be considerable cost (human and $) in failing to detect deformations, and/or to interpret them correctly. Failure of the Teton dam in Idaho in 1976 killed 11 people, left 2500 homeless and resulted in $400 million in claims.
• Besides the monitoring of engineering-type structures, other deformations to be aware of include (Krakiwsky, 1986):
  o Tidal effects - the earth's crust can deform by as much as 0.5 m, although this happens uniformly over a large area.
  o Crustal loading - caused by natural phenomena such as glacial advance and retreat, siltation in river basins, or, with human intervention, the draining/filling of lakes.
  o Plate tectonics - tectonic plate drift is of the order of a few cm per annum, although violent earthquakes can cause deformation of metres.
  o Ground consolidation - especially due to the extraction of oil/gas or artesian water.

Measurements




• High precision networks use survey equipment such as first order theodolites, precise levelling, Mekometer EDM and GPS translocation in order to detect the movement of survey stations or targets.
• Photogrammetry can be an effective tool for deformation monitoring, depending on the scale of the object being measured.
• Besides conventional survey measurements, other geotechnical measurements can be made (Chrzanowski, 1986):
  o The physical properties of the structure
  o Loads and internal stresses
  o Dimensional changes
• Instruments including extensometers, strainmeters, laser rangers and tiltmeters are used. These instruments are extremely sensitive. Ideally all observations - structural, geotechnical and survey - should be incorporated into the one adjustment model.


• The gravity field may be considered in precise engineering surveys (integrated geodesy / least squares collocation). This is particularly important in large structural projects where newly introduced loads significantly distort the gravity field locally.
• GPS is useful, especially for long connections to stable ground. It has been successfully used in deformation monitoring of structures such as dams. GPS also does not require a clear line of sight between stations.
• In some cases, eg a steep dam wall, GPS may be unsuitable due to poor satellite visibility/geometry and multipath.

Processing


• The adjustment of such survey networks requires all the usual considerations with respect to the elimination of errors and the correct estimation of measurement precisions. Any deformation survey must pay particular attention to errors in the survey so that gross or systematic errors do not contaminate the detection of movements and produce false results (eg is the dam about to fail, or did you forget to correct for the EDM index error?). Certain field techniques and procedures can be applied:
  o Forced centring - eliminates errors from multiple instrument/target set-ups, especially between epochs.
  o Repeated measurements
  o Simultaneous measurements
  o Corrections for trunnion axis tilt (in many cases small zenith distances will be encountered)
  o Advancing theodolite circles, etc.

• It is also important to determine in advance whether absolute or relative movement is important, as the former will require an absolute datum to be defined outside the area of expected movement. If only change in shape is important (eg bridge sag) then a minimal constraint or free network solution can be used to avoid any influence from external constraints. As in the diagram, there is no survey connection to stable control.
• If a block shift or rotation is important as well (eg dam deformation), then connections must be made to survey stations which are sited in stable areas to provide the required absolute datum. Often stable ground may be a considerable distance away from the area being surveyed for deformation.





• Surveys for deformation are generally repeated at certain time intervals (measurement epochs). The time interval depends on the expected movement/settlement of the structure and the risk to life.
• Generally stations and targets are put in place and suitable field procedures established. The established procedure is repeated at each epoch to minimise systematic and gross errors.
• Each of these repeated network surveys is known as an epoch of measurement, so the comparison and analysis of the results of the repeated surveys is commonly known as epoch testing.
• Some structures deform at such a rate that the deformation must be modelled during the time taken to perform a measurement epoch.





Testing




• The essence of epoch testing is to determine whether the differences between the coordinates from two different epochs are statistically significant.
• Epoch testing must take into account the precisions of the coordinates, as well as the correlations both between the coordinates of individual stations and between the coordinates of different stations. Hence the full weight coefficient matrices from each epoch contribute as follows:

Epoch 1: x1 station coordinates vector, Q1 weight coefficient matrix
Epoch 2: x2 station coordinates vector, Q2 weight coefficient matrix

and the differences in the coordinates and the associated weight coefficients are:

d = x2 - x1
Qd = Q1 + Q2


• If free networks are employed then a 6 or 7 parameter transformation should be applied, to fit epoch 2 to epoch 1, using all points in the network.
• The first test conducted should always be the global congruency test, analogous to the global test for a single network. The quantity W = dᵀ Qd⁻¹ d is tested against a Fisher statistic at an appropriate confidence level. If W passes then there has been no (statistically significant) movement and the networks are congruent. A minimal numerical sketch of this test is given after this list.
• If W fails the global congruency test then each point must be assessed by a local test which compares the contribution to W of the point against a Fisher critical value, analogous to the local testing of residuals for a single network. This test is done by recalculating W without each point in turn.
• The worst point (the one whose removal gives the smallest recomputed W, ie the largest contribution) is rejected and the entire testing process repeated, including the transformation for free networks (without the rejected points, which are now considered to have moved) and the global congruency test.
• Once the global congruency test passes, all those points which have been rejected are considered to have moved, whilst those that are still contributing to W are considered to be stable.
• It is not unusual to test groups of points (eg the centre of the dam wall) and to amalgamate the survey data for stable points over two or more epochs.
• A good rule of thumb is that the point precisions from the survey should be at least six times (preferably ten times!) smaller than the expected magnitudes of the movement in order to confidently detect the unstable points.
• Where several epochs of data (usually > 3) are available, a kinematic or dynamic model for the epochs can be formed. A kinematic model only considers the movements of the system without regard to their cause; a dynamic model also models the movement, but considers the forces causing the movement.
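The following is a minimal numerical sketch of the global congruency test (not from the original notes). The exact form of the statistic and its degrees of freedom varies between texts; this version assumes a unit variance factor, a full-rank Qd, and Python with numpy and scipy, and the function name is illustrative only:

```python
# Sketch: a simple global congruency test between two epochs.
import numpy as np
from scipy.stats import f

def global_congruency(x1, Q1, x2, Q2, dof, alpha=0.05):
    d = x2 - x1                        # coordinate differences between epochs
    Qd = Q1 + Q2                       # combined weight coefficient matrix
    W = d @ np.linalg.solve(Qd, d)     # W = d^T Qd^-1 d
    h = d.size                         # number of coordinate differences tested
    T = W / h
    critical = f.ppf(1 - alpha, h, dof)
    return W, T, T < critical          # True -> no statistically significant movement
```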

Error Ellipses and Residuals




• Graphical representations of deformation analyses are often shown as error ellipses (at 95% or some other confidence level - not the standard error ellipse) together with vectors of movement, ie the differences between measurement epochs.
• The visual representation is useful for empirical checking and for the identification of the characteristics of any movement, for example trends in the residuals.
• 3D and multi-epoch representations are possible with CAD systems. A sketch of how such a plot can be produced is given below.



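A sketch of such a plot (not from the original notes; Python with numpy and matplotlib is assumed, and the coordinates, covariance values and function name are hypothetical):

```python
# Sketch: a 95% confidence ellipse with a movement vector between two epochs.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

def plot_movement(ax, e1, e2, C):
    """e1, e2: (E, N) coordinates at epochs 1 and 2; C: 2x2 covariance of the difference."""
    vals, vecs = np.linalg.eigh(C)                            # eigenvalues ascending
    k = 2.447                                                 # 95% scale factor for 2D
    angle = np.degrees(np.arctan2(vecs[1, 1], vecs[0, 1]))    # major axis vs the E axis
    ell = Ellipse(e2, width=2 * k * np.sqrt(vals[1]),
                  height=2 * k * np.sqrt(vals[0]),
                  angle=angle, fill=False)
    ax.add_patch(ell)                                         # ellipse centred on epoch 2
    ax.annotate("", xy=e2, xytext=e1, arrowprops=dict(arrowstyle="->"))  # movement vector

fig, ax = plt.subplots()
plot_movement(ax, (0.0, 0.0), (0.015, 0.006),
              np.array([[2.5e-5, 0.5e-5],
                        [0.5e-5, 1.5e-5]]))
ax.set_aspect("equal")
ax.autoscale_view()
plt.show()
```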
References
Chrzanowski, A., 1986. Geotechnical and other Non-Geodetic Methods in Deformation Measurements. Proceedings, Deformation Measurements Workshop: Modern Methodology in Precise Engineering and Deformation Surveys II, Bock, Y. (Ed.), Massachusetts, U.S.A., October 31 - November 1, 1986. Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Massachusetts, U.S.A., pp. 112-153.

Krakiwsky, E. J., 1986. An Overview of Deformations, Measurement Technologies, and Mathematical Modelling and Analysis. Proceedings, Deformation Measurements Workshop: Modern Methodology in Precise Engineering and Deformation Surveys II, Bock, Y. (Ed.), Massachusetts, U.S.A., October 31 - November 1, 1986. Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts Institute of Technology, Massachusetts, U.S.A., pp. 7-33.

