Published on February 2018

Dynamic and Time Series Modeling for Process Control
Sachin C. Patwardhan
Dept. of Chemical Engineering, IIT Bombay

Automation Lab IIT Bombay

Why Mathematical Modeling?

Key component of all advanced monitoring, control and optimization schemes:
• Process synthesis and design (offline)
• Operation scheduling and planning
• Process control
  - Soft sensing / inferential measurement
  - Optimal control (batch operation)
  - On-line optimization (continuous operation)
  - On-line control (single loop / multivariable)
• Online performance monitoring
• Fault diagnosis / fault prognosis

1/9/2007

System Identification

2

Plant Wide Control Framework

[Block diagram: Layer 4 - Long Term Scheduling and Planning, driven by market demands / raw material availability; Layer 3 - On-line Optimizing Control, which handles plant-model mismatch; Layer 2 - Model Predictive Control; Layer 1 - PID & Operating Constraint Control. Each layer passes setpoints down and receives PVs from below; Layer 1 sends MVs to the Plant, which is also subject to load disturbances.]

Models for Plant-wide Control

Layer 4: Aggregate production rate models
Layer 3: Steady state / dynamic first principles models
Layer 2: Dynamic multivariable time series models
Layer 1: SISO time series models; ANN/PLS/Kalman filters (soft sensing)

Mathematical Models

Qualitative
• Qualitative differential equations
• Qualitative signed and directed graphs
• Expert systems

Quantitative
• Differential algebraic systems
• Mixed logical and dynamical systems
• Linear and nonlinear time series models
• Statistical correlation based (PCA/PLS)

Mixed
• Fuzzy logic based models

White Box Models

First Principles / Phenomenological / Mechanistic
• Based on
  - energy and material balances
  - physical laws, constitutive relationships
  - kinetic and thermodynamic models
  - heat and mass transfer models
• Valid over a wide operating range
• Provide insight into the internal working of systems
• Development and validation process: difficult and time consuming

Example: Quadruple Tank System

[Schematic: Tanks 1-4, with Pump 1 / valve V1 and Pump 2 / valve V2]

dh1/dt = −(a1/A1)√(2g h1) + (a3/A1)√(2g h3) + (γ1 k1/A1) v1
dh2/dt = −(a2/A2)√(2g h2) + (a4/A2)√(2g h4) + (γ2 k2/A2) v2
dh3/dt = −(a3/A3)√(2g h3) + ((1 − γ1) k1/A3) v1
dh4/dt = −(a4/A4)√(2g h4) + ((1 − γ2) k2/A4) v2

Manipulated inputs: v1 and v2
Measured outputs: h1 and h2


Example: Non-isothermal CSTR

Reaction: A → B

Material balance:
V dCA/dt = F (CA0 − CA) − V k0 exp(−E/RT) CA

Energy balance:
V ρ Cp dT/dt = ρ Cp F (T0 − T) − Q + (−ΔHrxn) V k0 exp(−E/RT) CA

Heat transfer to cooling jacket:
Q = [a Fc^(b+1) / (Fc + a Fc^b / (2 ρc Cpc))] (T − Tcin)

Example: Fed-Batch Fermenter

d(XV)/dt  = μ(S1, S2) XV                  (X : biomass conc.)
d(S2V)/dt = F2 S2F − σ2(S1, S2) XV        (S2 : substrate-2 conc.)
d(PV)/dt  = π(S1, S2) XV − k PV           (P : product conc.)
dV/dt     = F2                            (V : reactor volume)

μ(S1, S2) = 0.086 S1 S2 / (2.0 + S1 + 0.0303 S1²)
σ2(S1, S2) = μ(S1, S2) / 1.05
π(S1, S2) = 117.7 e^(−0.311 S2) μ(S1, S2)

Fixed Bed Reactor (Distributed Parameter System)

Material balances:
∂CA/∂t = −vl ∂CA/∂z − k10 e^(−E1/R Tr) CA                          ... reactant A
∂CB/∂t = −vl ∂CB/∂z + k10 e^(−E1/R Tr) CA − k20 e^(−E2/R Tr) CB    ... product B

Energy balances:
∂Tr/∂t = −vl ∂Tr/∂z + ((−ΔHr1)/(ρm Cpm)) k10 e^(−E1/R Tr) CA
         + ((−ΔHr2)/(ρm Cpm)) k20 e^(−E2/R Tr) CB
         + (Uw/(ρm Cpm Vr)) (Tj − Tr)                               ... reactor temp.
∂Tj/∂t = u ∂Tj/∂z + (Uwj/(ρmj Cpmj Vj)) (Tr − Tj)                   ... jacket temp.

Grey Box Models (Semi-Phenomenological)

Part of the model is developed from first principles and part is developed from data.

Example: a dynamic reactor model built from energy and material balances, with the reaction kinetics modeled using a neural network.

Better choice than complete black box models.

Example: Stirred Tank Heater-Mixer

[Experimental setup, schematic diagram: cold water flows through control valves CV-1 and CV-2 (4-20 mA input signals); a heater coil is driven by a thyristor control unit; level transmitter LT and temperature transmitters TT-1, TT-2, TT-3.]

Example: Stirred Tank Heater-Mixer

dT1/dt = (F1/V1)(Ti1 − T1) + Q(I1)/(V1 ρ Cp)
dh2/dt = (1/A2)[F1 + F2(I2) − F(h2)]
dT2/dt = (1/(h2 A2)) [F1 (T1 − T2) + F2 (Ti2 − T2) − U A (T2 − Tatm)/(ρ Cp)]

Q(I1) = 7.979 I1 + 0.989 I1² − 0.0073 I1³
F2(I2) = 3.9 + 27 I2 − 0.71 I2² + 0.0093 I2³
U = 139.5 J/(m² K s) ; F(h2) = k √(h2 − h0)

I1 : % current input to thyristor power controller
I2 : % current input to control valve

Example: Stirred Tank Heater-Mixer

[Figure: thyristor power controller characterization - heat supplied to water in tank 1 (process vs. predicted) against % current input to the power module. Figure: control valve CV-2 characterization - flow rate in ml/min (data vs. cubic fit) against % valve opening.]

Model Validation: Input Excitations

[Figure: input perturbations - heater input (mA) and valve two input (mA) versus sampling instant.]

Model Validation: Level Variations

[Figure: height (cm) versus sampling instant - simulation predictions compared with the measured output (validation data).]

Model Validation: Temperature Profiles

[Figure: tank 1 temperature (K) and tank 2 temperature (K) versus sampling instant - simulation predictions compared with the measured output.]

Dynamic Models for Control

• Linear perturbation models: regulatory operation around a fixed operating point of mildly nonlinear processes operated continuously. Developed using
  - local linearization of white/grey box models
  - identification from input-output data

Why use approximate linear models?
  - Linear control theory for controller synthesis and closed loop analysis is very well developed
  - For small perturbations near the operating point, processes exhibit linear dynamics

• Nonlinear dynamic models: strongly nonlinear systems, operation over a wide operating range, batch / semi-batch processes


Local Linearization


Given a lumped parameter model

dX/dt = F(X, U, D) ;  Y = G(X)

and a steady state operating point (Xs, Us, Ds) (subscript s denotes the steady state value), we apply a Taylor series expansion around (Xs, Us, Ds) to develop the linear perturbation model

dx/dt = Ax + Bu + Hd ;  y = Cx

Perturbation variables:
x(t) = X(t) − Xs ; y(t) = Y(t) − Ys ; u(t) = U(t) − Us ; d(t) = D(t) − Ds


where

A = [∂F/∂X] ; B = [∂F/∂U] ; H = [∂F/∂D] ; C = [∂G/∂X]

computed at the steady state (Xs, Us, Ds).

Transfer Function Matrix: obtained by taking the Laplace transform together with the assumption x(0) = 0 (i.e. the initial state of the process corresponds to the operating steady state):

y(s) = Gp(s) u(s) + Gd(s) d(s)
Gp(s) = C [sI − A]⁻¹ B ; Gd(s) = C [sI − A]⁻¹ H
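Where an analytical Jacobian is tedious, A and B can also be approximated numerically. A minimal Python sketch, assuming a simple illustrative two-state F (not one of the models in these slides):

```python
import numpy as np

# Hedged sketch: numerical linearization of dX/dt = F(X, U) around a
# steady state (Xs, Us) via central finite differences. The function F
# below is illustrative only.
def F(X, U):
    x1, x2 = X
    return np.array([-2.0 * x1 + x2 + U[0],
                     x1 - 3.0 * x2 + 0.5 * U[0]])

def jacobians(F, Xs, Us, eps=1e-6):
    n, m = len(Xs), len(Us)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):                      # A = dF/dX at (Xs, Us)
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (F(Xs + dx, Us) - F(Xs - dx, Us)) / (2 * eps)
    for j in range(m):                      # B = dF/dU at (Xs, Us)
        du = np.zeros(m); du[j] = eps
        B[:, j] = (F(Xs, Us + du) - F(Xs, Us - du)) / (2 * eps)
    return A, B

Xs = np.array([0.0, 0.0])
Us = np.array([0.0])
A, B = jacobians(F, Xs, Us)
```

For the CSTR example that follows, F would be the pair (f1, f2) and the Jacobians would be evaluated at the operating steady state.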

Perturbation Model for CSTR

Consider the non-isothermal CSTR dynamics

dCA/dt = f1(CA, T, F, Fc, CA0, Tcin)
dT/dt  = f2(CA, T, F, Fc, CA0, Tcin)

States (X) ≡ [CA  T]ᵀ
Measured output (Y) ≡ [T]
Manipulated inputs (U) ≡ [F  Fc]ᵀ   (feed flow rate, coolant flow rate)
Unmeasured disturbance (Du) ≡ [CA0]  (feed conc.)
Measured disturbance (Dm) ≡ [Tcin]   (cooling water temp.)

CSTR: Model Parameters and Steady State Operating Point

V (reactor volume) = 1 m³
F (inlet flow) = 1 m³/min
CA0 (inlet concentration of A) = 2.0 kmol/m³
T0 (inlet temperature) = 50 °C
Fc (coolant flow) = 15 m³/min
Cp (specific heat of reacting mixture) = 1 cal/(g K)
Tcin (coolant inlet temperature) = 92 °C
Cpc (specific heat of coolant) = 1 cal/(g K)
ρ (reacting liquid density) = 10⁶ g/m³
ρc (coolant density) = 10⁶ g/m³
−ΔHrxn (heat of reaction) = 130 × 10⁶ cal/kmol
a = 1.678 × 10⁶ cal/min ; b = 0.5 ; E/R = 8330.1 K

Operating steady state:
CA (concentration of A) = 0.265 kmol/m³
T (reactor temperature) = 121 °C

Discrete Dynamic Models

Computer control relevant discrete models:

x(k+1) = Φ x(k) + Γ u(k)
y(k) = C x(k)

Φ = exp(AT) ;  Γ = ∫₀ᵀ exp(Aτ) B dτ

Definition:
Φ = exp(AT) = I + TA + (T²/2!) A² + ...

Note: the assumption of piecewise constant inputs holds only for the manipulated inputs and NOT for the disturbances or any other input.


Transfer Function Matrix


q-Transfer Function Matrix: obtained together with the assumption x(0) = 0:

y(k) = Gp(q) u(k)
Gp(q) = C [qI − Φ]⁻¹ Γ

q : shift operator, q{f(k)} = f(k+1) ; q⁻¹{f(k)} = f(k−1)

Alternatively, taking the z-transform on both sides of the difference equation:

z x(z) − x(0) = Φ x(z) + Γ u(z)

When x(0) = 0:
x(z) = [zI − Φ]⁻¹ Γ u(z)
y(z) = C x(z) = C [zI − Φ]⁻¹ Γ u(z)
Gp(z) = C [zI − Φ]⁻¹ Γ : pulse transfer function

Computation of System Matrices

Method 1: Let A = Ψ Λ Ψ⁻¹, where Λ is a diagonal matrix with the eigenvalues on the main diagonal and Ψ is the matrix with the eigenvectors of A as columns. Then

Φ = Ψ exp(ΛT) Ψ⁻¹
Γ = Ψ [∫₀ᵀ exp(Λτ) dτ] Ψ⁻¹ B

Method 2: Φ(t) = exp(At) is the solution of the ODE-IVP

dΦ/dt = A Φ(t) ; Φ(0) = I

Taking the Laplace transform:
s Φ(s) − Φ(0) = A Φ(s) ⇒ Φ(s) = [sI − A]⁻¹ ⇒ Φ(T) = L⁻¹{[sI − A]⁻¹} evaluated at t = T

Γ = [∫₀ᵀ exp(Aτ) dτ] B

When A is an invertible matrix:
Γ = [exp(AT) − I] A⁻¹ B = [Φ − I] A⁻¹ B
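Method 1 together with the Γ = [Φ − I]A⁻¹B shortcut can be sketched as follows (illustrative A, B and T; assumes A is diagonalizable and invertible):

```python
import numpy as np

# Hedged sketch of Method 1: A = Ψ Λ Ψ⁻¹, Φ = Ψ exp(ΛT) Ψ⁻¹, and,
# since this A is invertible, Γ = (Φ − I) A⁻¹ B. Matrices illustrative.
A = np.array([[-2.0,  1.0],
              [ 0.0, -3.0]])
B = np.array([[1.0],
              [0.5]])
T = 0.1  # sampling interval

lam, Psi = np.linalg.eig(A)                       # eigen-decomposition of A
Phi = Psi @ np.diag(np.exp(lam * T)) @ np.linalg.inv(Psi)
Gamma = (Phi - np.eye(2)) @ np.linalg.inv(A) @ B
```

For this triangular A the result can be checked against the closed-form matrix exponential.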

CSTR: Continuous Perturbation Model

Continuous linear state space model:

x(t) = [CA(t) − CAs ; T(t) − Ts] ;  u(t) = [F(t) − Fs ; Fc(t) − Fcs]

dx/dt = [  -7.56   -0.09 ] x(t) + [  0        1.735 ] u(t)
        [ 852.72    5.77 ]        [ -6.07   -70.95  ]
y(t) = [0  1] x(t)

Laplace transfer function:

Gp(s) = [ (-6.07 s - 45.9)/(s² + 1.79 s + 35.83)   (-70.95 s + 943.5)/(s² + 1.79 s + 35.83) ]

Models for Computer Control

[Block diagram: computer controlled system / distributed digital control system. Manipulated inputs from the control computer pass through a digital-to-analog converter (DAC) to the process; measured outputs pass through an analog-to-digital converter (ADC) back to the control computer / DCS.]

Digital Control: Measured Outputs

Output measurements are available only at discrete sampling instants {tk = kT : k = 0, 1, 2, ...}, where T represents the sampling interval.

[Figure: continuous measurement from the process (left) and the sampled measurement sequence sent to the computer through the ADC (right), versus sampling instant.]

Digital Control: Manipulated Inputs

In computer controlled (digital) systems, manipulated inputs implemented through the DAC are piecewise constant:

u(t) = u(tk) ≡ u(k) for tk ≤ t < tk+1

[Figure: manipulated input sequence generated by the computer (left) and the continuous, piecewise constant input profile generated by the DAC (right), versus sampling instant.]

CSTR: Discrete Perturbation Model

Discrete linear state space model, sampling time T = 0.1 min:

x(k+1) = [  0.185  -0.008 ] x(k) + [  0.0026   0.134 ] u(k)
         [ 73.492   1.333 ]        [ -0.7335  -1.797 ]
y(k) = [0  1] x(k)

Discrete q-transfer function model:

Gp(q) = [ (-6.07 q - 45.9)/(q² + 1.79 q + 35.83)   (-70.95 q + 943.5)/(q² + 1.79 q + 35.83) ]

Gp(q⁻¹) = [ (-6.07 q⁻¹ - 45.9 q⁻²)/(1 + 1.79 q⁻¹ + 35.83 q⁻²)   (-70.95 q⁻¹ + 943.5 q⁻²)/(1 + 1.79 q⁻¹ + 35.83 q⁻²) ]

Black Box Models

Data driven / black box models: static maps (correlations) or dynamic models (difference equations) developed directly from historical input-output data.

• Valid over a limited operating range
• Provide no insight into the internal working of systems
• Development process: much less time consuming and comparatively easy

Black Box Models

Dynamic models: given observed data

Set of past inputs: U(k) = [u(1) u(2) ... u(k)]
Measured outputs: Y(k) = [y(1) y(2) ... y(k)]

we are looking for a relationship

y(k) = Ω(U(k−1), Y(k−1), θ) + e(k)

such that the noise (residuals) e(k) are as small as possible. θ ∈ R^d represents the parameter vector.

Tools for Black Box Modeling


• Linear difference equation (time series) models
• Principal component analysis (PCA) / projection to latent structures (PLS): statistical models based on linear correlation analysis of historical data
• Artificial neural networks / wavelet networks: excellent for capturing arbitrary nonlinear maps
• Fuzzy rule based models: quantification of qualitative process knowledge


Steps in Model Development

• Selection of model structure
• Planning of experiments for estimation of unknown model parameters
  - Design of input perturbation sequences
  - Open loop / closed loop experimentation
• Estimation of model parameters from experimental data using optimization techniques
• Model validation
  - Prediction capabilities
  - Steady state behavior

Model Structure Selection

Issues in model selection:
• Process application (batch / continuous)
• Time scale of operation
• Type of application (scheduling / optimization / MPC / fault diagnosis)
• Availability of physical knowledge / historical data
• Development time and effort

Model granularity decides how well we can make control / planning moves or diagnose / analyze process behavior.

Data Driven Models


Development of linear state space / transfer function models starting from first principles / grey box models is often an impractical proposition.

Practical approach:
• Conduct experiments by perturbing the process around the operating point
• Collect input-output data
• Fit a differential equation or difference equation model

Difficulties:
• Measurements are inaccurate
• The process is influenced by unknown disturbances
• Models are approximate

Discrete Model Development

Excite the plant around the desired operating point by injecting input perturbations.

[Figure: input excitation for model identification (manipulated input versus sampling instant) applied to the process, which is also affected by measurement noise and unmeasured disturbances; measured output response versus sampling instant.]

CSTR: Input Excitation

PRBS: Pseudo Random Binary Signal

[Figure: manipulated input excitations - coolant flow and reactant inflow PRBS sequences versus sampling instant.]
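A PRBS is commonly generated with a linear feedback shift register. A sketch, assuming a 4-bit maximum-length register (taps 4 and 3) switching between two illustrative levels:

```python
import numpy as np

# Hedged sketch: maximum-length PRBS from a 4-bit LFSR (polynomial
# x^4 + x^3 + 1, period 15). Levels and seed are illustrative.
def prbs(n_samples, low=-1.0, high=1.0, seed=0b1001):
    reg, out = seed, []
    for _ in range(n_samples):
        bit = ((reg >> 3) ^ (reg >> 2)) & 1       # XOR of taps 4 and 3
        reg = ((reg << 1) | bit) & 0b1111         # shift, keep 4 bits
        out.append(high if bit else low)
    return np.array(out)

u = prbs(30)
```

In practice the switching interval and amplitude are chosen so the PRBS excites the plant's dominant time constants without driving it far from the operating point.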

CSTR: Identification Experiments

[Figure: effect of inlet concentration fluctuations on reactor temperature - inlet conc. (mol/m³) and measured temperature (K) versus sampling instant; continuous line: data with noise, dotted line: data without noise.]

CSTR: Noise Component

[Figure: unmeasured disturbance component in the measured outputs - v(k) = y(k) − G(q) u(k) (deg K) versus sampling instant.]

Two Non-Interacting Tanks Setup

SISO system
Output: level in tank 2
Manipulated input: valve position CV-2
Disturbance: valve position CV-1

[Schematic: non-interacting tank level control setup - Tank 1 and Tank 2 with level transmitter LT, control valves CV1 and CV2, Pump 1, Pump 2, and sump.]

Input Output Data

[Figure: raw input and output signals - output (mA) and input (mA) versus time.]

Perturbation Data for Identification

[Figure: input and output signals with the mean values removed from the input and output data, versus time.]

Impulse Response Model

Consider the transfer function y(s) = g(s) u(s). With an impulse input,

y_impulse(t) = g(t) = L⁻¹[g(s)]

Convolution integral:

y(t) = ∫₀^∞ g(τ) u(t − τ) dτ

For piecewise constant inputs:

y(kT) = [∫₀ᵀ g(τ) dτ] u[(k−1)T] + [∫ᵀ²ᵀ g(τ) dτ] u[(k−2)T] + ...

y(k) = Σ_{j=1}^{∞} g_j u(k − j)

Impulse response coefficients: g_j = ∫_{(j−1)T}^{jT} g(τ) dτ

Impulse Response Model

y(k) = Σ_{j=1}^{∞} g_j u(k − j) = Σ_{j=1}^{∞} g_j q⁻ʲ u(k)

Defining the transfer operator G(q) = Σ_{j=1}^{∞} g_j q⁻ʲ:

y(k) = G(q) u(k)

• The current output y(k) is viewed as a weighted sum of all past input moves.
• The impulse response coefficients determine the weighting of each past move.
• G(q) is open loop BIBO stable if Σ_{j=1}^{∞} |g_j| < ∞
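The truncated convolution sum can be sketched directly (illustrative first-order impulse response coefficients):

```python
import numpy as np

# Hedged sketch: simulate y(k) = Σ_j g_j u(k−j) for a truncated
# impulse response. g_j = 0.5 * 0.8^j is illustrative only.
g = 0.5 * 0.8 ** np.arange(1, 31)          # g_1 .. g_30
u = np.ones(50)                            # unit step input
y = np.array([sum(g[j] * u[k - j - 1]
                  for j in range(len(g)) if k - j - 1 >= 0)
              for k in range(50)])
# y settles near the sum of the coefficients (the steady-state gain)
```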


Discrete Model Forms


Finite Impulse Response (FIR) Model

For open loop stable systems, g_k → 0 as k → ∞:

y(k) ≅ Σ_{j=1}^{N} g_j u(k − j)

Discrete Transfer Function Model

y(q⁻¹)/u(q⁻¹) = (b₁q⁻¹ + b₂q⁻² + ... + bₙq⁻ⁿ) / (1 + a₁q⁻¹ + ... + aₙq⁻ⁿ)

Example:

y(q⁻¹)/u(q⁻¹) = (b₁q⁻¹ + b₂q⁻²) / (1 + a₁q⁻¹ + a₂q⁻²)

which is equivalent to

y(k) = −a₁ y(k−1) − a₂ y(k−2) + b₁ u(k−1) + b₂ u(k−2)


Output Error (OE) Model

Data collected through experiments:

Set of N output measurements: Y_N ≡ {y(0), y(1), y(2), ..., y(N)}
Set of input sequence: U_N ≡ {u(0), u(1), u(2), ..., u(N)}

Output / measurement error model:

y(k) = G(q) u(k) + v(k)

where G(q)u(k) is the deterministic component, y(k) is the measured value of the output, and v(k) is the residue: unmeasured disturbances + measurement noise.

Estimation of FIR Model


Consider an FIR model with n coefficients:

y(k) = g₁ u(k−1) + ... + gₙ u(k−n) + v(k)

Using experimental data we can write

y(n)   = g₁ u(n−1) + ... + gₙ u(0) + v(n)
y(n+1) = g₁ u(n)   + ... + gₙ u(1) + v(n+1)
...
y(N)   = g₁ u(N−1) + ... + gₙ u(N−n) + v(N)

Arranging in matrix form:

[ y(n)   ]   [ u(n−1)  u(n−2)  ..  u(0)   ] [ g₁ ]   [ v(n)   ]
[ y(n+1) ] = [ u(n)    u(n−1)  ..  u(1)   ] [ g₂ ] + [ v(n+1) ]
[ ...    ]   [ ..      ..      ..  ..     ] [ .. ]   [ ...    ]
[ y(N)   ]   [ u(N−1)  ..      ..  u(N−n) ] [ gₙ ]   [ v(N)   ]

Least Square Estimation

The resulting model is linear in parameters:

Y = Aθ + V

Least square parameter estimation:

θ̂ = min_θ VᵀV = min_θ [Y − Aθ]ᵀ[Y − Aθ]

θ̂ = [AᵀA]⁻¹ AᵀY

Let the noise sequence {v(k)} have zero mean and let θ_T represent the true value of the parameter vector, i.e. Y = Aθ_T + V. Then

θ̂ = [AᵀA]⁻¹ Aᵀ[Aθ_T + V] = θ_T + [AᵀA]⁻¹ AᵀV

E[θ̂] = θ_T + [AᵀA]⁻¹ Aᵀ E[V] = θ_T
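The FIR least squares estimate θ̂ = [AᵀA]⁻¹AᵀY can be sketched end-to-end on simulated data (illustrative g_true and noise level):

```python
import numpy as np

# Hedged sketch: estimate FIR coefficients g_1..g_n by least squares,
# Y = A θ + V, from data simulated with known (illustrative) g_true.
rng = np.random.default_rng(0)
n, N = 10, 400
g_true = 0.5 * 0.8 ** np.arange(1, n + 1)
u = rng.standard_normal(N)
y = np.array([sum(g_true[j] * u[k - j - 1]
                  for j in range(n) if k - j - 1 >= 0)
              for k in range(N)])
y += 0.01 * rng.standard_normal(N)             # measurement noise v(k)

# Regressor matrix: row for sample k holds [u(k-1), ..., u(k-n)]
A = np.array([[u[k - j - 1] for j in range(n)] for k in range(n, N)])
Y = y[n:N]
theta_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
```

With white input excitation and mild noise, theta_hat recovers g_true closely, consistent with the unbiasedness argument above.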

Estimated FIR Model

[Figure: impulse response coefficients - comparison of the impulse responses of FIR (69 coeff), ARMAX(2,2,2,1), ARX(6,6,1) and OE(2,2,1) models.]

FIR Model Fit

[Figure: plant output and model fit y(k), and residual v(k), versus sampling time.]

Estimated Step Response

The step response can be estimated from the impulse response coefficients:

Unit step response coefficients: a_i = Σ_{j=1}^{i} g_j

[Figure: estimated unit step responses y(k) versus time; the zoomed view shows the unit delay.]
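Since a_i is a running sum of the impulse response coefficients, the estimated step response is just a cumulative sum (illustrative g):

```python
import numpy as np

# Hedged sketch: unit step response coefficients a_i = g_1 + ... + g_i
# as the cumulative sum of (illustrative) impulse response coefficients.
g = 0.5 * 0.8 ** np.arange(1, 31)
a = np.cumsum(g)                 # a[i-1] = g_1 + ... + g_i
```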

Features of Estimation

Thus, the least square estimation generates an unbiased estimate of the model parameters when E[V] = 0.

If {v(k)} is a white noise sequence with variance σ², then

Cov[V] = E[VVᵀ] = σ²I

Cov[θ̂] = E[(θ̂ − θ_T)(θ̂ − θ_T)ᵀ] = [AᵀA]⁻¹ Aᵀ E[VVᵀ] A [AᵀA]⁻¹ = σ²[AᵀA]⁻¹

σ² can be estimated as

σ̂² = (1/N) V̂ᵀV̂ = (1/N) (Y − Aθ̂)ᵀ(Y − Aθ̂)

Thus, the estimated parameter covariance matrix is

Ĉov[θ̂] = (1/(N − n)) (V̂ᵀV̂) [AᵀA]⁻¹
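A sketch of the covariance computation on illustrative regression data; note it uses the 1/(N − n) normalization of the last formula:

```python
import numpy as np

# Hedged sketch: estimated parameter covariance σ̂²[AᵀA]⁻¹ for a
# linear least squares fit Y = Aθ + V. Data are illustrative.
rng = np.random.default_rng(5)
A = rng.standard_normal((200, 3))
theta_true = np.array([1.0, -2.0, 0.5])
Y = A @ theta_true + 0.1 * rng.standard_normal(200)   # noise std = 0.1

theta_hat, *_ = np.linalg.lstsq(A, Y, rcond=None)
V_hat = Y - A @ theta_hat                             # residuals
N, n = A.shape
sigma2_hat = (V_hat @ V_hat) / (N - n)                # noise variance estimate
cov_theta = sigma2_hat * np.linalg.inv(A.T @ A)       # parameter covariance
```

The square roots of the diagonal of cov_theta give the parameter standard deviations reported in the ARX variance table later in the deck.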


Difficulties with FIR Model


Advantages: the method can be easily extended to the multiple input case.

Difficulty: variance errors in the FIR model parameters,

var(g_i) ∝ 1/(N − n)

The variances of the parameter estimates can be reduced by increasing the data length N.

Disadvantages:
• Large number of parameters for the MIMO case
• Large data set required to get good parameter estimates, which implies a long time for experimentation

Alternate model form (output error):

y(k) = [(b₁q⁻¹ + ... + bₙq⁻ⁿ)/(1 + a₁q⁻¹ + ... + aₙq⁻ⁿ)] q⁻ᵈ u(k) + v(k)

Parameterized OE model


The two tank system under consideration is expected to have second order dynamics:

x(s) = [kp / ((τ₁s + 1)(τ₂s + 1))] u(s)

which is equivalent to the 2nd order discrete time model

x(k) = [(b₁q⁻¹ + b₂q⁻²)/(1 + a₁q⁻¹ + a₂q⁻²)] u(k)

Since the time delay (dead time) was found to be d = 1:

x(k) = [(b₁q⁻¹ + b₂q⁻²)/(1 + a₁q⁻¹ + a₂q⁻²)] q⁻¹ u(k)

which is equivalent to

x(k) = −a₁ x(k−1) − a₂ x(k−2) + b₁ u(k−2) + b₂ u(k−3)
y(k) = x(k) + v(k)

Parameter Estimation


x(k) : true value ; y(k) : measured value ; v(k) : measurement noise / disturbance

Difficulty: only the {y(k)} sequence is known; the sequence {x(k)} is unknown.
Consequence: the linear least square method cannot be used for parameter estimation.

Given (a₁, a₂, b₁, b₂, x(1), x(2)) and d = 1, we can recursively estimate x(k) as

x(3) = −a₁x(2) − a₂x(1) + b₁u(1) + b₂u(0)
x(4) = −a₁x(3) − a₂x(2) + b₁u(2) + b₂u(1)
...
x(N) = −a₁x(N−1) − a₂x(N−2) + b₁u(N−2) + b₂u(N−3)

v(k) = y(k) − x(k) for k = 3, 4, ..., N

Parameter Estimation

Nonlinear Optimization Problem

Estimate (a₁, a₂, b₁, b₂, x(1), x(2)) such that

Ψ[v(3), ..., v(N)] = Σ_{k=3}^{N} [v(k)]²

is minimized with respect to (a₁, a₂, b₁, b₂, x(1), x(2)), where

v(k) = y(k) − x(k)
x(k) = −a₁x(k−1) − a₂x(k−2) + b₁u(k−2) + b₂u(k−3)

Simplification: choose x(0) = x(1) = 0.

Identified model parameters:

y(k) = [B(q)/A(q)] u(k) + v(k)
B(q) = 4.567e-006 q⁻² + 0.01269 q⁻³
A(q) = 1 − 1.653 q⁻¹ + 0.6841 q⁻²

OE Model

[Figure: OE(2,2,2) measured and simulated outputs y(k), and residual v(k), versus sampling instant.]

Stochastic Process


A discrete stochastic process can be regarded as a family of stochastic variables {v(k) : k = ..., −1, 0, 1, ...}, where the index k represents sampling instants. A random process may be considered a function of two variables, v(k, ω). For a fixed ω, the function v(·, ω) is an ordinary time function called a 'realization' of the stochastic process. For a fixed k = k₀, v(k₀, ·) is a random variable.

Probability distribution function of a random process:

F_v(α, k) = Pr{v(k) ≤ α} = ∫_{−∞}^{α} f(ν, k) dν

The mean of a stochastic process is a time varying function:

μ_v(k) = E{v(k, ω)} = ∫_{−∞}^{∞} ν f(ν, k) dν

Stochastic Processes


Auto-correlation of a stochastic process:

r_vv(k, t) = E{v(k) v(t)}

The auto-correlation function quantifies the dependence of v(·) at one time on its values at another time.

Weakly stationary random process:
• Mean is independent of time: E{v(k)} = μ_v
• The auto-correlation function depends solely on (k − t): r_vv(k, t) = r_vv(k − t)

Cross-correlation of stochastic processes {v(k)} and {w(k)}:

r_vw(k, t) = E{v(k) w(t)}

It quantifies the correlation between two random processes at two different instants of time.

Computing Statistics

For a stationary random process, sample statistics can be estimated using values of the signal in time.

Sample mean:

μ̂_v = (1/N) Σ_{k=1}^{N} v(k)

Sample auto-correlation:

R̂_v(τ) = (1/(N − τ)) Σ_{k=τ}^{N} v(k) v(k − τ)

Sample auto-covariance:

cov[v(k), v(k − τ)] = (1/(N − τ)) Σ_{k=τ}^{N} [v(k) − μ̂_v][v(k − τ) − μ̂_v]
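The sample estimators above can be sketched as follows (illustrative white-noise signal):

```python
import numpy as np

# Hedged sketch: sample mean and sample autocorrelation following the
# slide's estimators (normalization by N − τ). Signal is illustrative.
def sample_autocorr(v, tau):
    N = len(v)
    return sum(v[k] * v[k - tau] for k in range(tau, N)) / (N - tau)

rng = np.random.default_rng(1)
v = rng.standard_normal(2000)          # approximately white, unit variance
mu_hat = v.mean()
r0 = sample_autocorr(v, 0)             # ≈ variance of v
r5 = sample_autocorr(v, 5)             # ≈ 0 for white noise
```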


For two stationary random processes {v(k)} and {w(k)}, the sample cross-correlation can be estimated as

R̂_vw(τ) = (1/(N − τ)) Σ_{k=τ}^{N} v(k) w(k − τ)

and the sample cross-covariance as

cov[v(k), w(k − τ)] = (1/(N − τ − 1)) Σ_{k=τ}^{N} [v(k) − μ̂_v][w(k − τ) − μ̂_w]

Cross-correlation quantifies the time dependence of two stochastic processes.

OE Model: Autocorrelation

[Figure: sample autocorrelation of the OE(2,2,2) residuals versus lag, with the 95% confidence interval.]

Equation Error Model


A discrete linear model which captures the effect of past unmeasured disturbances can be proposed as

y(k) = b₁u(k−d−1) + ... + b_m u(k−d−m) − a₁y(k−1) − ... − a_n y(k−n) + e(k)

d : time delay / dead time

How many past outputs do we include in the model? We can choose n such that
• the error e(k) becomes uncorrelated with y(k) and contains no information about past disturbances
• the error e(k) is like a random variable uncorrelated with e(k−1), e(k−2), ...

How do we mathematically state the above requirement?

Unmeasured Disturbance Modeling

The measured output y(k) contains contributions due to
• measurement errors (noise)
• unmeasured disturbances

In addition, modeling (equation) errors arise while developing approximate linear perturbation models.

Thus, in order to extract the true model parameters from the data, we need to carry out modeling of the unmeasured disturbances (or noise). Noise is modeled as a stochastic process (a sequence of random variables correlated in time).

Noise Modeling


y(k) = G(q) u(k) + v(k)

where G(q)u(k) is the deterministic component and v(k) is the residue: unmeasured disturbances + measurement noise.

Note: information about past unmeasured disturbances is contained in the output measurement record. Thus, an obvious choice of model structure is

y(k) = f[u(k−1), ..., u(k−m), y(k−1), ..., y(k−p)] + e(k)

White Noise

Let us define the auto-correlation of a random process {e(k) : k = 1, 2, ...} as

r_ee(τ) = cov[e(k), e(k − τ)] = lim_{N→∞} (1/(N − τ)) Σ_{k=τ}^{N} e(k) e(k − τ)

The equation error sequence e(k) in an ARX model should be an independent, identically distributed random sequence, i.e.

r_ee(τ) = σ² for τ = 0
r_ee(τ) = 0 for τ = ±1, ±2, ...

Such a sequence is called discrete time white noise.


Example: White Noise

[Figure: experimental data from the heater-mixer setup - temperature (deg C) versus sampling instant, showing the measurement and its mean value.]

Mean = 27.33 °C
Variance = 0.633 (°C)²

Measurement Errors: Histogram

[Figure: histogram of measurement errors - number of samples versus measurement error.]

White Noise: Autocorrelation

[Figure: sample autocorrelation function (ACF) of the measurement error sequence versus lag.]

ARX Model Development

Consider a 2nd order ARX model with d = 1:

y(k) = −a₁y(k−1) − a₂y(k−2) + b₁u(k−2) + b₂u(k−3) + e(k)

Advantages:
• Sequences {y(k)} and {u(k)} are known
• The model is linear in parameters - the optimum can be computed analytically

We can recursively estimate ŷ(k) as

ŷ(3) = −a₁y(2) − a₂y(1) + b₁u(1) + b₂u(0)
ŷ(4) = −a₁y(3) − a₂y(2) + b₁u(2) + b₂u(1)
...
ŷ(N) = −a₁y(N−1) − a₂y(N−2) + b₁u(N−2) + b₂u(N−3)

e(k) = y(k) − ŷ(k) for k = 3, 4, ..., N

ARX : Parameter Identification


Arranging in matrix form:

[ y(3) ]   [ −y(2)    −y(1)    u(1)    u(0)   ] [ a₁ ]   [ e(3) ]
[ y(4) ] = [ −y(3)    −y(2)    u(2)    u(1)   ] [ a₂ ] + [ e(4) ]
[ ...  ]   [ ..       ..       ..      ..     ] [ b₁ ]   [ ...  ]
[ y(N) ]   [ −y(N−1)  −y(N−2)  u(N−2)  u(N−3) ] [ b₂ ]   [ e(N) ]

The resulting model is linear in parameters: Y = Aθ + e

Least square parameter estimation:

θ̂ = min_θ eᵀe = min_θ [Y − Aθ]ᵀ[Y − Aθ]

θ̂ = [AᵀA]⁻¹ AᵀY

Choose the model order n such that the sequence {e(k)} becomes white noise.
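The ARX regression and its least squares solution can be sketched on simulated data (illustrative true parameters and noise level):

```python
import numpy as np

# Hedged sketch: 2nd order ARX estimation with delay d = 1, as above:
# y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-2) + b2 u(k-3) + e(k).
rng = np.random.default_rng(3)
a1, a2, b1, b2 = -1.2, 0.5, 0.3, 0.1      # illustrative true parameters
N = 1000
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(3, N):
    y[k] = (-a1 * y[k - 1] - a2 * y[k - 2]
            + b1 * u[k - 2] + b2 * u[k - 3]
            + 0.01 * rng.standard_normal())   # white equation error e(k)

# Regressor rows: [-y(k-1), -y(k-2), u(k-2), u(k-3)]
A = np.array([[-y[k - 1], -y[k - 2], u[k - 2], u[k - 3]]
              for k in range(3, N)])
theta_hat, *_ = np.linalg.lstsq(A, y[3:], rcond=None)
```

Because the equation error is white, the linear least squares estimate is consistent, unlike the OE case which required nonlinear optimization.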


ARX: Order Selection

[Figure: objective function value versus model order, with time delay d = 1.]

Automation Lab IIT Bombay

Sample Autocorrelation Sample Autocorrelation

ARX: Order Selection 1

Auto Cotrelations for ARX(2,2,2)

0.5 0 -0.5 0 1

5 10 15 Auto Cotrelations for ARX (4,4,2)

20

Model Residuals are not white

0.5 0 -0.5 0

1/9/2007

5

10 Lag

15

System Identification

20

74

37

Automation Lab IIT Bombay

ARX: Order Selection Auto Cotrelation for ARX(6,6,2)

Sample Autocorrelation

0.8 0.6

Model Residuals white

0.4 0.2 0 -0.2 0

5

10 Lag

1/9/2007

15

20

System Identification

75
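The whiteness check behind this order-selection procedure can be sketched as follows: compute the sample autocorrelation of the residuals and compare it against the ±1.96/√N confidence band for a white sequence. The residual sequence below is assumed for illustration.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Normalized sample autocorrelation r(tau)/r(0) for tau = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    r0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[tau:], x[:len(x) - tau]) / len(x) / r0
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(1)
resid = rng.standard_normal(2000)     # residuals from a well-fitted model
acf = sample_acf(resid, 20)
band = 1.96 / np.sqrt(len(resid))     # 95% band for a white sequence

# Residuals are treated as white if almost all lags tau >= 1 stay in the band
outside = np.sum(np.abs(acf[1:]) > band)
```

For a white sequence, roughly 1 in 20 lags is expected to fall just outside the band; systematic excursions indicate an under-parameterized model.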

ARX: Identification Results

[Figure: ARX(6,6,2) measured and simulated output y(k) and residuals e(k) over 250 samples.]

6th-Order ARX Model

Identified ARX model parameters, A(q)y(k) = B(q)u(k) + e(k):

A(q) = 1 - 0.8135 q⁻¹ - 0.1949 q⁻² - 0.07831 q⁻³ + 0.1107 q⁻⁴ + 0.03542 q⁻⁵ + 0.01755 q⁻⁶
B(q) = 0.00104 q⁻² + 0.013 q⁻³ + 0.01176 q⁻⁴ + 0.004681 q⁻⁵ + 0.002472 q⁻⁶ + 0.002197 q⁻⁷

Error statistics:
Estimated mean: E{e(k)} = 4.8813 × 10⁻³
Estimated variance: λ̂² = 2.5496 × 10⁻⁴

{e(k)} is practically a zero-mean white noise sequence.

ARX: Estimated Parameter Variances

Parameter   Value     σ̂         Parameter   Value     σ̂
a1          -0.8135   0.0674     b1          0.0010    0.0009
a2          -0.1949   0.0868     b2          0.0130    0.0011
a3          -0.0783   0.0863     b3          0.0118    0.0014
a4           0.1107   0.0863     b4          0.0047    0.0015
a5           0.0354   0.0871     b5          0.0025    0.0015
a6           0.0175   0.0484     b6          0.0022    0.0013

ARX Model

Auto-Regressive with eXogenous input (ARX):

y(k) = b1 u(k-1) + ... + bm u(k-m) - a1 y(k-1) - ... - an y(k-n) + e(k)

Using the shift operator q, the ARX model can be expressed as

y(k) = [B(q⁻¹)/A(q⁻¹)] q⁻ᵈ u(k) + [1/A(q⁻¹)] e(k)

A(q⁻¹) = 1 + a1 q⁻¹ + ... + an q⁻ⁿ
B(q⁻¹) = b1 q⁻¹ + ... + bm q⁻ᵐ

where e(k) is a white noise sequence and 1/A(q⁻¹) is the implied noise model.

Disadvantage: a large model order is required to get white residuals.

Noise Models

v(k) = [1/A(q⁻¹)] e(k)

e(k): zero-mean white noise process with variance λ²

Auto-Regressive (AR) model:

v(k) = -a1 v(k-1) - ... - an v(k-n) + e(k)

Alternatively, if the poles of A(q) are inside the unit circle, then by long division

1/A(q⁻¹) = 1 + h1 q⁻¹ + h2 q⁻² + ... = H(q⁻¹)

v(k) = H(q) e(k) = Σ (i = 0 to ∞) hi e(k-i)

Moving Average (MA) process:

v(k) = e(k) + h1 e(k-1) + ... + hn e(k-n)

ARMA Model

AR and MA models can be combined to formulate a more general ARMA model:

v(k) = -a1 v(k-1) - ... - an v(k-n) + e(k) + c1 e(k-1) + ... + cm e(k-m)

or

v(k) = [C(q⁻¹)/A(q⁻¹)] e(k)

e(k): zero-mean white noise process with variance λ²

Advantage: parsimonious in parameters (significantly fewer model parameters are required than for AR or MA models to capture the noise characteristics).

If the poles of A(q) are inside the unit circle, then by long division

C(q⁻¹)/A(q⁻¹) = 1 + h1 q⁻¹ + h2 q⁻² + ... = H(q⁻¹)

Example: Colored Noise

v(k) = [(0.8 q⁻¹ - 0.4 q⁻²) / (1 - 1.5 q⁻¹ + 0.8 q⁻²)] e(k)

Properties of e(k): Mean = 0, Variance = 1

[Figure: white noise passing through the filter; filter output over 200 sampling instants.]
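The filter above corresponds to the difference equation v(k) = 1.5 v(k-1) - 0.8 v(k-2) + 0.8 e(k-1) - 0.4 e(k-2), which can be simulated directly. A minimal sketch (the random seed and sample length are assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5000
e = rng.standard_normal(N)     # zero-mean, unit-variance white noise
v = np.zeros(N)
for k in range(2, N):
    # v(k) = 1.5 v(k-1) - 0.8 v(k-2) + 0.8 e(k-1) - 0.4 e(k-2)
    v[k] = 1.5*v[k-1] - 0.8*v[k-2] + 0.8*e[k-1] - 0.4*e[k-2]

# Colored noise: successive samples are strongly correlated
rho1 = np.mean(v[1:] * v[:-1]) / np.var(v)
```

The poles of the denominator lie at radius √0.8 ≈ 0.894, inside the unit circle, so the recursion is stable and {v(k)} settles into a stationary colored sequence.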

Colored Noise: Autocorrelation

[Figure: sample autocorrelation function (ACF) of the colored noise for lags 0-30.]

Parameterized Models

ARMAX: Auto-Regressive Moving Average with eXogenous input:

y(k) = b1 u(k-d-1) + ... + bm u(k-d-m) - a1 y(k-1) - ... - an y(k-n) + e(k) + c1 e(k-1) + ... + cr e(k-r)

or

y(k) = [B(q⁻¹)/A(q⁻¹)] q⁻ᵈ u(k) + [C(q⁻¹)/A(q⁻¹)] e(k)

Box-Jenkins (BJ) model, the most general representation of time series models:

y(k) = [B(q⁻¹)/A(q⁻¹)] q⁻ᵈ u(k) + [C(q⁻¹)/D(q⁻¹)] e(k)

{e(k)} is a white noise sequence in both cases.

Parameter Identification Problem

Given input-output data collected from the plant,

Yᴺ ≡ {y(k) : y(0), y(1), y(2), ..., y(N)}
Uᴺ ≡ {u(k) : u(0), u(1), u(2), ..., u(N)}

choose a suitable model structure for the time series model and estimate the parameters of the model (the coefficients of the A(q), B(q), C(q) polynomials) such that some objective function of the residual sequence e(k),

Ψ[e(0), e(1), ..., e(N)],

is minimized. The resulting residual sequence {e(k)} should be a white noise sequence.

ARMAX: One Step Prediction


Consider a 2nd-order ARMAX model with d = 1:

y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-2) + b2 u(k-3) + e(k) + c1 e(k-1) + c2 e(k-2)

y(k) = [(b1 q⁻² + b2 q⁻³)/(1 + a1 q⁻¹ + a2 q⁻²)] u(k) + [(1 + c1 q⁻¹ + c2 q⁻²)/(1 + a1 q⁻¹ + a2 q⁻²)] e(k)

Difficulties:
- The sequences {y(k)} and {u(k)} are known, but {e(k)} is unknown
- The model is nonlinear in the parameters, so the optimum cannot be computed analytically

Solution strategy: the problem is solved numerically using nonlinear optimization procedures.

Inevitability of Noise Model

Crucial property of the noise model: the noise model and its inverse are stable, i.e., all its poles and zeros are inside the unit circle.

v(k) = H(q) e(k) = Σ (i = 0 to ∞) hi e(k-i),   H(q) stable: Σ (i = 0 to ∞) |hi| < ∞

e(k) = H⁻¹(q) v(k) = Σ (i = 0 to ∞) h̃i v(k-i),   H⁻¹(q) stable: Σ (i = 0 to ∞) |h̃i| < ∞

The key problem in identification is to find such an H(q) and a white noise sequence {e(k)}.

Note: H(q) is always a 'monic' polynomial, i.e., h0 = 1.

Example: A Moving Average Process


Consider a first-order MA process

v(k) = e(k) + c e(k-1)

where {e(k)} is a white noise sequence, i.e.,

H(q) = 1 + c q⁻¹ = (q + c)/q has a pole at q = 0 and a zero at q = -c.

Then

H⁻¹(q) = 1/(1 + c q⁻¹) = Σ (i = 0 to ∞) (-c)ⁱ q⁻ⁱ   if |c| < 1

and e(k) can be recovered from measurements of v(k):

e(k) = Σ (i = 0 to ∞) (-c)ⁱ v(k-i)

Inversion of the noise model plays a crucial role in the model identification procedure.
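This inversion can be checked numerically: the recursion e(k) = v(k) - c e(k-1) implements H⁻¹(q) exactly. An illustrative sketch with an assumed value of c and simulated data:

```python
import numpy as np

c = 0.6                         # |c| < 1, so the inverse is stable
rng = np.random.default_rng(3)
N = 1000
e = rng.standard_normal(N)

# MA(1) process v(k) = e(k) + c e(k-1)
v = e.copy()
v[1:] += c * e[:-1]

# Recover e(k) from v(k) with the stable inverse: e(k) = v(k) - c e(k-1)
e_rec = np.zeros(N)
e_rec[0] = v[0]
for k in range(1, N):
    e_rec[k] = v[k] - c * e_rec[k-1]
```

For |c| > 1 the same recursion diverges, which is why all zeros of H(q) must lie inside the unit circle.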

One Step Prediction

Suppose we have observed v(t) up to t ≤ (k-1) and we want to predict v(k) based on measurements up to time (k-1):

v(k) = Σ (i = 0 to ∞) hi e(k-i) = e(k) + Σ (i = 1 to ∞) hi e(k-i)

v̂(k|k-1) = Σ (i = 1 to ∞) hi e(k-i),   as e(k) has zero mean

v̂(k|k-1): conditional expectation of v(k) based on information up to (k-1)

v̂(k|k-1) = v(k) - e(k) = [H(q) - 1] e(k) = {[H(q) - 1]/H(q)} v(k)

v̂(k|k-1) = [1 - H⁻¹(q)] v(k) = Σ (i = 1 to ∞) (-h̃i) v(k-i)

One Step Output Prediction

Suppose we have observed y(t) and u(t) up to t ≤ (k-1), we have y(k) = G(q)u(k) + v(k), and we want to predict y(k) based on information up to time (k-1):

ŷ(k|k-1) = G(q)u(k) + v̂(k|k-1) = G(q)u(k) + [1 - H⁻¹(q)] v(k)

However, v(k) = y(k) - G(q)u(k), so

ŷ(k|k-1) = G(q)u(k) + [1 - H⁻¹(q)][y(k) - G(q)u(k)]

Rearranging, we have

ŷ(k|k-1) = H⁻¹(q)G(q)u(k) + [1 - H⁻¹(q)] y(k)

or

H(q) ŷ(k|k-1) = G(q)u(k) + [H(q) - 1] y(k)

ARX: One Step Predictor

Consider a 2nd-order ARX model with d = 1:

y(k) = [(b1 q⁻² + b2 q⁻³)/(1 + a1 q⁻¹ + a2 q⁻²)] u(k) + [1/(1 + a1 q⁻¹ + a2 q⁻²)] e(k)

The one-step-ahead predictor for this model is

ŷ(k|k-1) = [b1 q⁻² + b2 q⁻³] u(k) + [-a1 q⁻¹ - a2 q⁻²] y(k)

which is equivalent to the difference equation

ŷ(k|k-1) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-2) + b2 u(k-3)

Advantage: all terms on the RHS are known. The residual at the k-th instant can be estimated as

e(k) = y(k) - ŷ(k|k-1)

ARMAX: One Step Predictor


Consider a 2nd-order ARMAX model with d = 1:

y(k) = [(b1 q⁻² + b2 q⁻³)/(1 + a1 q⁻¹ + a2 q⁻²)] u(k) + [(1 + c1 q⁻¹ + c2 q⁻²)/(1 + a1 q⁻¹ + a2 q⁻²)] e(k)

The one-step-ahead predictor for this model is

ŷ(k|k-1) = [(b1 q⁻² + b2 q⁻³)/(1 + c1 q⁻¹ + c2 q⁻²)] u(k) + [((c1 - a1) q⁻¹ + (c2 - a2) q⁻²)/(1 + c1 q⁻¹ + c2 q⁻²)] y(k)

which is equivalent to the difference equation

ŷ(k|k-1) = -c1 ŷ(k-1|k-2) - c2 ŷ(k-2|k-3) + b1 u(k-2) + b2 u(k-3) + (c1 - a1) y(k-1) + (c2 - a2) y(k-2)

The residual at the k-th instant can be estimated as

ε(k) = y(k) - ŷ(k|k-1)

ARMAX: One Step Predictor

Alternatively, using the residuals at previous instants,

ε(k-1) = y(k-1) - ŷ(k-1|k-2)
ε(k-2) = y(k-2) - ŷ(k-2|k-3)

we can rearrange the one-step predictor as

ŷ(k|k-1) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-2) + b2 u(k-3) + c1 ε(k-1) + c2 ε(k-2)

ε(k) = y(k) - ŷ(k|k-1)

We can start the prediction with the initial guesses ε(0) = ε(1) = 0 and, given the model parameters (a1, a2, b1, b2, c1, c2), generate the sequence {ε(k)} using the sequences {y(k)} and {u(k)}.
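The recursion above is easy to implement: given a parameter vector, it generates {ε(k)} from {y(k)} and {u(k)}. The helper below is hypothetical (not from the slides), and the test data are simulated from an assumed ARMAX system:

```python
import numpy as np

def armax_residuals(y, u, a1, a2, b1, b2, c1, c2):
    """One-step residuals eps(k), k >= 3, for a 2nd-order ARMAX model (d = 1),
    started with eps(0) = eps(1) = eps(2) = 0."""
    eps = np.zeros(len(y))
    for k in range(3, len(y)):
        y_hat = (-a1*y[k-1] - a2*y[k-2] + b1*u[k-2] + b2*u[k-3]
                 + c1*eps[k-1] + c2*eps[k-2])
        eps[k] = y[k] - y_hat
    return eps

# Simulate an assumed ARMAX system; the recursion should recover e(k)
rng = np.random.default_rng(4)
N = 300
a1, a2, b1, b2, c1, c2 = -1.2, 0.35, 0.4, 0.1, 0.5, 0.1
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
e[:3] = 0.0                     # zero initial noise so the start-up is exact
y = np.zeros(N)
for k in range(3, N):
    y[k] = (-a1*y[k-1] - a2*y[k-2] + b1*u[k-2] + b2*u[k-3]
            + e[k] + c1*e[k-1] + c2*e[k-2])

eps = armax_residuals(y, u, a1, a2, b1, b2, c1, c2)
```

With arbitrary initial conditions the start-up error decays at the rate set by the zeros of C(q), which is why C(q) must also have its zeros inside the unit circle.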

2nd-Order ARMAX Model

Optimization formulation: estimate (a1, a2, b1, b2, c1, c2) such that the objective function

Ψ = Σ (k = 3 to N) [e(k)]² = Σ (k = 3 to N) [y(k) - ŷ(k|k-1)]²

is minimized with respect to (a1, a2, b1, b2, c1, c2).

Identified model parameters:
A(q) = 1 - 1.651 q⁻¹ + 0.68 q⁻²
B(q) = 0.001748 q⁻² + 0.01154 q⁻³
C(q) = 1 - 0.8367 q⁻¹ + 0.2501 q⁻²

Residual {e(k)} statistics:
Estimated mean: E{e(k)} = 4.3601 × 10⁻³
Estimated variance: λ̂² = 2.6813 × 10⁻⁴

2nd-Order ARMAX Model

[Figure: ARMAX(2,2,2,2) measured and simulated outputs y(k) and residuals e(k) over 250 samples.]

Histogram of {e(k)}

[Figure: histogram of the innovation sequence e(k) over the range -0.06 to 0.06.]

ARMAX: Autocorrelation

[Figure: residual autocorrelation for ARMAX(2,2,2,2), lags 0-20.]

Comparison of Model Predictions

[Figure: measured and simulated model output for ARMAX(2,2,2,2), ARX(6,6,2), OE(2,2,2) and the plant over 250 samples. Best fit (%): ARMAX 76.45, ARX 76.37, OE 77.38.]

Comparison of Models

[Figure: step responses (50 sampling instants) and impulse responses (45 sampling instants) of OE(2,2,2), ARX(6,6,2), and ARMAX(2,2,2).]

Comparison of Models: Nyquist Plots

[Figure: Nyquist plots of OE(2,2,2), ARX(6,6,2), and ARMAX(2,2,2).]

Prediction Error Method

Given the data set

Zᴺ = {(y(k), u(k)) : k = 1, 2, ..., N}

and the model

y(k) = G(q, θ) u(k) + H(q, θ) e(k),

the optimal 1-step predictor is

ŷ(k|k-1) = H⁻¹(q, θ) G(q, θ) u(k) + [1 - H⁻¹(q, θ)] y(k)

and the one-step prediction error is defined as

ε(k, θ) = y(k) - ŷ(k|k-1, θ)

Parameter estimation by the prediction error method: find θ that minimizes the objective function

V(θ, Zᴺ) = (1/N) Σ (k = 1 to N) ε(k, θ)²

PEM: Parameter Estimation

θ̂N = arg min over θ of V(θ, Zᴺ)

Typically, the resulting parameter estimation problem is solved numerically using (a) nonlinear optimization or (b) the Gauss-Newton method.

If it is desired to emphasize certain frequencies of interest, we can minimize

V(θ, Zᴺ) = (1/N) Σ (k = 1 to N) [εF(k, θ)]²

where εF(k) = F(q⁻¹) ε(k) and F(q⁻¹) represents a filter.

Alternate (unfiltered) choice of objective function:

V(θ, Zᴺ) = (1/N) Σ (k = 1 to N) ε(k, θ)²
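The PEM criterion can be evaluated with the predictor recursion from the earlier slides. The sketch below (an assumed 2nd-order ARMAX system, simulated for illustration) checks that V(θ) is smaller at the true parameters than at a perturbed guess; in practice V is then minimized numerically, e.g. by Gauss-Newton iterations.

```python
import numpy as np

def pem_cost(theta, y, u):
    """V(theta) = (1/N) sum eps(k)^2 for a 2nd-order ARMAX model with d = 1."""
    a1, a2, b1, b2, c1, c2 = theta
    eps = np.zeros(len(y))
    for k in range(3, len(y)):
        y_hat = (-a1*y[k-1] - a2*y[k-2] + b1*u[k-2] + b2*u[k-3]
                 + c1*eps[k-1] + c2*eps[k-2])
        eps[k] = y[k] - y_hat
    return np.mean(eps[3:]**2)

rng = np.random.default_rng(5)
N = 1000
theta_true = np.array([-1.2, 0.35, 0.4, 0.1, 0.5, 0.1])
u = rng.standard_normal(N)
e = 0.05 * rng.standard_normal(N)
y = np.zeros(N)
a1, a2, b1, b2, c1, c2 = theta_true
for k in range(3, N):
    y[k] = (-a1*y[k-1] - a2*y[k-2] + b1*u[k-2] + b2*u[k-3]
            + e[k] + c1*e[k-1] + c2*e[k-2])

V_true = pem_cost(theta_true, y, u)      # close to the noise variance 0.05**2
V_bad = pem_cost(theta_true + 0.2, y, u) # model mismatch raises the cost
```

At the true parameters the residuals reduce to the white innovations, so V bottoms out near λ²; any parameter error adds a systematic component to ε(k).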

Model Order Selection

The model order is determined by minimizing the Akaike Information Criterion (AIC):

AIC(θ̂N) = N ln[(1/N) Σ (k = 1 to N) ε(k, θ̂N)²] + 2n

n: number of model parameters

AIC = {prediction term} + {model order term}
- Prediction term: an estimate of how well the model fits the data
- Model order term: a measure of the model complexity required to obtain the fit

AIC strikes a balance between low residual variance and an excessive number of model parameters, with smaller values indicating more desirable models.

Basic idea: penalize model complexity (measured by n) and obtain a model that is reasonable with respect to variance errors and model complexity.
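A minimal AIC computation, sketched for ARX(n, n) candidates on data from an assumed 2nd-order system (unit delay and equal orders for A and B are assumptions of this sketch):

```python
import numpy as np

def fit_arx(y, u, n):
    """LS fit of ARX(n,n) with unit delay; returns residual variance, #params."""
    N = len(y)
    cols = ([-y[n - i:N - i] for i in range(1, n + 1)] +
            [u[n - i:N - i] for i in range(1, n + 1)])
    A = np.column_stack(cols)
    Y = y[n:N]
    theta, *_ = np.linalg.lstsq(A, Y, rcond=None)
    eps = Y - A @ theta
    return np.mean(eps**2), 2 * n

def aic(y, u, n):
    var_eps, n_par = fit_arx(y, u, n)
    return len(y) * np.log(var_eps) + 2 * n_par

# Data from an assumed 2nd-order ARX system
rng = np.random.default_rng(6)
N = 2000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 0.8*y[k-1] - 0.15*y[k-2] + 0.5*u[k-1] + 0.2*u[k-2] + e[k]

scores = {n: aic(y, u, n) for n in range(1, 5)}
```

An under-parameterized model pays heavily in the prediction term, while higher orders gain little fit for their 2n penalty, so the AIC minimum typically lands at or near the true order (2 here).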

MIMO System Identification


- ARX model: the method for ARX parameter identification can be extended to deal directly with multivariate data.
- OE / ARMAX / BJ models: typically, an n × m MIMO system is modeled as n MISO (Multi-Input Single-Output) systems

  yi(k) = Gi1(q) u1(k) + ... + Gim(q) um(k) + Hi(q) ei(k),   i = 1, 2, ..., n

  and the MISO models are combined to form one MIMO model.
- Input excitation: inputs can be perturbed sequentially or simultaneously.

Identification Experiments on 4 Tank Setup

[Figure: schematic of the four-tank setup, with Input 1, Input 2, Output 1, and Output 2.]

4 Tank Setup: Input Excitations

[Figure: manipulated input sequences, Input 1 and Input 2 (mA), over 1200 s.]

4 Tank Setup: Measured Output

[Figure: measured outputs, Level 1 and Level 2 (mA), over 1200 s.]

Splitting Data for Identification and Validation

[Figure: input and output signals (y1, u1); the first ~1000 samples are used as identification data, the remainder as validation data.]

MISO OE Model

MISO 2nd-order model:

y1(t) = [B1(q)/A1(q)] u1(t) + [B2(q)/A2(q)] u2(t) + e1(t)

B1(q) = 0.1393 q⁻¹ + 0.04704 q⁻²
B2(q) = 0.002375 q⁻¹ + 0.01105 q⁻²
A1(q) = 1 - 0.2454 q⁻¹ - 0.6571 q⁻²
A2(q) = 1 - 1.887 q⁻¹ + 0.8903 q⁻²

Estimated using the prediction error method. Loss function: 0.114719. Sampling interval: 3.

OE Model: Validation

[Figure: measured and simulated model output y1 on the validation data (samples 1100-1400); oe221 fit 87.07%.]

Residual Autocorrelation: ARX

[Figure: sample autocorrelation function (ACF) of the arx20201 residuals for y1, lags 0-20.]

Residual Autocorrelations

[Figure: sample ACFs of the residuals for y1 for the 4th-order ARMAX model (armx441) and the 3rd-order Box-Jenkins model (bj33331), lags 0-20.]

Model Validation

[Figure: measured and simulated model output y1 on the validation data (samples 1100-1400). Fits: bj33331 89.89%, armax4441 86.79%, arx20201 86.83%.]

ARMAX Model


ARMAX (4th order):

A(q) y1(t) = B1(q) u1(t) + B2(q) u2(t) + C(q) e1(t)

A(q) = 1 - 0.6236 q⁻¹ - 0.8596 q⁻² - 0.0758 q⁻³ + 0.568 q⁻⁴
B1(q) = 0.08324 q⁻¹ + 0.02757 q⁻² + 0.02681 q⁻³ - 0.1214 q⁻⁴
B2(q) = 0.004045 q⁻¹ + 0.03261 q⁻² - 0.01841 q⁻³ + 0.0201 q⁻⁴
C(q) = 1 - 0.4695 q⁻¹ - 0.8017 q⁻² - 0.1065 q⁻³ + 0.4855 q⁻⁴

Loss function: 0.0243707

Box-Jenkins Model

y(t) = [B(q)/F(q)] u(t) + [C(q)/D(q)] e(t)

B1(q) = 0.08196 q⁻¹ + 0.1035 q⁻² + 0.1323 q⁻³
B2(q) = 0.01197 q⁻¹ + 0.001306 q⁻² + 0.01304 q⁻³
C(q) = 1 - 1.976 q⁻¹ + 1.126 q⁻² - 0.1453 q⁻³
D(q) = 1 - 2.096 q⁻¹ + 1.209 q⁻² - 0.1128 q⁻³
F1(q) = 1 + 0.3058 q⁻¹ - 0.5066 q⁻² - 0.6204 q⁻³
F2(q) = 1 - 0.897 q⁻¹ - 0.9828 q⁻² + 0.8861 q⁻³

Loss function: 0.0239039

ARMAX: State Realization

x(k+1) = Φ x(k) + Γ u(k) + L∞ e(k)
y(k) = C x(k) + e(k)

Φ = [ 0.6236   0.8596   0.0758  -0.5680
      1        0        0        0
      0        1        0        0
      0        0        1        0 ]

Γ = [ 0.0832   0.0040
      0.0276   0.0326
      0.0268  -0.0184
     -0.1214   0.0201 ]

L∞ = [ 0.1541   0.0579  -0.0307  -0.0826 ]ᵀ

C = [ 1   0   0   0 ]

Recursive Parameter Estimation

Consider a 2nd-order ARX model with d = 1:

y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u(k-2) + b2 u(k-3) + e(k)

y(k) = φᵀ(k) θ + e(k),   φᵀ(k) = [-y(k-1)  -y(k-2)  u(k-2)  u(k-3)]

Arranging in matrix form,

Y(N) = A(N) θ + e(N)

where the rows of A(N) are φᵀ(3), φᵀ(4), ..., φᵀ(N). N is introduced as a formal parameter to indicate that data up to instant N have been used.

Least-squares parameter estimation:

θ̂(N) = [Aᵀ(N) A(N)]⁻¹ Aᵀ(N) Y(N)

RLS: Problem Formulation

When an additional measurement is obtained on-line, the matrix A grows:

A(N+1) = [ A(N) ; φᵀ(N+1) ],   Y(N+1) = [ Y(N) ; y(N+1) ]

The new estimate θ̂(N+1) can be written as

θ̂(N+1) = [Aᵀ(N+1) A(N+1)]⁻¹ Aᵀ(N+1) Y(N+1)

Thus, A(N) keeps growing in size as new data arrive. Instead of inverting Aᵀ(N+1)A(N+1) at every instant, can we rearrange the calculations so that the solution at instant N can be used to compute the solution at instant (N+1)?

θ̂(N+1) = [Aᵀ(N)A(N) + φ(N+1)φᵀ(N+1)]⁻¹ × [Aᵀ(N)Y(N) + φ(N+1)y(N+1)]

RLS: Solution

Using the matrix inversion lemma

[A + BCD]⁻¹ = A⁻¹ - A⁻¹B[C⁻¹ + DA⁻¹B]⁻¹DA⁻¹

and some rearrangement, the solution to the above problem is given by the following recursive set of equations:

θ̂(N+1) = θ̂(N) + L(N)[y(N+1) - ŷ(N+1|N)],   where ŷ(N+1|N) = φᵀ(N+1) θ̂(N)

The estimator gain matrix L(N) is computed by solving the following Riccati equations:

L(N) = P(N) φ(N+1) [1 + φᵀ(N+1) P(N) φ(N+1)]⁻¹

P(N+1) = [I - L(N) φᵀ(N+1)] P(N)

RLS: Initialization

To obtain initial conditions to start RLS, it is necessary to choose N = N0 such that Aᵀ(N0)A(N0) is nonsingular:

P(N0) = [Aᵀ(N0) A(N0)]⁻¹
θ̂(N0) = P(N0) Aᵀ(N0) Y(N0)

The recursive calculations can then be used for N ≥ N0.

Alternatively, the recursive equations are begun with the initial covariance matrix

P(0) = αI and θ̂(0) = 0

where α is chosen large (≈ 10⁴), indicating that we have no trust in the initial parameter estimate θ̂(0) = 0. This choice ensures P(N) → [Aᵀ(N)A(N)]⁻¹ as N increases.
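The RLS recursions fit in a few lines; with a large P(0), the recursive estimate matches the batch least-squares answer. An illustrative sketch with assumed regression data (lam = 1 is ordinary RLS; lam < 1 gives the forgetting-factor variant):

```python
import numpy as np

def rls_update(theta, P, phi, y_new, lam=1.0):
    """One RLS step: gain, parameter update, and covariance update."""
    L = P @ phi / (lam + phi @ P @ phi)
    theta = theta + L * (y_new - phi @ theta)
    P = (P - np.outer(L, phi @ P)) / lam
    return theta, P

rng = np.random.default_rng(7)
N = 400
theta_true = np.array([0.8, -0.5])
Phi = rng.standard_normal((N, 2))
Y = Phi @ theta_true + 0.01 * rng.standard_normal(N)

theta = np.zeros(2)
P = 1e4 * np.eye(2)      # alpha = 1e4: no trust in theta(0) = 0
for k in range(N):
    theta, P = rls_update(theta, P, Phi[k], Y[k])

theta_batch, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
```

Starting from P(0) = αI makes RLS a slightly regularized least squares; the effect of the prior vanishes as data accumulate, so the recursive and batch answers converge.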

Time Varying Systems

For time-varying systems, it is necessary to eliminate the influence of old data. This can be achieved using an exponential weighting in the loss function:

J(θ) = Σ (k = 1 to N) λᴺ⁻ᵏ [y(k) - φᵀ(k)θ]²

λ is called the forgetting factor. RLS for this system is given by

θ̂(k+1) = θ̂(k) + K(k)[y(k+1) - φᵀ(k+1) θ̂(k)]

K(k) = P(k) φ(k+1) [λ + φᵀ(k+1) P(k) φ(k+1)]⁻¹

P(k+1) = [I - K(k) φᵀ(k+1)] P(k) / λ

Asymptotic data length (ASL) = 1/(1 - λ)
λ = 0.999: ASL = 1000;   λ = 0.95: ASL = 20

RLS Application to 4 Tank Data


The ARX model structure used for the recursive estimation:

y(k) = -a1 y(k-1) - a2 y(k-2) + b1 u1(k-1) + b2 u1(k-2) + d1 u2(k-1) + d2 u2(k-2) + e(k)

Model Predictions

[Figure: measured four-tank output and recursively predicted output.]

Parameter Variations

[Figure: recursively estimated model parameters a1, a2, b1, b2, d1, d2 vs. sampling instant (0-1200).]

Extended Recursive Formulations (ELS)

OE model:

x(k) = a1 x(k-1) + a2 x(k-2) + b1 u(k-2) + b2 u(k-3)
y(k) = x(k) + v(k)

The recursive OE formulation uses the regressor vector

φ(k) = [x̂(k-1)  x̂(k-2)  u(k-2)  u(k-3)]

ARMAX model:

x(k) = a1 x(k-1) + a2 x(k-2) + b1 u(k-2) + b2 u(k-3) + e(k) + c1 e(k-1) + c2 e(k-2)

The extended recursive formulation uses the regressor vector

φ(k) = [y(k-1)  y(k-2)  u(k-2)  u(k-3)  ε(k-1)  ε(k-2)]

where ε(k-1) and ε(k-2) represent the model residuals at the previous instants.

Frequency Domain Analysis

- Time domain formulations of the parameter estimation problem:
  - useful for carrying out parameter estimation
  - do not provide any insight into the internal working of the optimization problem
- Frequency domain (power spectrum) analysis:
  - based on Fourier transforms of the auto-correlation and cross-correlation functions of signals
  - a powerful tool for analysis (analogous to the use of Laplace transforms in linear control theory)
  - provides insight into various aspects of the optimization formulation
  - can be used for perturbation signal design and estimation error analysis

Stationary Process

Consider the noise model

v(k) = Σ (i = 0 to ∞) hi e(k-i)

where {e(k)} is a zero-mean white noise process with variance λ². Then

E{v(k)} = Σ (i = 0 to ∞) hi E{e(k-i)} = 0

and the auto-covariance is

Rv(τ) = E{v(k) v(k-τ)} = Σt Σj h(t) h(j) E{e(k-t) e(k-j-τ)}
      = λ² Σt Σj h(t) h(j) δ(t-j-τ) = λ² Σ (t = 0 to ∞) h(t) h(t-τ)

Note: h(r) = 0 if r < 0. The covariance Rv(τ) is independent of k and is uniquely defined by {h(k)} and λ².

Such a stochastic process is called 'stationary', since it has zero mean and an auto-covariance that is independent of time (k).

Quasi-Stationary Process

y(k) = G(q) u(k) + H(q) e(k) = (deterministic) + (stochastic)

Since E{e(k)} = 0, we have E{y(k)} = G(q) u(k), and {y(k)} is not a stationary process.

A signal {s(k)} is said to be quasi-stationary if it satisfies

(i)  E{s(k)} = ms(k),   |ms(k)| ≤ C for all k

(ii) E{s(k) s(t)} = Rs(k, t),   |Rs(k, t)| ≤ C, and

     lim (N → ∞) (1/N) Σ (k = 1 to N) Rs(k, k-τ) = Rs(τ)

Rs(τ): auto-correlation function of the signal {s(k)}

Signal Spectrum and Cross Spectrum

Defining Ē{f(k)} = lim (N → ∞) (1/N) Σ (k = 1 to N) f(k), we have

Rs(τ) = Ē{s(k) s(k-τ)}
Rsw(τ) = Ē{s(k) w(k-τ)}

We define the power spectrum of the signal {s(k)} as

Φs(ω) = Σ (τ = -∞ to ∞) Rs(τ) e^(-jωτ)

and the cross spectrum between {s(k)} and {w(k)} as

Φsw(ω) = Σ (τ = -∞ to ∞) Rsw(τ) e^(-jωτ)

provided the infinite sums exist. Φs(ω) is always a real function of ω; Φsw(ω) is, in general, a complex-valued function of ω.

Power Spectrum (Contd.)


Note: the spectrum of a signal represents the Fourier transform of its auto-covariance function.

Inverse transform (Parseval's theorem):

Ē[s²(k)] = Rs(0) = (1/2π) ∫ (from -π to π) Φs(ω) dω

Fundamental modeling problem: given a disturbance with spectrum Φv(ω), can we find a transfer function H(q) such that the random process v(k) = H(q) e(k) has the same spectrum, with {e(k)} being white noise?

Spectral Factorization

Main result. Theorem: Suppose that Φv(ω) > 0 is a rational function of cos(ω) (or e^(iω)). Then there exists a monic rational transfer function of z, R(z), with no poles and zeros outside the unit circle, such that

Φv(ω) = λ² |R(e^(iω))|²

If the residual signal v(k) is weakly stationary, i.e., cov[v(k), v(s)] is a function of only (k - s) for any pair (k, s), then the spectral factorization theorem states that such a random sequence can be thought of as being generated by a stable linear transfer function driven by white noise.

Spectrum of White Noise

Ree(τ) = λ² for τ = 0;   0 for τ = ±1, ±2, ...

Case λ² = 1: white noise has a uniform power spectral density at all frequencies.

[Figure: theoretical and estimated power spectral density of white noise; flat across frequency.]

Example: Autocorrelation for AR

Numerical estimate:

R̂v(τ) = [1/(N - τ)] Σ (t = τ+1 to N) y(t) y(t-τ)

Theoretical (with knowledge of λ²), for the example y(t) = 0.5 y(t-1) + e(t):

E[y(t) y(t-τ)] = 0.5 E[y(t-1) y(t-τ)] + E[e(t) y(t-τ)]
Ry(τ) = 0.5 Ry(τ-1) + Rye(τ)

But Rye(τ) = E[e(t) y(t-τ)] = 0 if τ > 0, and = λ² if τ = 0.

For τ = 0: Ry(0) = 0.5 Ry(1) + λ²
For τ = 1: Ry(1) = 0.5 Ry(0)

⇒ Ry(τ) = (4/3) (0.5)^τ λ²

Example: Autocorrelation Function

[Figure: theoretical and estimated autocorrelation of y(t) = 0.5 y(t-1) + e(t), case λ² = 1.]
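The closed-form result Ry(τ) = (4/3)(0.5)^τ λ² is easy to verify against the numerical estimate (a sketch with λ² = 1 and an assumed seed):

```python
import numpy as np

rng = np.random.default_rng(8)
N = 100_000
e = rng.standard_normal(N)            # lambda^2 = 1
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5*y[t-1] + e[t]

def acov(x, tau):
    """Numerical estimate R_hat(tau) = (1/(N - tau)) sum y(t) y(t - tau)."""
    return np.dot(x[tau:], x[:len(x) - tau]) / (len(x) - tau)

R_theory = [4/3 * 0.5**tau for tau in range(3)]
R_est = [acov(y, tau) for tau in range(3)]
```

The estimates converge to 4/3, 2/3, and 1/3 at lags 0, 1, and 2 as N grows.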

Spectrum of a Stochastic Process

Consider a stationary stochastic process

v(k) = H(q⁻¹) e(k)

{e(k)}: zero-mean white noise process with variance λ². Then

Φv(ω) = λ² |H(e^(iω))|²

Example: y(k) = [1/(1 - 0.5 q⁻¹)] e(k)

Φy(ω) = λ² |1/(1 - 0.5 exp(-jω))|² = λ² / [(1 - 0.5 cos ω)² + 0.25 sin²(ω)]

ω ∈ [0, π/T];   π/T: Nyquist frequency
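The two forms of the example spectrum agree identically, which is easy to confirm numerically (λ² = 1):

```python
import numpy as np

w = np.linspace(0.0, np.pi, 500)
H = 1.0 / (1.0 - 0.5 * np.exp(-1j * w))

phi_direct = np.abs(H)**2                                     # lambda^2 |H|^2
phi_closed = 1.0 / ((1 - 0.5*np.cos(w))**2 + 0.25*np.sin(w)**2)
```

At ω = 0 the spectrum peaks at 1/(1 - 0.5)² = 4, consistent with the low-frequency emphasis of this AR(1) filter.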

Spectrum: Colored Noise

[Figure: theoretical and data-estimated power spectral density of y(t) = 0.5 y(t-1) + e(t), case λ² = 1.]

Spectra for Linear Systems

Spectrum of a mixed deterministic and stochastic signal s(k) = u(k) + v(k):

{u(k)}: quasi-stationary deterministic signal with spectrum Φu(ω)
{v(k)}: stationary stochastic process with spectrum Φv(ω)

Ē[s(k) s(k-τ)] = Ē[u(k) u(k-τ)] + Ē[v(k) v(k-τ)] = Ru(τ) + Rv(τ),   as Ē[u(k) v(k-τ)] = 0

For y(k) = G(q) u(k) + H(q) e(k), with {u(k)} quasi-stationary and deterministic and {e(k)} zero-mean white noise with variance λ²:

Φy(ω) = |G(e^(iω))|² Φu(ω) + λ² |H(e^(iω))|²

Φyu(ω) = G(e^(iω)) Φu(ω)

Asymptotic Errors Analysis


True behavior:

y(k) = G(q) u(k) + H(q) e(k)

Proposed / identified model:

y(k) = Ĝ(q) u(k) + Ĥ(q) e(k)

Prediction error:

ε(k) = Ĥ⁻¹(q) [y(k) - Ĝ(q) u(k)] = Ĥ⁻¹(q) [(G(q) - Ĝ(q)) u(k) + H(q) e(k)]

Parameters are estimated by minimizing

V(θ, Zᴺ) = (1/N) Σ (k = 1 to N) ε²(k, θ)

Asymptotic properties are derived for N → ∞.

Convergence Results

Convergence result: the parameter estimate converges with probability 1 to the minimizing argument of the expected value of the loss function:

θ̂N → θ* with probability 1 as N → ∞, i.e.,

θ* = arg min over θ of lim (N → ∞) V(θ, N)

The asymptotic parameter estimate is independent of the particular noise realization present in the data. This property helps in characterizing the structural (bias) errors in identified models.

Issue addressed: can a data-generating system be recovered exactly from a measured input-output data sequence?

Consistency Result


Simultaneous parameterization of G(q) and H(q): if the model set considered in the identification exercise can describe the system exactly, and if the input signal is sufficiently exciting, then

Ĝ(q, θ*) = G(q) and Ĥ(q, θ*) = H(q)

Independent parameterization of G(q) and H(q): if the model set considered in the identification exercise can describe the deterministic component of the system exactly, and if the input signal is sufficiently exciting, then

Ĝ(q, θ*) = G(q)

provided the noise model is independently parameterized.

Persistency of Excitation

A signal {u(k)} is said to be persistently exciting of order n if the following limit exists,

ru(τ) = lim (N → ∞) (1/N) Σ (k = 1 to N) u(k+τ) u(k)

and the matrix

Ru(n) = [ ru(0)     ru(1)   ...   ru(n-1)
          ru(-1)    ru(0)   ...   ...
          ...       ...     ...   ...
          ru(1-n)   ...     ...   ru(0)   ]

is positive definite.

Persistency of Excitation


1. If {u(k)} is zero-mean white noise with variance σ², then Ru(n) = σ² In, which is always positive definite. Thus, white noise is persistently exciting (PE) of all orders.

2. If {u(k)} is a step function of magnitude σ, then ru(τ) = σ² for all τ. Hence, Ru(n) is nonsingular if and only if n = 1. Thus, a step function is PE of order 1. Consequently, a step input cannot be used to identify the numerator coefficients if the transfer function order is higher than 1.
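Both statements can be checked numerically by building Ru(n) from sample autocorrelations and inspecting its eigenvalues (a sketch with n = 4 and assumed signals):

```python
import numpy as np

def excitation_matrix(u, n):
    """Toeplitz matrix Ru(n) of sample autocorrelations ru(0)..ru(n-1)."""
    N = len(u)
    r = np.array([np.dot(u[tau:], u[:N - tau]) / N for tau in range(n)])
    return np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(9)
white = rng.standard_normal(50_000)   # PE of all orders
step = np.ones(50_000)                # PE of order 1 only

eig_white = np.linalg.eigvalsh(excitation_matrix(white, 4))
eig_step = np.linalg.eigvalsh(excitation_matrix(step, 4))
```

For white noise, Ru(4) is close to the identity and all eigenvalues stay away from zero; for the step input, Ru(4) approaches a rank-one matrix of ones, so three eigenvalues collapse to (nearly) zero.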

Example

Consider a data-generating system y(k) = G(q)u(k) + H(q)e(k) with

G(q) = (b1 q⁻¹ + b2 q⁻²)/(1 + a1 q⁻¹ + a2 q⁻²);   H(q) = (1 + c1 q⁻¹)/(1 + d1 q⁻¹)

1. For consistent identification, the Box-Jenkins model structure has to be considered to guarantee that the model set is able to describe the system.
2. When identifying a model with the ARX structure, irrespective of the model order, no consistency can be achieved and biased estimates will result.
3. When identifying a model with the OE structure, which has the independent parameterization property, a second-order OE model will consistently identify G(q).

Asymptotic Bias and Variance Errors

In reality, the true model order is not exactly known, the data length is finite, and the data contain unmeasured disturbances / noise. This results in two types of estimation errors:

G(q) - G(q, θ̂N) = [G(q) - G(q, θ*)] + [G(q, θ*) - G(q, θ̂N)]

[G(q) - G(q, θ*)]: structural or bias error, induced by the fact that the model set is not rich enough to characterize the plant exactly.

[G(q, θ*) - G(q, θ̂N)]: noise-induced or variance error, due to unmeasured disturbances / noise.

{Total error of estimation} = {Bias error} + {Variance error}

Bias Error: Concept

Real systems are of very high order, while the model is always chosen of lower order. Thus, bias errors are always present in any identification exercise.

Classic example in process control:

Process dynamics: G(s) = 1 / (10s + 1)^8

Identified FOPTD model: Ĝ(s) = e^(−36s) / (50s + 1)
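The quality of this low-order fit can be reproduced with a crude explicit-Euler simulation (a sketch; the step size and horizon are my choices, not from the slides):

```python
dt, T = 0.01, 350.0
n_steps = int(T / dt)

x = [0.0] * 8      # plant: cascade of 8 first-order lags, each with tau = 10
z = 0.0            # FOPTD model state: tau = 50, time delay 36 s
plant, model = [], []

for i in range(n_steps):
    t = i * dt
    # plant G(s) = 1/(10s+1)^8 driven by a unit step
    inp = 1.0
    for j in range(8):
        x[j] += dt * (inp - x[j]) / 10.0
        inp = x[j]
    # FOPTD Ghat(s) = exp(-36s)/(50s+1): the step input is delayed by 36 s
    u_delayed = 1.0 if t >= 36.0 else 0.0
    z += dt * (u_delayed - z) / 50.0
    plant.append(x[-1])
    model.append(z)

max_gap = max(abs(p - m) for p, m in zip(plant, model))
```

Both responses settle at 1 and the largest gap between them stays modest, which is why the FOPTD fit looks acceptable in the time domain even though a bias error is clearly present.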


Bias Errors: Concepts

[Figure: step responses of the plant and of the identified FOPTD model (amplitude 0–1 vs. time, 0–350 sec); the two responses nearly coincide]


Bias Error: Concept

[Figure: Nyquist diagrams of the plant and of the model (real and imaginary axes spanning −1 to 1)]


Bias Errors

Prediction error:

ε(k) = Ĥ^-1(q) [ (G(q) − Ĝ(q)) u(k) + H(q) e(k) ]

Parameters are estimated by minimizing

V(θ, Z^N) = (1/N) Σ_{k=1}^{N} ε²(k, θ)

When the data length N is large, we can write

Rε(0) = E[ε²(k)] = lim_{N→∞} (1/N) Σ_{k=1}^{N} ε²(k, θ) = lim_{N→∞} V(θ, Z^N)

By Parseval's theorem,

lim_{N→∞} V(θ, Z^N) = ∫_{−π}^{π} Φε(ω) dω

Φε(ω) = [ |G(e^iω) − Ĝ(e^iω)|² Φu(ω) + Φv(ω) ] / |Ĥ(e^iω)|²


Bias Error: Interpretations

lim_{N→∞} V(θ, Z^N) = ∫_{−π}^{π} [ |G(e^iω) − Ĝ(e^iω)|² Φu(ω) + Φv(ω) ] / |Ĥ(e^iω)|² dω

- The bias distribution of G(e^iω) − Ĝ(e^iω) in the frequency domain is weighted by the signal-to-noise ratio.
- The input spectrum can therefore be chosen intelligently to reduce bias errors in frequency regions of interest.
- For an Output Error model (i.e., H(q) = 1),

  lim_{N→∞} V(θ, Z^N) = ∫_{−π}^{π} |G(e^iω) − Ĝ(e^iω)|² Φu(ω) dω

  Thus Ĝ(e^iω) → G(e^iω) if the model is not under-parameterized.

Example: Input Selection

[Figure: two random binary input signals generated by the MATLAB command 'idinput', plotted over 150 sampling instants — one band-limited to the frequency range [0, 0.1], the other to [0.3, 0.5]; both switch between −1 and +1]
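Without the System Identification Toolbox, a low-frequency random binary signal of the first kind can be sketched in a few lines (my own helper, not the actual idinput algorithm): holding each random ±1 level for roughly 1/0.1 = 10 samples pushes the signal power below 0.1 times the Nyquist frequency. idinput itself covers general pass-bands by filtering a white sequence and taking its sign.

```python
import random

def rbs_lowpass(n, band_upper, seed=0):
    """Random binary signal with power concentrated in [0, band_upper] (as a
    fraction of Nyquist): hold each random +/-1 level for ~1/band_upper samples."""
    rng = random.Random(seed)
    clock = max(1, round(1.0 / band_upper))
    u = []
    while len(u) < n:
        u.extend([rng.choice([-1.0, 1.0])] * clock)
    return u[:n]

u = rbs_lowpass(150, 0.1)

# crude spectral check: long holds mean successive samples mostly agree, so the
# lag-1 autocorrelation is high, i.e. the power sits at low frequencies
r0 = sum(v * v for v in u) / len(u)
r1 = sum(u[k] * u[k - 1] for k in range(1, len(u))) / len(u)
```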

Input Spectrum

[Figure: power spectral density of the two random binary signals; the RBS designed for [0, 0.1] concentrates its power at low frequencies, while the RBS designed for [0.3, 0.5] concentrates it in that band]

Variance Errors

The asymptotic variance of estimates obtained using PEM is

Var[Ĝ(e^iω)] ≅ (n/N) · Φv(ω) / Φu(ω)

Var[Ĥ(e^iω)] ≅ (n/N) · |H(e^iω)|²

n : model order ; N : data length

Note that Φv(ω)/Φu(ω) is the noise-to-signal ratio.

Implications:
- Variance errors can be reduced by
  - increasing the data length (N)
  - choosing a high signal-to-noise ratio, SNR = Φu(ω)/Φv(ω)
- Models with a large number of parameters require a relatively larger data set for better parameter estimation.
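The 1/N scaling can be checked by a small Monte Carlo study (entirely my own construction): a first-order ARX model with white equation noise is re-estimated from many independent records of two lengths, and the spread of the estimates shrinks roughly in proportion to the data length.

```python
import random
import statistics

def estimate_a(N, rng):
    """Least-squares estimate of a in y(k) = a*y(k-1) + b*u(k-1) + e(k);
    the noise is white, so the estimate is unbiased and only its variance matters."""
    a, b = 0.7, 1.0
    u = [rng.gauss(0.0, 1.0) for _ in range(N)]
    y = [0.0] * N
    for k in range(1, N):
        y[k] = a * y[k - 1] + b * u[k - 1] + rng.gauss(0.0, 0.5)
    s11 = s12 = s22 = r1 = r2 = 0.0
    for k in range(1, N):
        p1, p2 = y[k - 1], u[k - 1]
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        r1 += p1 * y[k]; r2 += p2 * y[k]
    return (s22 * r1 - s12 * r2) / (s11 * s22 - s12 * s12)

rng = random.Random(7)
var_short = statistics.pvariance([estimate_a(200, rng) for _ in range(200)])
var_long = statistics.pvariance([estimate_a(1800, rng) for _ in range(200)])
# var_short / var_long should sit near the data-length ratio 1800/200 = 9
```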

State Realization

Consider a 3rd-order SISO transfer function:

y(k) = [ (b1 q² + b2 q + b3) / (q³ + a1 q² + a2 q + a3) ] u(k)

Problem: derive a state-space model of the form

x(k+1) = Φx(k) + Γu(k)
y(k) = Cx(k)

Canonical realizations:
- Observable canonical form
- Diagonal canonical form
- Controllable canonical form

Controllable Canonical Form


Introducing an intermediate variable η(k) as

[q³ + a1 q² + a2 q + a3] η(k) = u(k)
y(k) = [b1 q² + b2 q + b3] η(k)

we have

η(k+3) = −a1 η(k+2) − a2 η(k+1) − a3 η(k) + u(k)

Defining state variables

x1(k) = η(k+2) ; x2(k) = η(k+1) ; x3(k) = η(k)

we have

x1(k+1) = −a1 x1(k) − a2 x2(k) − a3 x3(k) + u(k)
x2(k+1) = x1(k)
x3(k+1) = x2(k)


Controllable Canonical Form


The above equations can be rearranged as

x(k+1) = [ −a1  −a2  −a3 ; 1  0  0 ; 0  1  0 ] x(k) + [1 ; 0 ; 0] u(k)

y(k) = b1 η(k+2) + b2 η(k+1) + b3 η(k) = [b1  b2  b3] x(k)

The above form can be easily extended to develop a state realization for a single-input multiple-output (SIMO) system with a common denominator polynomial.
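A quick numerical cross-check (a sketch with arbitrary coefficients of my choosing): simulating the controllable canonical realization above must reproduce the original difference equation y(k) = −a1 y(k−1) − a2 y(k−2) − a3 y(k−3) + b1 u(k−1) + b2 u(k−2) + b3 u(k−3) for any input.

```python
a = [0.5, -0.2, 0.1]   # a1, a2, a3 (arbitrary stable example)
b = [1.0, 0.4, 0.3]    # b1, b2, b3

# controllable canonical realization
Phi = [[-a[0], -a[1], -a[2]],
       [1.0,    0.0,   0.0],
       [0.0,    1.0,   0.0]]
Gam = [1.0, 0.0, 0.0]
C = b[:]               # y(k) = [b1 b2 b3] x(k)

def sim_state_space(u):
    x = [0.0, 0.0, 0.0]
    out = []
    for uk in u:
        out.append(sum(C[i] * x[i] for i in range(3)))
        x = [sum(Phi[i][j] * x[j] for j in range(3)) + Gam[i] * uk for i in range(3)]
    return out

def sim_difference_eq(u):
    out = []
    for k in range(len(u)):
        yk = 0.0
        for i in range(3):
            if k - 1 - i >= 0:
                yk += -a[i] * out[k - 1 - i] + b[i] * u[k - 1 - i]
        out.append(yk)
    return out

u = [1.0] + [0.0] * 19   # unit impulse
gap = max(abs(p - q) for p, q in zip(sim_state_space(u), sim_difference_eq(u)))
```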


Realization for SIMO Model


Consider the SIMO transfer function model

[y1(q) ; y2(q) ; y3(q)] = [1 / (q³ + a1 q² + a2 q + a3)] · [b11 q² + b12 q + b13 ; b21 q² + b22 q + b23 ; b31 q² + b32 q + b33] u(q)

Then a state realization for the above SIMO system can be written as

x(k+1) = [ −a1  −a2  −a3 ; 1  0  0 ; 0  1  0 ] x(k) + [1 ; 0 ; 0] u(k)

y(k) = [ b11  b12  b13 ; b21  b22  b23 ; b31  b32  b33 ] x(k)


State Realization of MIMO T.F.


Consider a 2 × 2 transfer function model. The contribution of input u1 is

y(k) = [g11(q) ; g21(q)] u1(k), with state realization

x^(1)(k+1) = Φ^(1) x^(1)(k) + Γ^(1) u1(k)
y(k) = C^(1) x^(1)(k)

and the contribution of input u2 is

y(k) = [g12(q) ; g22(q)] u2(k), with state realization

x^(2)(k+1) = Φ^(2) x^(2)(k) + Γ^(2) u2(k)
y(k) = C^(2) x^(2)(k)

Defining an augmented state vector X(k) = [x^(1)(k) ; x^(2)(k)], we can write the MIMO state realization as

X(k+1) = [ Φ^(1)  [0] ; [0]  Φ^(2) ] X(k) + [ Γ^(1)  0 ; 0  Γ^(2) ] u(k)

y(k) = [ C^(1)  C^(2) ] X(k)


Observable Canonical Form


The difference equation

[q³ + a1 q² + a2 q + a3] y(k) = [b1 q² + b2 q + b3] u(k)

is equivalent to

y(k+3) + a1 y(k+2) + a2 y(k+1) + a3 y(k) = b1 u(k+2) + b2 u(k+1) + b3 u(k)

which can be rearranged as

y(k+3) + a1 y(k+2) + a2 y(k+1) − b1 u(k+2) − b2 u(k+1) = −a3 y(k) + b3 u(k)

Defining the LHS of the above equation as x3(k+1), and x1(k) = y(k), we have

x3(k+1) = −a3 x1(k) + b3 u(k)    ........(1)


Observable Canonical Form


Now, rearranging

x3(k) = y(k+2) + a1 y(k+1) + a2 y(k) − b1 u(k+1) − b2 u(k)

as

y(k+2) + a1 y(k+1) − b1 u(k+1) = −a2 y(k) + x3(k) + b2 u(k)

and defining the LHS of the above equation as x2(k+1), we have

x2(k+1) = −a2 x1(k) + x3(k) + b2 u(k)    ......(2)

Similarly, we can write

x2(k) = y(k+1) + a1 y(k) − b1 u(k)

which can be rearranged as

x1(k+1) = −a1 x1(k) + x2(k) + b1 u(k)    .....(3)


Observable Canonical Form


The equations (1), (2) and (3) can be rearranged as

x(k+1) = [ −a1  1  0 ; −a2  0  1 ; −a3  0  0 ] x(k) + [b1 ; b2 ; b3] u(k)

y(k) = [1  0  0] x(k)

The above form can be easily extended to develop a state realization for a multiple-input single-output (MISO) system with a common denominator polynomial.
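The same numerical cross-check works for the observable canonical form (again with arbitrary coefficients of my choosing): building Φ, Γ, C as above and simulating must reproduce the third-order difference equation exactly.

```python
a1, a2, a3 = 0.5, -0.2, 0.1
b1, b2, b3 = 1.0, 0.4, 0.3

# observable canonical realization
Phi = [[-a1, 1.0, 0.0],
       [-a2, 0.0, 1.0],
       [-a3, 0.0, 0.0]]
Gam = [b1, b2, b3]
C = [1.0, 0.0, 0.0]     # y(k) = x1(k)

def sim_state_space(u):
    x = [0.0, 0.0, 0.0]
    out = []
    for uk in u:
        out.append(sum(C[i] * x[i] for i in range(3)))
        x = [sum(Phi[i][j] * x[j] for j in range(3)) + Gam[i] * uk for i in range(3)]
    return out

def sim_difference_eq(u):
    aa, bb = [a1, a2, a3], [b1, b2, b3]
    out = []
    for k in range(len(u)):
        yk = 0.0
        for i in range(3):
            if k - 1 - i >= 0:
                yk += -aa[i] * out[k - 1 - i] + bb[i] * u[k - 1 - i]
        out.append(yk)
    return out

u = [1.0] + [0.0] * 19   # unit impulse
gap = max(abs(p - q) for p, q in zip(sim_state_space(u), sim_difference_eq(u)))
```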


Realization for MISO Model


Consider the MISO transfer function model

y(k) = [1 / (q³ + a1 q² + a2 q + a3)] { [b11 q² + b12 q + b13] u1(k) + [b21 q² + b22 q + b23] u2(k) + [b31 q² + b32 q + b33] u3(k) }

Then a state realization for the above MISO system can be written as

x(k+1) = [ −a1  1  0 ; −a2  0  1 ; −a3  0  0 ] x(k) + [ b11  b21  b31 ; b12  b22  b32 ; b13  b23  b33 ] [u1(k) ; u2(k) ; u3(k)]

y(k) = [1  0  0] x(k)


ARMAX: State Realization


x(k+1) = Φx(k) + Γu(k) + L∞ e(k)
y(k) = Cx(k) + e(k)

Φ = [ −a1      1  0  ...  0
      −a2      0  1  ...  0
      ...                 ...
      −a(n−1)  0  0  ...  1
      −an      0  0  ...  0 ]

Γ = [b1 ; b2 ; ... ; bn] ;  L∞ = [c1 − a1 ; c2 − a2 ; ... ; cn − an]

C = [1  0  ...  0]

G(q) = B(q)/A(q) = C [qI − Φ]^-1 Γ ;  H(q) = C(q)/A(q) = C [qI − Φ]^-1 L∞ + I

Interpretation as a state observer:

x(k+1|k) = Φx(k|k−1) + Γu(k) + L∞ [y(k) − Cx(k|k−1)]
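The observer interpretation can be verified numerically for a first-order ARMAX model (the coefficients are my own choice): with L∞ = c1 − a1, the innovations y(k) − Cx(k|k−1) reconstruct the driving white noise e(k) once the start-up transient has decayed.

```python
import random

# ARMAX(1,1,1): y(k) + a1*y(k-1) = b1*u(k-1) + e(k) + c1*e(k-1)
a1, b1, c1 = -0.7, 1.0, 0.5
Phi, Gam, Linf, C = -a1, b1, c1 - a1, 1.0   # n = 1: the observer matrices are scalars

rng = random.Random(3)
N = 300
e = [rng.gauss(0.0, 0.3) for _ in range(N)]
u = [rng.gauss(0.0, 1.0) for _ in range(N)]
y = [0.0] * N
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * u[k - 1] + e[k] + c1 * e[k - 1]

# one-step-ahead observer: x(k+1|k) = Phi*x + Gam*u(k) + Linf*(y(k) - C*x)
x = 0.0
innov = []
for k in range(N):
    eps = y[k] - C * x       # innovation / one-step prediction error
    innov.append(eps)
    x = Phi * x + Gam * u[k] + Linf * eps

# after a short transient (decaying as (-c1)^k) the innovations equal e(k)
recon_err = max(abs(innov[k] - e[k]) for k in range(50, N))
```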


Connection with Steady State Kalman Estimator


The steady-state form of the Kalman prediction estimator (for large time) is given as

x̂(k+1|k) = Φx̂(k|k−1) + Γu(k) + L∞ e(k)
e(k) = y(k) − Cx̂(k|k−1)
L∞ = ΦP∞Cᵀ(R2 + CP∞Cᵀ)^-1
P∞ = ΦP∞Φᵀ + R1 − L∞CP∞Φᵀ

Thus, development of a time series model can be viewed as identification of the steady-state Kalman estimator without requiring explicit knowledge of the noise covariance matrices (R1, R2). The steady-state Kalman gain L∞ is parameterized through H(q) and estimated directly from data.


State Realization of MIMO T.F.

Consider the 2 × 2 transfer function model with outputs

y1(k) = g11(q)u1(k) + g12(q)u2(k), with state realization

x^(1)(k+1) = Φ^(1) x^(1)(k) + Γ^(1) u(k)
y1(k) = C^(1) x^(1)(k)

and

y2(k) = g21(q)u1(k) + g22(q)u2(k), with state realization

x^(2)(k+1) = Φ^(2) x^(2)(k) + Γ^(2) u(k)
y2(k) = C^(2) x^(2)(k)

Defining an augmented state vector X(k) = [x^(1)(k) ; x^(2)(k)], we can write the MIMO state realization as

X(k+1) = [ Φ^(1)  [0] ; [0]  Φ^(2) ] X(k) + [ Γ^(1) ; Γ^(2) ] u(k)

y(k) = [ C^(1)  0 ; 0  C^(2) ] X(k)


Non-uniqueness of State Realizations


Consider a square non-singular matrix Ψ. Defining a new state vector η(k) as

η(k) = Ψx(k)  ⇒  x(k) = Ψ^-1 η(k)

and substituting for x(k) in the state-space model,

η(k+1) = [ΨΦΨ^-1] η(k) + [ΨΓ] u(k)
y(k) = [CΨ^-1] η(k)

Defining new matrices

Φ̃ = ΨΦΨ^-1 ;  Γ̃ = ΨΓ ;  C̃ = CΨ^-1

we have the state dynamics in terms of the transformed states:

η(k+1) = Φ̃ η(k) + Γ̃ u(k)
y(k) = C̃ η(k)


Non-uniqueness of State Realizations


Note that both realizations have identical transfer functions, i.e.,

C̃ [qI − Φ̃]^-1 Γ̃ = CΨ^-1 [qI − ΨΦΨ^-1]^-1 ΨΓ = C [qI − Φ]^-1 Γ

Since a square invertible matrix Ψ can be chosen in infinitely many ways, there are infinitely many state realizations for a given transfer function.
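This invariance is easy to confirm numerically (a two-state sketch; the system matrices and Ψ are arbitrary choices of mine): transforming a realization with Ψ and comparing impulse responses shows the transfer function is unchanged.

```python
Phi = [[0.5, 0.1], [0.0, 0.3]]
Gam = [1.0, 0.5]
C = [1.0, 1.0]

Psi = [[2.0, 1.0], [1.0, 1.0]]        # det = 1, so its inverse is exact
Psi_inv = [[1.0, -1.0], [-1.0, 2.0]]

def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mv(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

# transformed realization: Phi~ = Psi Phi Psi^-1, Gam~ = Psi Gam, C~ = C Psi^-1
Phi_t = mm(mm(Psi, Phi), Psi_inv)
Gam_t = mv(Psi, Gam)
C_t = [sum(C[k] * Psi_inv[k][j] for k in range(2)) for j in range(2)]

def impulse(Phi, Gam, C, n):
    """Impulse response h(1), h(2), ... of x(k+1) = Phi x(k) + Gam u(k), y = C x."""
    x, h = Gam[:], []
    for _ in range(n):
        h.append(sum(C[i] * x[i] for i in range(2)))
        x = mv(Phi, x)
    return h

gap = max(abs(p - q) for p, q in zip(impulse(Phi, Gam, C, 25),
                                     impulse(Phi_t, Gam_t, C_t, 25)))
```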


Summary

- Grey box models
  - Better choice for representing system dynamics
  - Provide insight into the internal working of the system
  - Development process is time consuming and difficult
- Black box models
  - Relatively easy to develop
  - Provide no insight into the internal working of the system
  - Limited extrapolation abilities
- Black box model development
  - Noise modeling is necessary to properly extract the deterministic component of the model
  - Prediction error method is used for parameter estimation

Summary

- Black box model development
  - FIR and ARX models are relatively easy to develop but require large data sets to reduce variance errors
  - OE, ARMAX or BJ models provide a parsimonious description of model dynamics but require application of nonlinear optimization for parameter estimation
  - Variance errors are directly proportional to the number of model parameters and inversely proportional to the data length
  - Frequency domain analysis provides insight into the working of PEM; variance errors can be reduced by appropriately selecting the signal-to-noise ratio
  - Bias errors in frequency regions of interest can be reduced by appropriately choosing the spectrum of the perturbation input sequences


References

- Ljung, L., System Identification: Theory for the User, Prentice Hall, 1987.
- Söderström, T., and P. Stoica, System Identification, Prentice Hall, 1989.
- Åström, K. J., and B. Wittenmark, Computer Controlled Systems, Prentice Hall India, 1994.
- Franklin, G. F., Powell, J. D., and M. L. Workman, Digital Control Systems, Addison Wesley, 1990.
- Åström, K. J., and B. Wittenmark, Adaptive Control, Pearson Education, 1995.
