Model Predictive Control Notes

Published on March 2017

ACS6116
Advanced Industrial Control
Week 1 – Model Predictive Control
Dr Paul Trodden
Department of Automatic Control & Systems Engineering


Introduction to the module
Aim
This module aims to provide an introduction to the application
of advanced control techniques and the solution of control
problems often experienced in industrial environments. It
includes the advanced approach of model predictive control
(MPC) and covers the concepts of active noise (ANC) and
vibration control (AVC).

Organization
This week Model predictive control (Dr P Trodden)
Next week Active noise and vibration control (ANC/AVC)
(Drs MO Tokhi and SA Pope)
Third week Shared labs and tutorials

Teaching contact hours

▶ 25 hours of lectures
  ▶ 13 hours for Model predictive control
  ▶ 12 hours for ANC/AVC
▶ 12 hours of labs
  ▶ 5 hours for Model predictive control
  ▶ 11 hours for ANC/AVC
▶ 7 hours of tutorial sessions
  ▶ 3 hours for Model predictive control
  ▶ 4 hours for ANC/AVC
▶ Around 110 hours of independent study time

Timetable – Week 1

(timetable figure not reproduced)

Timetable – Week 3

(timetable figure not reproduced)

Introduction to the module
Assessment

▶ 60% written examination
  ▶ 2-hour examination paper in May/June: answer 3 questions from 4
    ▶ 2 questions on Model predictive control
    ▶ 1 question on Active noise control
    ▶ 1 question on Active vibration control
▶ 40% continuous assessment
  ▶ 20% Model predictive control
  ▶ 10% Active noise control
  ▶ 10% Active vibration control

Introduction to the module
Feedback

▶ Feedback is an important part of how you learn
▶ Applies to knowledge, understanding, and both specific and more general skills
▶ A key part of your academic and professional development
▶ Learning is not just about the marks you get
▶ Feedback can come from me, from your peers and, most importantly, from yourself
▶ Ultimately, to benefit from feedback you must reflect on your own learning
▶ Make use of information provided in many forms – make sure you pick up feedback when it is provided

Introduction to the module
Feedback in this module

▶ Reflect on your own understanding throughout the module – self-feedback
▶ Discussions with other students
▶ Ask questions during/after lectures
▶ In-class questions and exercises
▶ Do the problem sets and attend labs and tutorial classes
▶ Associate tutors in labs and tutorial classes
▶ Individual written feedback on assessment performance
▶ Whole-class feedback on exam performance

This week – Model predictive control
Intended learning outcomes

1. Describe the receding horizon principle.
2. Construct a finite-horizon optimal control problem.
3. Re-formulate a finite-horizon optimal control problem as an optimization problem.
4. Recall and evaluate the analytical expression for the unconstrained MPC control law.
5. Describe and explain the differences between LQR control and MPC, including advantages and disadvantages.
6. Describe and explain qualitatively the effect of tuning parameters on closed-loop performance.
7. Design MPC controllers with guaranteed nominal feasibility and stability.
8. Modify MPC controllers to include offset-free tracking.
9. Write MATLAB code to design, implement and simulate MPC controllers.

This week – Model predictive control
Syllabus and outlines

1. What is Model Predictive Control?
2. Unconstrained model predictive control
3. Constrained model predictive control
4. Stability and feasibility
5. Offset-free tracking and disturbance rejection


Part I
What is Model Predictive Control?


Section 1
Overview


What is model predictive control (MPC)?
Reading: Ch. 1 of Maciejowski

System: r → [Controller] → u → [System] → y, with dynamics
    dx/dt = f(x, u, t),  y = h(x, t)
▶ possibly multivariable/MIMO

Aim: provide controls, u, so as to
▶ meet the control objective
  ▶ have the output (y) track the setpoint (r)
  ▶ regulate the state (x) to 0
▶ minimize cost / maximize performance
▶ satisfy constraints

Constraints are ubiquitous…

▶ Physical limits
  ▶ Input constraints, e.g. actuator limits
  ▶ State constraints, e.g. capacity, flow limits
▶ Performance specifications
  ▶ Maximum overshoot
▶ Safety limits
  ▶ Maximum pressure, temperature

Optimal operation often means operating near constraints

Performance – operating near limits

(Output figure redrawn from Maciejowski; not reproduced here.)

Constraints are ubiquitous…
…but very tricky to handle

Classical and even most advanced control methods are restricted to a-posteriori handling of constraints:

▶ Anti-windup techniques for PID
▶ Detuning optimal LQR
▶ Trial and error

Why? Because they all try to determine the control law off-line:

    u_PID(t) = K_P e(t) + K_I ∫₀ᵗ e(τ) dτ + K_D de(t)/dt

    u_LQR(k) = K∞ x(k)

Enter model predictive control

▶ Constraints handled explicitly in design
▶ Handles multivariable systems easily
▶ Like LQR, optimizes a system performance measure
▶ Allows optimal operation, closer to limits

MPC is the only advanced control technique to have made significant impact in the industrial control domain.¹ It is the second-most popular control technique in industry.²

¹ J. M. Maciejowski. Predictive Control with Constraints. Prentice Hall, 2002, p. 352.
² J. B. Rawlings and B. T. Stewart. “Coordinating multiple optimization-based controllers: New opportunities and challenges”. In: Journal of Process Control 18 (2008), pp. 839–845.

MPC – the basic idea

▶ Measure the state or output of the system at the current sampling instant
▶ Solve an optimization problem to find the best control input to apply to the real system. The optimization
  ▶ uses a dynamic model of the system to predict/forecast behaviour for a limited time ahead from the current state
  ▶ chooses the forecast that optimizes the performance measure while satisfying all constraints
▶ Apply the optimized control input
▶ Repeat at the next sampling instant

Section 2
The basic algorithm


MPC – the basic algorithm

(Figure, shown over several frames: predicted output and input trajectories over the receding horizon at times k and k + 1, against the output/input constraints and the setpoint.)

MPC – the basic algorithm

Three ingredients
▶ Prediction
▶ Optimization
▶ Receding horizon

Prediction

▶ Use a dynamic model of the plant to predict
▶ Mainly, we’ll use discrete-time state-space models
    x(k + 1) = f(x(k), u(k))
    y(k) = g(x(k))
  where u(k), x(k), y(k) are the input, state, output at time k
▶ Predictions are made from the measured state, x(k|k) = x(k):
    {u(k|k), u(k + 1|k), …, u(k + N − 1|k)}   (N future controls)
    {x(k|k), x(k + 1|k), …, x(k + N|k)}   (current plus N future states)
    {y(k|k), y(k + 1|k), …, y(k + N|k)}   (current plus N future outputs)

Optimization

▶ Choose the forecast that minimizes a predicted cost
    ∑_{j=0}^{N} l(x(k + j|k), u(k + j|k))
  while satisfying any constraints on u, x, y
▶ Use constrained optimization to do this

The receding horizon principle

▶ Apply to the real system the first control u∗(k|k) from the optimized sequence
    {u∗(k|k), u∗(k + 1|k), …, u∗(k + N − 1|k)}
▶ The actual plant evolves as
    x(k + 1) = f̃(x(k), u∗(k|k))
    y(k + 1) = g̃(x(k + 1))
▶ Repeat at the next sampling instant


Section 3
History and examples of MPC


History and invention

Roots in
▶ Optimal control in the 1950s and 60s
▶ Industrial control in the 1970s and 80s
▶ Nonlinear, robust and constrained control (1990s)

Reinvented several times (under the monikers DMC, MAC, EPSAC, GPC, IDCOM, PFC, QDMC, SOLO, RHC, MBPC, inter alia)

MPC is a generic name for the approach, not a single algorithm

Roots in optimal control

    min ∫₀^∞ l(x(t), u(t), t) dt
    subject to dx/dt = f(x, u, t)

▶ A 300-year history
▶ Significant theory emerged from the 1950s onwards
▶ (Motivated by aerospace: military aircraft, missiles, space)
▶ LQR and LQG led to MPC

Roots in industrial process control

Earliest(?)
▶ Richalet et al. (1976) of Adersa: IDCOM
▶ Cutler and Ramaker (1980) of Shell Oil: Dynamic Matrix Control (DMC)

(Reproduced from S. J. Qin and T. A. Badgwell. “A survey of industrial model predictive control technology”. In: Control Engineering Practice 11 (2003), pp. 733–764.)

Historical usage in process control

(Chart reproduced from Qin and Badgwell; not reproduced here.)

The future

Traditionally, MPC has been used on stable plants with slow dynamics and other control layers.

But with rapid advances in
▶ computing hardware
▶ MPC theory
▶ optimization solution methods
new applications are emerging.

Emerging applications

▶ Aerospace
▶ Automotive
▶ Energy and power
▶ Robotics and autonomous systems

to name a few



Example: power systems

▶ Automatic generation control³
▶ Line over-load prevention⁴

³ A. N. Venkat et al. “Distributed MPC Strategies With Application to Power System Automatic Generation Control”. In: IEEE Transactions on Control Systems Technology 16.6 (2008), pp. 1192–1206.
⁴ M. R. Almassalkhi and I. A. Hiskens. “Model-Predictive Cascade Mitigation in Electric Power Systems With Storage and Renewables—Part I: Theory and Implementation”. In: IEEE Transactions on Power Systems 30 (2015), pp. 67–77.

Example: spacecraft control

Rendezvous and capture of a sample container
▶ Minimize fuel use
▶ Avoid collision with the container
▶ Capture within a specified time limit

E. N. Hartley et al. “Model predictive control system design and implementation for spacecraft rendezvous”. In: Control Engineering Practice 20.7 (July 2012), pp. 695–713.

Section 4
This course


Topics

▶ Unconstrained model predictive control
▶ Constrained model predictive control
▶ Stability and feasibility
▶ Offset-free tracking and disturbance rejection

MPC is a family of algorithms

The system might be
▶ nonlinear or linear
▶ continuous or discrete time
▶ continuous or discrete input/state/output
▶ deterministic or uncertain

We’ll restrict ourselves to linear, discrete-time, deterministic systems, with continuous inputs/states/outputs.

Books

▶ J. M. Maciejowski. Predictive Control with Constraints. Prentice Hall, 2002, p. 352.
▶ J. B. Rawlings and D. Q. Mayne. Model Predictive Control: Theory and Design. Nob Hill Publishing, 2009, p. 711. Freely(!) available online: http://jbrwww.che.wisc.edu/home/jbraw/mpc/
▶ J. A. Rossiter. Model-Based Predictive Control: A Practical Approach. CRC Press, 2003, p. 344.
▶ E. F. Camacho and C. Bordons. Model Predictive Control. 2nd edition. Springer, 2004, p. 405.

Part II
Unconstrained model predictive control

Section 1
Problem statement

Baseline assumptions

MPC → u → [System] → x (state feedback)

▶ Discrete-time linear state-space model
    x(k + 1) = Ax(k) + Bu(k)
    y(k) = Cx(k)
▶ The state x(k) is measurable at every k
▶ The control objective is to regulate x to 0 while minimizing a stage cost function l(x(k), u(k))  (note l : Rⁿ × Rᵐ → R)
▶ No disturbances, noise, model errors, delays

The optimal control problem

From an initial state x(0),

    min ∑_{k=0}^{∞} l(x(k), u(k))
    subject to x(k + 1) = Ax(k) + Bu(k), k = 0, 1, 2, …

▶ Often l(x, u) = (1/2)x⊤Qx + (1/2)u⊤Ru, in which case this is the infinite-horizon LQR problem.
▶ Solve once and apply u(k), k = 0, 1, 2, …

Infinite-horizon LQR

(Figure: state-space trajectory from x(0) to the origin, and the corresponding control sequence u(k) over k = 0 … 30.)

Section 2
LQ optimal control

Infinite-horizon LQR
Problem statement – discrete-time system

For a system
    x(k + 1) = Ax(k) + Bu(k)
with initial state x(0) at k = 0, find the control law u(k) = κ(x(k)) that minimizes the objective function
    ∑_{k=0}^{∞} x⊤(k)Qx(k) + u⊤(k)Ru(k)

▶ Q, R are weighting matrices
▶ The stage cost is a quadratic form
▶ Q and R are usually positive definite
▶ Common to set Q = C⊤C – then x⊤Qx = y⊤y

Aside: what is a quadratic form?
(See Mini-Tutorial 1 in Maciejowski)

A function of the form
    f(z) = z⊤Mz = ∑_{i,j=1}^{n} M_ij z_i z_j

Example
For x = [x₁; x₂] with Q = [Q₁₁ Q₁₂; Q₂₁ Q₂₂],
    x⊤Qx = Q₁₁x₁² + Q₁₂x₁x₂ + Q₂₁x₂x₁ + Q₂₂x₂²

▶ If Q is symmetric, x⊤Qx = Q₁₁x₁² + 2Q₁₂x₁x₂ + Q₂₂x₂²
▶ If Q is diagonal, x⊤Qx = Q₁₁x₁² + Q₂₂x₂²
▶ For any Q, x⊤Qx = x⊤Sx where S is symmetric

Aside: what is a quadratic form?

(Surface plot of x⊤Qx for Q₁₁ = Q₂₂ = 1, Q₁₂ = Q₂₁ = 0.1.)

Aside: what is a positive-definite matrix?

A symmetric matrix M is positive definite if
    z⊤Mz > 0 for all non-zero z.

▶ We often write M ≻ 0
▶ Equivalent to M having all positive eigenvalues
▶ Q ≻ 0 and R ≻ 0 means our LQR cost is zero at zero but positive everywhere else
▶ There is also the notion of positive semidefiniteness: M ⪰ 0

Aside: what is a quadratic form?

(Surface plots of x⊤Qx for three cases:
▶ Q₁₁ = Q₂₂ = 1, Q₁₂ = Q₂₁ = 0.1 (∴ Q ≻ 0)
▶ Q₁₁ = 1, Q₂₂ = Q₁₂ = Q₂₁ = 0 (∴ Q ⪰ 0)
▶ Q₁₁ = Q₂₂ = 0.1, Q₁₂ = Q₂₁ = 1 (∴ Q ⊁ 0))
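Definiteness can be checked numerically from the eigenvalues, as described above. A minimal NumPy sketch (the function name is my own; the course labs use MATLAB) classifying the three example matrices from these slides:

```python
import numpy as np

def definiteness(Q):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    eig = np.linalg.eigvalsh(Q)
    if np.all(eig > 0):
        return "positive definite"      # Q > 0 in the positive-definite sense
    if np.all(eig >= 0):
        return "positive semidefinite"  # Q >= 0
    return "indefinite"

# The three example matrices from the slides
print(definiteness(np.array([[1.0, 0.1], [0.1, 1.0]])))  # positive definite
print(definiteness(np.array([[1.0, 0.0], [0.0, 0.0]])))  # positive semidefinite
print(definiteness(np.array([[0.1, 1.0], [1.0, 0.1]])))  # indefinite
```

The indefinite case has eigenvalues 1.1 and −0.9, so x⊤Qx takes both signs, matching the saddle-shaped surface.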

Infinite-horizon LQR
Solution – discrete-time system

    min ∑_{k=0}^{∞} x⊤(k)Qx(k) + u⊤(k)Ru(k)
    subject to x(k + 1) = Ax(k) + Bu(k), k = 0, 1, 2, …

has a unique optimal solution (which is a linear time-invariant control law!)
    u(k) = K∞ x(k)
if
▶ R is positive definite (R ≻ 0)
▶ Q is positive semi-definite (Q ⪰ 0)
▶ The pair (A, B) is stabilizable
▶ The pair (Q^{1/2}, A) is detectable

Aside: what are stabilizability and detectability?

Recall that an nth-order system
    x(k + 1) = Ax(k) + Bu(k)
    y(k) = Cx(k)
is controllable if and only if rank(𝒞) = n, where
    𝒞 = [B  AB  …  A^{n−1}B],
and observable if and only if rank(𝒪) = n, where
    𝒪 = [C; CA; …; CA^{n−1}].

Aside: what are stabilizability and detectability?

▶ Controllable: the system can be steered, in finite time, from any state x to any other state x′
▶ Observable: the initial state x(0) can be determined, in finite time, for any possible state and control input trajectories x(k) and u(k), k ⩾ 0
▶ Stabilizable: all uncontrollable modes are stable and decay to steady state
▶ Detectable: all unobservable modes are stable and decay to steady state

Example 1

The system
    x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)
    y(k) = [1 0] x(k)
with Q = C⊤C = [1 0; 0 0]:
▶ (A, B) is controllable (rank(𝒞) = 2), therefore stabilizable
▶ (Q^{1/2}, A) = (C, A) is observable (rank(𝒪) = 2), therefore detectable
▶ (Note Q ⪰ 0)
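The rank tests in Example 1 are one-liners in code. A NumPy sketch (MATLAB's `ctrb`/`obsv` are the lab equivalents; the variable names here are my own):

```python
import numpy as np

# System from Example 1: x(k+1) = A x(k) + B u(k), y = C x
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^(n-1) B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# Observability matrix [C; CA; ...; C A^(n-1)]
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print(np.linalg.matrix_rank(ctrb))  # 2 -> controllable, hence stabilizable
print(np.linalg.matrix_rank(obsv))  # 2 -> observable, hence detectable
```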

Aside: what are stabilizability and detectability?
Example 2

The system
    x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)
    y(k) = [0 1] x(k)
with Q = C⊤C = [0 0; 0 1]:
▶ (A, B) is controllable (rank(𝒞) = 2), therefore stabilizable
▶ (Q^{1/2}, A) = (C, A) is not observable (rank(𝒪) = 1), so we cannot conclude detectability
▶ Is (Q^{1/2}, A) detectable? Check the unobservable modes (obsvf.m) – if they are stable, the system is detectable

Aside: what are stabilizability and detectability?
Observability canonical form

Apply the similarity transformation z = Px:
    z(k + 1) = PAP⁻¹ z(k) + PB u(k)
             = [Ã₁₁ 0; Ã₂₁ Ã₂₂][z₁(k); z₂(k)] + PB u(k)
    y(k) = CP⁻¹ z(k) = [C̃₁ 0][z₁(k); z₂(k)]

▶ z₁: observable modes; z₂: unobservable modes
▶ Detectable if and only if Ã₂₂ is stable

Aside: what are stabilizability and detectability?
Example 2 (continued)

Here, with A = [1 1; 0 1] and C = [0 1],
    𝒪 = [C; CA] = [0 1; 0 1]

Let P = [0 1; 1 1]. Then
    Ã = PAP⁻¹ = [Ã₁₁ 0; Ã₂₁ Ã₂₂] = [1 0; 1 1]

Ã₂₂ = 1, which is only neutrally stable – (C, A) is not detectable.

Back to infinite-horizon LQR
Solution – discrete-time system

    min ∑_{k=0}^{∞} x⊤(k)Qx(k) + u⊤(k)Ru(k)
    subject to x(k + 1) = Ax(k) + Bu(k), k = 0, 1, 2, …

has a unique optimal solution
    u(k) = K∞ x(k)

How do we find K∞?
▶ Solve the discrete algebraic Riccati equation (DARE)
▶ (See Mini-Tutorial 7 in Maciejowski)
▶ Can we use optimization instead?
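MATLAB's `dlqr` solves the DARE directly in the labs. As a sketch of what it does, the following NumPy code (function name mine) finds K∞ by iterating the Riccati recursion to convergence, for the example system that appears later in these notes:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Gain K such that u = K x minimizes the infinite-horizon LQR cost.

    Iterates P <- Q + A'PA - A'PB (R + B'PB)^-1 B'PA to (approximate)
    convergence, then returns K = -(R + B'PB)^-1 B'PA.
    """
    P = Q
    for _ in range(iters):
        BtP = B.T @ P
        P = Q + A.T @ P @ A - A.T @ P @ B @ np.linalg.solve(R + BtP @ B, BtP @ A)
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Example system used later in the notes, with Q = I, R = 100
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = lqr_gain(A, B, np.eye(2), np.array([[100.0]]))
print(np.round(K, 4))  # approx [[-1.1303 -1.0642]], matching the K_N table later
```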

Back to infinite-horizon LQR
Solving by optimization

    min_{u(0),u(1),u(2),…} ∑_{k=0}^{∞} x⊤(k)Qx(k) + u⊤(k)Ru(k)
    subject to x(k + 1) = Ax(k) + Bu(k), k = 0, 1, 2, …

▶ Optimize over the control input sequence {u(0), u(1), u(2), …}
▶ Quadratic objective, linear equality constraints – a QP?
▶ Infinite sequence, therefore infinite decision variables in the optimization problem – intractable!

Finite-horizon LQR
Solving by optimization

    min_{u(0),u(1),…,u(N−1)} ∑_{k=0}^{N−1} ( x⊤(k)Qx(k) + u⊤(k)Ru(k) ) + x⊤(N)Px(N)
    subject to x(k + 1) = Ax(k) + Bu(k), k = 0, 1, 2, …, N − 1

▶ The integer N is the prediction horizon length
▶ P is a terminal cost matrix (more later)
▶ From the initial state x(0), we optimize the control input sequence {u(0), u(1), u(2), …, u(N − 1)}
▶ The corresponding state predictions are {x(0), x(1), x(2), …, x(N)}
▶ Quadratic objective, linear equality constraints – a QP?

Section 3
Formulating the optimization problem

Formulating the optimization problem

Two steps
1. Construct the prediction matrices
2. Construct the cost matrices

Constructing prediction matrices

▶ The control sequence is {u(0), u(1), u(2), …, u(N − 1)}
▶ Starting from x(0) and applying the model recursively,
    x(j) = A^j x(0) + A^{j−1}Bu(0) + A^{j−2}Bu(1) + ⋯ + Bu(j − 1)
▶ Collect the predictions over all steps j = 1, …, N:
    x = Fx(0) + Gu    (1)
  where
    x ≜ [x(1); x(2); …; x(N)],   u ≜ [u(0); u(1); …; u(N − 1)],
    F ≜ [A; A²; …; A^N],
    G ≜ [B 0 … 0; AB B … 0; ⋮ ⋮ ⋱ ⋮; A^{N−1}B A^{N−2}B … B]

Constructing prediction matrices
Example

    x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)    (2)

For N = 3, [x(1); x(2); x(3)] = Fx(0) + G [u(0); u(1); u(2)], where

    F = [A; A²; A³] = [1 1; 0 1; 1 2; 0 1; 1 3; 0 1]

    G = [B 0 0; AB B 0; A²B AB B]
      = [0.5 0 0; 1 0 0; 1.5 0.5 0; 1 1 0; 2.5 1.5 0.5; 1 1 1]
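The recursion above is easy to mechanize. The labs use MATLAB; as a sketch, this NumPy function (name mine) builds F and G for any (A, B, N) and reproduces the matrices of the example:

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build F and G so that the stacked prediction is x = F x(0) + G u."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for i in range(N):          # block row i predicts x(i+1)
        for j in range(i + 1):  # influenced by u(0), ..., u(i)
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return F, G

# System (2) with N = 3
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
F, G = prediction_matrices(A, B, 3)
print(G[-2:])  # last block row [A^2 B, AB, B] = [[2.5, 1.5, 0.5], [1, 1, 1]]
```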

Constructing cost matrices

The FH-LQR cost function is
    ∑_{k=0}^{N−1} ( x⊤(k)Qx(k) + u⊤(k)Ru(k) ) + x⊤(N)Px(N)

Write this as
    x⊤(0)Qx(0)
    + [x(1); x(2); …; x(N)]⊤ blkdiag(Q, Q, …, Q, P) [x(1); x(2); …; x(N)]
    + [u(0); u(1); …; u(N − 1)]⊤ blkdiag(R, R, …, R) [u(0); u(1); …; u(N − 1)]

Constructing cost matrices

Then the problem is
    min x⊤(0)Qx(0) + x⊤Q̃x + u⊤R̃u
    subject to x = Fx(0) + Gu
where
    Q̃ = blkdiag(Q, Q, …, Q, P),   R̃ = blkdiag(R, R, …, R)

Now eliminate the equality constraints by substituting for x (left as an exercise)

Constructing cost matrices

The final FH-LQR problem from initial state x(0) is
    min_u (1/2) u⊤Hu + c⊤u + α    (3)
where
▶ H = 2(G⊤Q̃G + R̃) is the Hessian matrix, and depends only on the cost matrices (Q, R, P) and system matrices (A, B)
▶ c = Lx(0), where L = 2G⊤Q̃F, is a vector of constants depending on x(0)
▶ α = x⊤(0)Mx(0), where M = F⊤Q̃F + Q, is a scalar constant depending on x(0)
▶ Note Q, P ⪰ 0, R ≻ 0 ⇒ H ≻ 0

Constructing cost matrices
Example

For the system (2), with N = 3, suppose Q = P = I2×2, R = 100.
Then Q̃ = I6×6, R̃ = 100 I3×3, and

    H = 2(G⊤Q̃G + R̃) = 2G⊤G + 200 I3×3
      = 2 [11.75 6.5 2.25; 6.5 4.5 1.75; 2.25 1.75 1.25] + 200 I3×3
      = [223.5 13 4.5; 13 209 3.5; 4.5 3.5 202.5]

Note H is symmetric and positive definite.

Constructing cost matrices
Example

For the system (2), with N = 3, suppose Q = P = I2×2, R = 100.
Then Q̃ = I6×6, R̃ = 100 I3×3, and

    c = Lx(0) = 2G⊤Q̃Fx(0) = 2G⊤Fx(0)
      = 2 [4.5 14; 2 7.5; 0.5 2.5] x(0)
      = [9 28; 4 15; 1 5] x(0)

Constructing cost matrices
Example

For the system (2), with N = 3, suppose Q = P = I2×2, R = 100. Then

    α = x⊤(0)Mx(0) = x⊤(0)(F⊤Q̃F + Q)x(0)
      = x⊤(0)(F⊤F + I2×2)x(0)
      = x⊤(0) [4 6; 6 18] x(0)
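As a check on the three worked examples above, a short NumPy sketch (all variable names mine; the labs use MATLAB) that assembles Q̃ and R̃ and then H, L and M for system (2):

```python
import numpy as np

# System (2) with N = 3, Q = P = I, R = 100
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
N = 3
F = np.vstack([np.linalg.matrix_power(A, j + 1) for j in range(N)])
G = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        G[2*i:2*i+2, j:j+1] = np.linalg.matrix_power(A, i - j) @ B

Qt = np.eye(6)            # blkdiag(Q, Q, P) with Q = P = I
Rt = 100.0 * np.eye(3)    # blkdiag(R, R, R)

H = 2 * (G.T @ Qt @ G + Rt)   # Hessian of the QP
L = 2 * G.T @ Qt @ F          # c = L x(0)
M = F.T @ Qt @ F + np.eye(2)  # alpha = x(0)' M x(0)

print(np.round(H, 1))  # [[223.5 13 4.5], [13 209 3.5], [4.5 3.5 202.5]]
print(L)               # [[9 28], [4 15], [1 5]]
print(M)               # [[4 6], [6 18]]
```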

Section 4
Solving the optimization problem

Solving the optimization problem

    min_u (1/2) u⊤Hu + c⊤u + α

▶ This is an unconstrained QP: minimize the quadratic function
    J(x(0), u) = (1/2) u⊤Hu + c⊤u + α
▶ Compute the gradient ∇ᵤJ(x(0), u)
▶ Set ∇ᵤJ(x(0), u) = 0 and solve for u∗

Solving the optimization problem

▶ Function
    J(x(0), u) = (1/2) u⊤Hu + c⊤u + α
▶ Gradient
    ∇ᵤJ(x(0), u) = Hu + c
▶ Set to zero and solve for u∗:
    Hu∗ + c = 0  ⇒  u∗ = −H⁻¹c
▶ But c = Lx(0). Therefore,
    u∗ = −H⁻¹Lx(0)    (4)
▶ Unique, global optimum if H ≻ 0 (∵ H is then invertible)

The FH-LQR control law

    u∗ = −H⁻¹Lx(0)    (5)

▶ Note that u∗ does not depend on α = x⊤(0)Mx(0)
▶ So we can compute u∗ with only H, L and x(0)
▶ But we would need α to compute the correct cost, J(x(0), u∗(0))
▶ (This is important later for stability analysis)
▶ Finally, and most importantly, we never directly invert H to compute u∗(0)

The FH-LQR control law

    u∗ = −H⁻¹Lx(0)

▶ Recall u is the stacked vector of future inputs
▶ Therefore
    u(0) = [I_m 0 0 … 0](−H⁻¹L)x(0)
    u(1) = [0 I_m 0 … 0](−H⁻¹L)x(0)
    ⋮
    u(j) = [0 … I_m … 0](−H⁻¹L)x(0)
    ⋮
    u(N − 1) = [0 0 0 … I_m](−H⁻¹L)x(0)

The FH-LQR control law
Example

For the system (2), with N = 3 and Q = P = I2×2, R = 100,

    u∗ = −H⁻¹Lx(0)
       = −[223.5 13 4.5; 13 209 3.5; 4.5 3.5 202.5]⁻¹ [9 28; 4 15; 1 5] x(0)
       = −[0.0392 0.1211; 0.0166 0.0639; 0.0038 0.0209] x(0)

Therefore,
    u(0) = −[0.0392 0.1211] x(0)
    u(1) = −[0.0166 0.0639] x(0)
    u(2) = −[0.0038 0.0209] x(0)
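The gains above can be reproduced in a few lines. Note that, as remarked earlier, we solve HX = L rather than forming H⁻¹ explicitly (NumPy sketch; the labs use MATLAB):

```python
import numpy as np

# H and L from the worked example for system (2), N = 3, Q = P = I, R = 100
H = np.array([[223.5, 13.0, 4.5],
              [13.0, 209.0, 3.5],
              [4.5, 3.5, 202.5]])
L = np.array([[9.0, 28.0],
              [4.0, 15.0],
              [1.0, 5.0]])

# Solve H X = L instead of inverting H; gains = -H^{-1} L
gains = -np.linalg.solve(H, L)
print(np.round(gains, 4))
```

Each row of `gains` is the feedback gain for one predicted input u(0), u(1), u(2).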

Finite-horizon LQR versus infinite-horizon LQR

(Figure: state-space trajectories and control sequences u(k) from x(0) for N = ∞, 7, 12, 20.)

Beyond the horizon

▶ We solved the FH-LQR problem at x(0)
▶ The solution gives N open-loop controls, minimizing
    ∑_{k=0}^{N−1} ( x⊤(k)Qx(k) + u⊤(k)Ru(k) ) + x⊤(N)Px(N)
▶ But what about k ⩾ N?
  ▶ Could solve the FH-LQR problem again at k = N
  ▶ But in MPC we solve the FH-LQR problem at every k
  ▶ Receding horizon
  ▶ Close the loop

Section 5
The receding-horizon control law

Receding-horizon FH-LQR…
…or MPC with linear model and quadratic cost (LQ-MPC)

At time step k:
1. Measure the state x(k)
2. Solve the FH-LQR problem for u∗(k):
    u∗(k) = arg min_{u(k)} (1/2) u⊤(k)Hu(k) + x⊤(k)L⊤u(k)
3. Implement u(k) as the first control in u∗(k)
4. Wait one step; increment k; return to 1

MPC – receding horizon control

(Figure: at time k, the optimized state prediction x∗(k) and input sequence u∗(k) are computed; only the first input u(k) is applied, the system moves to x(k + 1), and a new optimization yields x∗(k + 1), u∗(k + 1) over the shifted horizon, all against the constraints and setpoint.)

Some notation

▶ x(k) is the state at time k
▶ x(k + j|k) is the prediction, made at k, of the state at time k + j (i.e. j steps ahead)
▶ Similarly for u(k) and u(k + j|k)
▶ Given the current state x(k), the prediction equation is
    x(k) = Fx(k) + Gu(k)
  where F and G are as before, and
    x(k) ≜ [x(k + 1|k); x(k + 2|k); …; x(k + N|k)],
    u(k) ≜ [u(k|k); u(k + 1|k); …; u(k + N − 1|k)]

The unconstrained LQ-MPC problem

At a state x(k) at time k,

    min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k + j|k)Qx(k + j|k) + u⊤(k + j|k)Ru(k + j|k) ) + x⊤(k + N|k)Px(k + N|k)
    subject to
        x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, …, N − 1
        x(k|k) = x(k)

The unconstrained LQ-MPC problem

Equivalently,
    min_{u(k)} (1/2) u⊤(k)Hu(k) + x⊤(k)L⊤u(k) (+ α)

Some definitions

▶ The cost function is
    J_N(x(k), u(k)) = ∑_{j=0}^{N−1} ( x⊤(k + j|k)Qx(k + j|k) + u⊤(k + j|k)Ru(k + j|k) ) + x⊤(k + N|k)Px(k + N|k)
▶ The value function is
    J∗_N(x(k)) = min_{u(k)} J_N(x(k), u(k))
▶ The optimal control sequence is
    u∗(k) = arg min_{u(k)} J_N(x(k), u(k)) = {u∗(k|k), u∗(k + 1|k), …, u∗(k + N − 1|k)}

The receding-horizon LQR control law

We saw that, for the initial state x(0),
    u∗(0) = −H⁻¹Lx(0)
    u(0) = [I_m 0 0 … 0](−H⁻¹L)x(0)

What happens at x(k)? We know that
▶ H = 2(G⊤Q̃G + R̃) does not depend on x(k) or k
▶ L = 2G⊤Q̃F does not depend on x(k) or k
▶ Therefore,
    u∗(k) = −H⁻¹Lx(k)
    u(k) = K_N x(k)
  is a linear, time-invariant control law, where
    K_N = [I_m 0 0 … 0](−H⁻¹L)

The receding-horizon LQR control law
Example

For the system (2), with N = 3 and Q = P = I2×2, R = 100,

    u∗(k) = −H⁻¹Lx(k) = −[0.0392 0.1211; 0.0166 0.0639; 0.0038 0.0209] x(k)

Therefore, the MPC law is u(k) = K₃x(k), where
    K₃ = −[0.0392 0.1211]
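Putting the pieces together, a NumPy sketch (function and variable names mine; the labs use MATLAB) that computes K_N as the first block row of −H⁻¹L and verifies the resulting closed loop is stable:

```python
import numpy as np

def mpc_gain(A, B, Q, R, P, N):
    """Unconstrained LQ-MPC gain: K_N = [I 0 ... 0](-H^{-1} L)."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, j + 1) for j in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qt = np.kron(np.eye(N), Q)
    Qt[-n:, -n:] = P                      # terminal weight replaces the last Q
    Rt = np.kron(np.eye(N), R)
    H = 2 * (G.T @ Qt @ G + Rt)
    L = 2 * G.T @ Qt @ F
    return -np.linalg.solve(H, L)[:m, :]  # first block row only

# System (2) with N = 3, Q = P = I, R = 100
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K3 = mpc_gain(A, B, np.eye(2), np.array([[100.0]]), np.eye(2), 3)
print(np.round(K3, 4))  # approx [[-0.0392 -0.1211]]

# Closed loop x(k+1) = (A + B K_N) x(k): check all eigenvalues lie in the unit circle
eigs = np.linalg.eigvals(A + B @ K3)
print(np.max(np.abs(eigs)) < 1)
```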

Unconstrained LQ-MPC – revised algorithm

At time step k:
1. Measure the state x(k)
2. Apply the optimal RH control law
    u(k) = K_N x(k)
3. Wait one step; increment k; return to 1

No need to solve an optimization on-line!

LQ-MPC – receding-horizon control

(Figure, shown over several frames: closed-loop state-space trajectory x(0), x(1), x(2), … under the receding-horizon law, with the corresponding controls u(k), compared against the N = ∞ solution.)

LQ-MPC versus infinite-horizon LQR

(Figure: closed-loop state-space trajectories and controls for receding-horizon laws with N = 5, 7, 12, 20 against the N = ∞ LQR solution.)

Section 6
A general unconstrained MPC formulation

A general unconstrained MPC formulation

    min_{u(k)} ∑_{j=0}^{N−1} l(x(k + j|k), u(k + j|k)) + F(x(k + N|k))
    subject to
        x(k + j + 1|k) = f(x(k + j|k), u(k + j|k)), j = 0, 1, …, N − 1
        x(k|k) = x(k)

▶ l is the stage cost and F is a terminal cost (more later)
▶ Control sequence u∗(k) = {u(k|k), …, u(k + N − 1|k)}
▶ Depending on l, F and f, the problem may have
  ▶ a unique, global optimum (strictly convex problem)
  ▶ a set of global optima (convex problem)
  ▶ multiple local optima (nonconvex problem)
  ▶ an unbounded objective

(Surface plots: a strictly convex function x⊤Qx; a non-strictly convex function; a nonconvex function V(x); and an unbounded quadratic.)

A general unconstrained MPC formulation

    min_{u(k)} ∑_{j=0}^{N−1} l(x(k + j|k), u(k + j|k)) + F(x(k + N|k))
    subject to
        x(k + j + 1|k) = f(x(k + j|k), u(k + j|k)), j = 0, 1, …, N − 1
        x(k|k) = x(k)

▶ The control implemented is the first element of u∗(k)
▶ The (implicit) MPC control law is u(k) = κN(x(k))
▶ Usually, there is no closed-form expression for κN(·)
▶ LQR is one of the only formulations with a closed-form control law, κN(x) = K_N x

Some useful alternative MPC formulations
The Linear Absolute Regulator (LAR)

    min_{u(k)} ∑_{j=0}^{N−1} ( q‖x(k + j|k)‖₁ + r‖u(k + j|k)‖₁ ) + p‖x(k + N|k)‖₁
    subject to
        x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, …, N − 1
        x(k|k) = x(k)

▶ Minimizes total, rather than “RMS”, control effort
▶ No closed-form solution for κN(·) – solve numerically for u∗
▶ The problem transforms to a constrained linear program (LP) rather than an unconstrained QP

Some useful alternative MPC formulations

▶ Optimize over input changes ∆u(k) = u(k) − u(k − 1):
    min_{∆u(k)} ∑_{j=0}^{N−1} x⊤(k + j|k)Qx(k + j|k) + ∆u⊤(k + j|k)R∆u(k + j|k)
  Useful for suppressing rapid control changes
▶ Separately defined prediction and control horizons:
    min_{u(k)} ∑_{j=0}^{N} x⊤(k + j|k)Qx(k + j|k) + ∑_{j=0}^{N_c−1} u⊤(k + j|k)Ru(k + j|k)
  with N_c ⩽ N and u(k + j|k) = Kx(k + j|k) for j ⩾ N_c

Section 7
Closed-loop performance and stability

Back to unconstrained LQ-MPC

Problem:
    min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k + j|k)Qx(k + j|k) + u⊤(k + j|k)Ru(k + j|k) ) + x⊤(k + N|k)Px(k + N|k)
    subject to
        x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, …, N − 1
        x(k|k) = x(k)

Optimal control law:
    u(k) = K_N x(k)

How do we choose Q, R, P and N?


The effect of prediction horizon length
Example

The following system with Q = P = I, R = 100:
    x(k + 1) = [2 1; 0 1] x(k) + [0.5; 1] u(k)

    N     K_N
    3     −[0.3576 0.3234]
    5     −[1.0160 0.9637]
    7     −[1.0791 1.0381]
    12    −[1.1087 1.0534]
    20    −[1.1278 1.0629]
    25    −[1.1299 1.0640]
    ∞     −[1.1303 1.0642]

K_N → K∞ as N → ∞
.

How do we choose Q, R, P and N?

▶ Q, R usually tuned to deliver the desired response
▶ P = Q seems “sensible”?
▶ And N? K_N → K∞ as N → ∞. But
  ▶ with increasing N,
    ▶ the computational burden increases
    ▶ the QP matrices become more ill-conditioned
  ▶ decreasing N worsens performance
    ▶ K_N might not even be stabilizing!
▶ Choose the smallest acceptable N

K_N might not even be stabilizing?

Recall we can assess the stability of a controlled linear system
    x(k + 1) = (A + BK)x(k)
by finding the eigenvalues of (A + BK). The system is
▶ globally asymptotically stable if and only if |λᵢ| < 1, i = 1 … n
▶ marginally stable if |λᵢ| ⩽ 1, with |λⱼ| = 1 for some j
▶ unstable if |λᵢ| > 1 for any i

In other words, a stable closed-loop system has all eigenvalues of (A + BK) inside the unit circle.

Aside: what is global asymptotic stability?

From any initial state x(0), the system remains bounded and converges (settles) asymptotically to 0.

Formal definition
For a system x(k + 1) = f(x(k)), the origin x = 0 is
▶ stable if for any positive scalar ϵ there exists a positive scalar δ such that
    ‖x(0)‖ < δ ⇒ ‖x(k)‖ < ϵ, ∀k > 0
▶ globally asymptotically stable if it is stable and also, for any initial state x(0),
    ‖x(k)‖ → 0 as k → ∞
▶ unstable if it is not stable
.

KN might not even be stabilizing
Example
The following system with Q = P = I, R = 100:

x(k + 1) = [2 1; 0 1] x(k) + [0.5; 1] u(k)

N      KN                    λ(A + BKN )    Stable?
3      −[0.3576 0.3234]      1.41, 1.08     No
5      −[1.0160 0.9637]      0.82, 0.71     Yes
7      −[1.0791 1.0381]      0.92, 0.51     Yes
12     −[1.1087 1.0534]      0.89, 0.50     Yes
20     −[1.1278 1.0629]      0.87, 0.50     Yes
25     −[1.1299 1.0640]      0.87, 0.50     Yes
∞      −[1.1303 1.0642]      0.87, 0.50     Yes

108 / 253

.
.

The effect of N on stability

.



Suggests we just need N large enough



Unfortunately, not that simple: very complex relationship
between stability and N, Q, R and P, as the next example
shows.

109 / 253

.

.

The effects of N and R on stability
Example (Maciejowski)
The following system with Q = P = I:

x(k + 1) = [1.216 −0.055; 0.221 0.9947] x(k) + [0.02763; 0.002763] u(k)

[Figure: region of the (N, R) plane, N = 0 to 40, R = 0 to 8, for which (A + BKN ) is stable]

110 / 253

.

Section 8
Guaranteeing stability of unconstrained
LQ-MPC

.

111 / 253

.

.

Back to infinite-horizon LQR

min ∑_{k=0}^{∞} ( x⊤(k)Qx(k) + u⊤(k)Ru(k) )
subject to
x(k + 1) = Ax(k) + Bu(k), k = 0, 1, 2, . . .

The closed-loop system
x(k + 1) = (A + BK∞ )x(k)
is globally asymptotically stable if
▶ R is positive definite (R ≻ 0)
▶ Q is positive semi-definite (Q ⪰ 0)
▶ The pair (A, B) is stabilizable
▶ The pair (Q^{1/2}, A) is detectable

112 / 253
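The gain K∞ can be computed from the discrete algebraic Riccati equation. A sketch using scipy (an assumed Python stand-in for the MATLAB `dlqr`-style workflow used in this module):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Infinite-horizon LQR for the slides' example system, with Q = I, R = 100
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[100.0]])

# P solves the discrete algebraic Riccati equation; the optimal gain is
# u(k) = -K x(k) with K = (R + B'PB)^{-1} B'PA
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The resulting closed loop x(k+1) = (A - BK)x(k) is guaranteed stable
print(np.round(K, 4))
```

Under the slides' sign convention u(k) = K∞ x(k), this K corresponds to K∞ = −K ≈ −[1.1303 1.0642].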

.
.

Stability of unconstrained LQ-MPC
min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k+N|k)Px(k+N|k)
subject to
x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, . . . , N − 1
x(k|k) = x(k)

The closed-loop system is
x(k + 1) = (A + BKN )x(k)
We’ll assume R ≻ 0, Q ⪰ 0, (A, B) stabilizable, (Q^{1/2}, A) detectable.
Question: when is KN stabilizing?
.

113 / 253

.

.

When is KN stabilizing?
As we saw, even if K∞ is stabilizing, KN is not necessarily
stabilizing for a given N and Q, R, P

How can we make sure KN is stabilizing?


Infinite horizons are stabilizing
▶ Use sufficiently large N so that horizon is “infinite”
▶ Use a terminal constraint

114 / 253

.
.

Terminal state constraint for stability
[Figure: state-space trajectory and control inputs u(k); the N-step prediction from x(k) terminates before reaching the origin]

Observation: because N is finite, x(k + N|k) ̸= 0 – the “plan”
stops short of the origin (the target)
.

115 / 253

.

.

Terminal state constraint for stability
Idea

Add the constraint x(k + N|k) = 0 to the MPC problem


Then a plan must end at the origin to be feasible



If x(k + N|k) = 0, then x(k + N + 1|k) = Ax(k + N|k) = 0 for
u(k + N|k) = 0



The state remains at the origin for all k ⩾ N!

116 / 253

.
.

Terminal state constraint for stability
Formulation

min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) )
subject to
x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, . . . , N − 1
x(k|k) = x(k)
x(k + N|k) = 0

▶ No terminal cost needed – why?
▶ Note that the cost for k = N . . . ∞ is zero
▶ The cost is an “infinite”-horizon cost

117 / 253

.

.

Terminal state constraint for stability
Drawbacks
Several


Problem is a constrained QP – harder to solve



We lose the closed-form linear control law; now u = κN (x)



Can lead to large and very active control signals
▶ For small N, x(0) far from 0 will mean large |u| in order
for x(0 + N|0) = 0
▶ Conversely, for x(0) far from 0, a large N will be
needed to keep |u| small



Results in poor robustness to disturbances and modelling
errors

118 / 253

.
.

Infinite horizons are stabilizing
Estimating the infinite-horizon cost
Rather than requiring x(k + N|k) = 0, what if we could estimate the infinite-horizon cost?

∑_{j=0}^{∞} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) )
  = ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) )   [cost over finite horizon]
  + ∑_{j=N}^{∞} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) )   [cost-to-go]

119 / 253

.

.

Infinite horizons are stabilizing
Estimating the cost-to-go
The finite-horizon MPC cost is

∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) )   [cost over finite horizon]
  + x⊤(k + N|k)Px(k + N|k)   [terminal cost]

So far, we’ve always set P = Q. But what if we could set P so that
terminal cost = cost-to-go
Then the finite-horizon MPC cost would estimate the infinite-horizon cost

120 / 253

.
.

Terminal cost for stability
Want

x⊤(k + N|k)Px(k + N|k) = ∑_{j=N}^{∞} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) )

The trick is to assume the use of a stabilizing control law beyond the horizon
u(k + N + j|k) = Kx(k + N + j|k), j ⩾ 0
Then the problem reduces to finding the P that satisfies the Lyapunov equation
(A + BK)⊤P(A + BK) − P + (Q + K⊤RK) = 0
.

121 / 253
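The Lyapunov equation above is a linear equation in P and is solved directly by standard routines. A numpy/scipy sketch (mirroring the MATLAB `dlyap` call used later in these slides):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Terminal cost P for the slides' example: mode-2 law u = Kx with the
# infinite-horizon gain, Q = I, R = 100
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[100.0]])
K = -np.array([[1.1303, 1.0642]])     # u(k) = K x(k), stabilizing

Ac = A + B @ K                         # mode-2 closed-loop matrix
S = Q + K.T @ R @ K
# solve_discrete_lyapunov(a, q) returns X with  a X a^T - X + q = 0,
# so passing a = Ac^T gives  Ac^T P Ac - P + S = 0 (cf. dlyap in MATLAB)
P = solve_discrete_lyapunov(Ac.T, S)
print(np.round(P, 2))
```

The result matches the P ≈ [194.64 161.43; 161.43 147.77] quoted in the example that follows.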

.

.

Dual-mode MPC
The resulting approach is often called dual-mode MPC
u(k+j|k) = { optimization variables,  j = 0, 1, . . . , N − 1  (mode 1)
           { Kx(k + j|k),            j = N, N + 1, . . .      (mode 2)

▶ Mode 2 is used only for stability; the implemented control is still the first in the optimal control sequence
▶ Can choose any K we like, provided (A + BK) is stable
▶ If (A, B) is stabilizable, such a K must exist

122 / 253

.
.

Dual-mode MPC
Example
Let’s look again at the system

x(k + 1) = [2 1; 0 1] x(k) + [0.5; 1] u(k)

with Q = I and R = 100. This was unstable with P = I, since
K3 = −[0.3576 0.3234]
led to (A + BK3 ) having eigenvalues outside the unit circle.
Let’s try to find a P that leads to a stabilizing K3
.

123 / 253

.

.

Dual-mode MPC
Example


▶ Find a K for mode 2 – we already know
  K∞ = −[1.1303 1.0642]
▶ Solve the Lyapunov equation
  (A + BK)⊤P(A + BK) − P + (Q + K⊤RK) = 0
  Using MATLAB:
  P = dlyap((A+B*K)',(Q+K'*R*K))
  P = [194.64 161.43; 161.43 147.77]
▶ The new RH controller is
  K3 = −[1.1303 1.0642]
Notice anything?

.

124 / 253

.

Dual-mode MPC
Summary
▶ The MPC predictions are
  u(k+j|k) = { optimization variables,  j = 0, 1, . . . , N − 1  (mode 1)
             { Kx(k + j|k),            j = N, N + 1, . . .      (mode 2)
▶ K chosen to stabilize (A, B); P obtained from the Lyapunov equation
▶ The MPC (receding-horizon) control law is
  u∗ (k) = −H−1 Lx(k),  u(k) = KN x(k)
▶ If K = K∞ , then KN = K∞
▶ Otherwise, KN ̸= K∞ (sub-optimal)

125 / 253

.

.

Part III
Constrained model predictive
control

126 / 253

.
.

Section 1
Motivation: all systems are subject to
constraints

.

127 / 253

.

.

Constraints
System variables are always constrained


Physical limits
▶ Input constraints, e.g. actuator limits
▶ State constraints, e.g. capacity, flow limits



Performance specifications
▶ Maximum overshoot



Safety limits
▶ Maximum pressure, temperature

Some constraints are “harder” than others

128 / 253

.
.

Input constraints
Actuators are always physically limited…
▶ Saturation limits

  sat(u) = { umax   if u ⩾ umax
           { u      if umin ⩽ u ⩽ umax
           { umin   if u ⩽ umin

▶ Rate (slew) limits

  |du/dt| ⩽ R

…but most controllers are not designed with this in mind
.

129 / 253
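Both limits are one-line clamping operations. A minimal Python sketch (the helper names `sat` and `rate_limit` are illustrative, not part of the module's code):

```python
import numpy as np

def sat(u, u_min, u_max):
    """Element-wise actuator saturation."""
    return np.clip(u, u_min, u_max)

def rate_limit(u, u_prev, R, T):
    """Limit the change per sample of period T to |u(k) - u(k-1)| <= R*T."""
    return np.clip(u, u_prev - R * T, u_prev + R * T)

print(sat(1.7, -1.0, 1.0))                  # request 1.7 saturates -> 1.0
print(rate_limit(0.9, 0.0, R=2.0, T=0.1))   # change capped at 0.2 -> 0.2
```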

.

.

Constraints are nonlinearities…
…but are often easily converted to limits on linear systems
▶ Saturation limits

  x(k + 1) = Ax(k) + B sat( u(k) )  ⇐⇒  x(k + 1) = Ax(k) + Bu(k), umin ⩽ u(k) ⩽ umax

▶ Rate limits

  x(k + 1) = Ax(k) + Bu(k), −RT ⩽ u(k) − u(k − 1) ⩽ RT

130 / 253

.
.

Control in the presence of constraints is hard…
…even for linear systems
For
x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k)
x(k), u(k), y(k) subject to constraints
determine the u = κ(x) that is guaranteed to satisfy all
constraints for all time
Common approach is to design u = κ(x) using your favourite
technique, and then de-tune to avoid constraint violation
Trouble is, optimal operation usually means operating at limits
.

131 / 253

.

.

Performance

[Figure: output response when operating near limits (redrawn from Maciejowski)]

132 / 253

.
.

Section 2
Problem statement

.

133 / 253

.

.

Constrained infinite-horizon LQR
Problem statement – discrete-time system
For a system
x(k + 1) = Ax(k) + Bu(k)
with initial state x(0) at k = 0, find the control law u(k) = κ( x(k) ) that minimizes the objective function

∑_{k=0}^{∞} ( x⊤(k)Qx(k) + u⊤(k)Ru(k) )

while guaranteeing constraint satisfaction for all time
▶ Usually impossible to solve this problem
▶ MPC provides an approximate solution
▶ Solve a finite-horizon version of this problem, on-line, and apply the first control in the optimized sequence – receding horizon

134 / 253

.
.

Constrained finite-horizon optimal control problem
Problem statement – discrete-time system
For a system
x(k + 1) = Ax(k) + Bu(k)
with initial state x(k), find the control sequence
u(k) = { u(k|k), u(k + 1|k), . . . , u(k + N − 1|k) }
that minimizes the objective function

∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k + N|k)Px(k + N|k)

while guaranteeing constraint satisfaction over the horizon
▶ Can solve this via constrained optimization
▶ The optimized control sequence u∗ (k) will satisfy constraints
▶ The associated state predictions will satisfy constraints

135 / 253

.

.

Constrained MPC and the receding-horizon principle

[Figure: output and input trajectories with constraint limits and setpoint; predictions over the horizon are re-made at each step k, k + 1, . . .]

136 / 253


.

Receding-horizon constrained FH-LQR…
…or MPC with linear model and quadratic cost (LQ-MPC)

At time step k
1. Measure state x(k)
2. Solve constrained FH optimal control problem for u∗ (k)
3. Implement u(k) as first control in u∗ (k)
4. Wait one step; increment k; return to 1

137 / 253
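The four-step loop above can be sketched as follows. This is an illustrative Python skeleton, with `solve_fh_ocp` a hypothetical stand-in for the constrained finite-horizon solve (here it simply replays a fixed stabilizing gain over the horizon so the loop structure is runnable):

```python
import numpy as np

# Receding-horizon loop skeleton for the slides' example system
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = -np.array([[1.1303, 1.0642]])
N = 3

def solve_fh_ocp(x, N):
    """Return an N-step input sequence {u(k|k), ..., u(k+N-1|k)}."""
    us, xp = [], x.copy()
    for _ in range(N):
        u = K @ xp
        us.append(u)
        xp = A @ xp + B @ u
    return us

x = np.array([[0.5], [0.5]])
for k in range(20):
    u_seq = solve_fh_ocp(x, N)      # 2. solve the FH problem at x(k)
    u = u_seq[0]                    # 3. implement only the first control
    x = A @ x + B @ u               # plant update; 4. repeat at k + 1
print(float(np.linalg.norm(x)))     # the state has been driven toward 0
```

Only `u_seq[0]` is ever applied; the rest of the plan is discarded and re-optimized at the next sample, which is the receding-horizon principle.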

.
.

Constrained LQ-MPC
Problem formulation
At a state x(k) at time k,

min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k + N|k)Px(k + N|k)
subject to
x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, . . . , N − 1
x(k|k) = x(k)
u(k + j|k), x(k + j|k) satisfy constraints for j = 0, 1, 2, . . . , N
.

138 / 253

.

.

Constrained MPC
A general formulation

min_{u(k)} ∑_{j=0}^{N−1} l( x(k + j|k), u(k + j|k) ) + F( x(k + N|k) )
subject to
x(k + j + 1|k) = f( x(k + j|k), u(k + j|k) ), j = 0, 1, . . . , N − 1
x(k|k) = x(k)
u(k + j|k), x(k + j|k) satisfy constraints for j = 0, 1, 2, . . . , N

139 / 253

.
.

Section 3
Constraint modelling

.

140 / 253

.

.

Constraint modelling
Many types…
▶ State and input constraints
  x ∈ X,  u ∈ U
▶ Terminal state constraints
  x(k + N|k) = 0,  x(k + N|k) = xtarg,  x(k + N|k) ∈ XN
▶ Output constraints
  y ∈ Y
▶ Mixed constraints
  c(x, u) ∈ Z
▶ Rate constraints
  ∆u ∈ V

141 / 253

.
.

Constraint modelling
Many types…
▶ Upper/lower bounds on variables (box constraints)
  xmin ⩽ x ⩽ xmax ,  umin ⩽ u ⩽ umax
  (That is, xmin,i ⩽ xi ⩽ xmax,i for all i = 1, 2, . . . , n)
▶ Linear inequality constraints
  Px x ⩽ qx ,  Pu u ⩽ qu
▶ Nonlinear constraints
  g(x, u) ⩽ 0,  h(x, u) = b,  ∥x∥2 ⩽ 1

142 / 253

.

.

Linear inequality constraints
We’ll consider linear inequality constraints on the predicted states and inputs
Px x(k + j|k) ⩽ qx ,  Pu u(k + j|k) ⩽ qu , for all j = 0, 1, . . . , N − 1
PxN x(k + N|k) ⩽ qxN
▶ Very general form that can capture many real constraints
▶ Can also capture output constraints (since y = Cx)
▶ Can allow constraints to vary over the horizon (i.e., Px (j)x(k + j|k) ⩽ qx (j)), but we’ll not consider that in this course

143 / 253

.
.

Linear inequality constraints
Examples
Suppose x = [x1 ; x2 ] and u is scalar. Box constraints
xmin ⩽ x(k + j|k) ⩽ xmax ,  umin ⩽ u(k + j|k) ⩽ umax
are implemented as

  [+1  0 ]                 [+xmax,1 ]
  [ 0 +1 ] x(k + j|k)  ⩽   [+xmax,2 ]
  [−1  0 ]                 [−xmin,1 ]
  [ 0 −1 ]                 [−xmin,2 ]
   (= Px )                  (= qx )

  [+1]                 [+umax ]
  [−1] u(k + j|k)  ⩽   [−umin ]
   (= Pu )              (= qu )

144 / 253

.

.

Linear inequality constraints
Examples
More generally,

  [+In×n ]                 [+xmax ]
  [−In×n ] x(k + j|k)  ⩽   [−xmin ]
   (= Px )                  (= qx )

  [+Im×m ]                 [+umax ]
  [−Im×m ] u(k + j|k)  ⩽   [−umin ]
   (= Pu )                  (= qu )

▶ Similar for terminal state constraints (PxN , qxN )
▶ For output constraints

  [+C]                 [+ymax ]
  [−C] x(k + j|k)  ⩽   [−ymin ]
   (= Px )              (= qx )

145 / 253
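Building these stacked-identity matrices is mechanical. A numpy sketch (Python for illustration; `box_to_linear` is a hypothetical helper name), using the state box from the later example on these slides:

```python
import numpy as np

def box_to_linear(v_min, v_max):
    """Rewrite v_min <= v <= v_max as P v <= q with P = [I; -I], q = [v_max; -v_min]."""
    n = len(v_min)
    P = np.vstack([np.eye(n), -np.eye(n)])
    q = np.concatenate([np.asarray(v_max), -np.asarray(v_min)])
    return P, q

# State box: -5 <= x1 <= 5, -2 <= x2 <= 0.5
Px, qx = box_to_linear([-5.0, -2.0], [5.0, 0.5])
print(np.all(Px @ np.array([0.0, 0.0]) <= qx))   # origin feasible -> True
print(np.all(Px @ np.array([0.0, 1.0]) <= qx))   # x2 = 1 > 0.5    -> False
```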

.

Section 4
Formulating the optimization problem

.

146 / 253

.

.

The constrained LQ-MPC problem
At a state x(k) at time k,

min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k + N|k)Px(k + N|k)
subject to
x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, . . . , N − 1
x(k|k) = x(k)
Px x(k + j|k) ⩽ qx , j = 0, 1, . . . , N − 1
Pu u(k + j|k) ⩽ qu , j = 0, 1, . . . , N − 1
PxN x(k + N|k) ⩽ qxN

147 / 253

.
.

Prediction matrices
Recall that, in the unconstrained case, we reformulated the LQ-MPC as a QP problem

min_{u(k)} ½ u⊤(k)Hu(k) + x⊤(k)L⊤u(k) (+α)

by constructing
1. prediction matrices (F and G)
2. cost matrices (H and L)
We’ll use these again, but we also need constraint matrices.

.

148 / 253

.

.

Standard form of a constrained QP
min_z ½ z⊤Hz + f⊤z
subject to
Dz ⩽ b

We’ll show that the LQ-MPC QP problem is

min_{u(k)} ½ u⊤(k)Hu(k) + x⊤(k)L⊤u(k) (+α)
subject to
Pc u(k) ⩽ qc + Sc x(k)

149 / 253

.
.

Recap: constructing prediction matrices



▶ Control sequence is { u(k|k), u(k + 1|k), . . . , u(k + N − 1|k) }
▶ Starting from x(k) and applying the model recursively
  x(k + j|k) = A^j x(k) + A^{j−1}Bu(k|k) + · · · + Bu(k + j − 1|k)
▶ Collect predictions over all steps j = 1, . . . , N
  x(k) = Fx(k) + Gu(k)
where

  F = [ A   ]      G = [ B         0         . . .  0 ]
      [ A²  ]          [ AB        B         . . .  0 ]
      [ ⋮   ]          [ ⋮         ⋮         ⋱      ⋮ ]
      [ A^N ]          [ A^{N−1}B  A^{N−2}B  . . .  B ]


150 / 253
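The block structure of F and G above translates directly into code. A numpy sketch (Python for illustration; `prediction_matrices` is a hypothetical helper name):

```python
import numpy as np

def prediction_matrices(A, B, N):
    """Build F, G so that the stacked predictions satisfy x = F x(k) + G u(k),
    with x = [x(k+1|k); ...; x(k+N|k)] and u = [u(k|k); ...; u(k+N-1|k)]."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for i in range(N):            # block row i -> prediction x(k+i+1|k)
        for j in range(i + 1):    # block column j -> input u(k+j|k)
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return F, G

A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
F, G = prediction_matrices(A, B, N=2)
print(F.shape, G.shape)   # -> (4, 2) (4, 2)
```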

.

.

Recap: constructing cost matrices


▶ Recall that the cost function

  J( x(k), u(k) ) = ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k + N|k)Px(k + N|k)

  may be rewritten as

  J( x(k), u(k) ) = ½ u⊤(k)Hu(k) + x⊤(k)L⊤u(k) + x⊤(k)Mx(k)   (last term independent of u)

  where H, L, M are defined on p. 68.
▶ Recall that H ≻ 0 if Q ⪰ 0, P ⪰ 0, R ≻ 0

151 / 253

.
.

Constructing constraint matrices


▶ Start with input constraints
  Pu u(k + j|k) ⩽ qu , j = 0, 1, . . . , N − 1
▶ Collect and stack

  [ Pu  0   . . .  0  ] [ u(k|k)         ]     [ qu ]
  [ 0   Pu  . . .  0  ] [ u(k + 1|k)     ]  ⩽  [ qu ]
  [ ⋮   ⋮   ⋱      ⋮  ] [      ⋮         ]     [ ⋮  ]
  [ 0   0   . . .  Pu ] [ u(k + N − 1|k) ]     [ qu ]
   (= P̃u )                                      (= q̃u )

▶ Therefore
  P̃u u(k) ⩽ q̃u

.

152 / 253

.

.

Constructing constraint matrices


▶ Now state constraints
  Px x(k + j|k) ⩽ qx , j = 0, 1, . . . , N − 1
  PxN x(k + N|k) ⩽ qxN
▶ Collect and stack

  [ Px ]           [ 0   0   . . .  0   ] [ x(k + 1|k) ]     [ qx  ]
  [ 0  ]           [ Px  0   . . .  0   ] [ x(k + 2|k) ]     [ qx  ]
  [ ⋮  ] x(k|k) +  [ 0   Px  . . .  0   ] [     ⋮      ]  ⩽  [  ⋮  ]
  [ 0  ]           [ ⋮   ⋮   ⋱      ⋮   ] [ x(k + N|k) ]     [ qx  ]
  [ 0  ]           [ 0   0   . . .  PxN ]                    [ qxN ]
   (= P̃x0 )         (= P̃x )                                   (= q̃x )

▶ Therefore, recalling that x(k|k) = x(k)
  P̃x0 x(k) + P̃x x(k) ⩽ q̃x

153 / 253

.
.

Constructing constraint matrices
P̃u u(k) ⩽ q̃u
P̃x x(k) ⩽ q̃x − P̃x0 x(k)

▶ Use x(k) = Fx(k) + Gu(k) to eliminate x(k)
  P̃x ( Fx(k) + Gu(k) ) ⩽ q̃x − P̃x0 x(k)
  ∴ P̃x Gu(k) ⩽ q̃x + ( −P̃x0 − P̃x F ) x(k)
▶ Stack

  [ P̃u   ]           [ q̃u ]   [ 0            ]
  [ P̃x G ] u(k)  ⩽   [ q̃x ] + [ −P̃x0 − P̃x F ] x(k)
   (= Pc )             (= qc )   (= Sc )

▶ Therefore
  Pc u(k) ⩽ qc + Sc x(k)

.

154 / 253

.

.

Constructing constraint matrices
Summary of procedure
1. Define linear inequalities in x(·|k), u(·|k), y(·|k)
2. Write as
   Px x(k + j|k) ⩽ qx , j = 0, 1, . . . , N − 1
   Pu u(k + j|k) ⩽ qu , j = 0, 1, . . . , N − 1
   PxN x(k + N|k) ⩽ qxN
3. Stack the inequalities to give
   P̃u u(k) ⩽ q̃u
   P̃x x(k) ⩽ q̃x − P̃x0 x(k)
4. Use x(k) = Fx(k) + Gu(k) to eliminate x(k), and stack further
   Pc u(k) ⩽ qc + Sc x(k)

155 / 253
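The four steps above can be sketched end-to-end in numpy. This is an illustrative Python implementation (the function name `constraint_matrices` and the inline F, G construction are assumptions, not module code); it covers box-style (Px , qx ), (Pu , qu ) and a terminal pair (PxN , qxN ):

```python
import numpy as np
from scipy.linalg import block_diag

def constraint_matrices(A, B, N, Px, qx, Pu, qu, PxN, qxN):
    """Stack per-step constraints into Pc u <= qc + Sc x(k), following steps 1-4."""
    n, m = B.shape
    # Prediction matrices: x = F x(k) + G u(k)
    F = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, N + 1)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # Input constraints: Pu_tilde u <= qu_tilde
    Pu_t = np.kron(np.eye(N), Pu)
    qu_t = np.tile(qu, N)
    # State constraints: j = 0 acts on x(k) (via Px0); j = 1..N on the stacked x
    Px_t = block_diag(*([Px] * (N - 1) + [PxN]))
    Px_t = np.vstack([np.zeros((Px.shape[0], Px_t.shape[1])), Px_t])
    Px0 = np.vstack([Px] + [np.zeros_like(Px)] * (N - 1)
                    + [np.zeros((PxN.shape[0], n))])
    qx_t = np.concatenate([np.tile(qx, N), qxN])
    # Eliminate x and stack everything
    Pc = np.vstack([Pu_t, Px_t @ G])
    qc = np.concatenate([qu_t, qx_t])
    Sc = np.vstack([np.zeros((Pu_t.shape[0], n)), -Px0 - Px_t @ F])
    return Pc, qc, Sc

# Example: N = 2, box constraints from the earlier slides
A = np.array([[2.0, 1.0], [0.0, 1.0]]); B = np.array([[0.5], [1.0]])
Px = np.vstack([np.eye(2), -np.eye(2)]); qx = np.array([5.0, 0.5, 5.0, 2.0])
Pu = np.array([[1.0], [-1.0]]);          qu = np.array([1.0, 1.0])
Pc, qc, Sc = constraint_matrices(A, B, 2, Px, qx, Pu, qu, Px, qx)
print(Pc.shape, qc.shape, Sc.shape)
```

Feasibility of a candidate u(k) at a given x(k) is then a single matrix inequality check, `Pc @ u <= qc + Sc @ x`.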

.
.

Section 5
Solving the optimization problem

.

156 / 253

.

.

Solving the constrained LQ-MPC problem
min_{u(k)} ½ u⊤(k)Hu(k) + x⊤(k)L⊤u(k) (+α)
subject to
Pc u(k) ⩽ qc + Sc x(k)

No closed-form solution – we need numerical methods.
But, if H ≻ 0, then
▶ The problem is strictly convex
▶ If the problem is feasible, a global minimizer can always be found
▶ That global minimizer is unique

157 / 253

.
.

Feasibility
The problem needs to be feasible for u∗ (k) to exist
The feasible set U( x(k) ) is the set of all u(k) that satisfy the constraints
U( x(k) ) = { u : Pc u ⩽ qc + Sc x(k) }
and depends on x(k) (so may change at each sampling instant)
The MPC problem could even be infeasible at some k
▶ The state/output has strayed outside of limits
▶ There exists no control sequence that maintains constraint satisfaction over the horizon
For now, we’ll assume the MPC problem is feasible at every k
.

158 / 253

.

.

The solution of a constrained QP

[Figure: contours of the QP cost in the (z1 , z2 ) plane; the unconstrained minimum lies inside the feasible region]

159 / 253

.
.

The solution of a constrained QP

[Figure: contours of the QP cost in the (z1 , z2 ) plane; the unconstrained minimum lies outside the feasible region, so the constrained minimizer lies on its boundary]

.

160 / 253

.

.

The solution of the constrained QP


▶ Without constraints, u∗ (k) is a linear function of x(k)
  u∗ (k) = −H−1 Lx(k)
▶ With constraints, u∗ (k) is nonlinear
▶ The constraints are usually fixed – we can’t just move them so that the unconstrained minimum lies inside the feasible region
▶ (The cost, on the other hand, can often be tuned, which leads to a nice feature of constrained LQ-MPC – see Section 5.4 in Maciejowski)
▶ Therefore, we need to re-compute u∗ (k) at each k by solving a constrained QP

161 / 253

.
.

Solving a constrained QP
Two classes of methods
▶ Active set
▶ Interior point
You (hopefully) covered these in ACS6102. (If not, see Section 3.3 of Maciejowski for a brief introduction and references.)
For our purposes, we’ll assume the problem can be solved efficiently via software algorithms; e.g., quadprog.m in MATLAB uses
ustar = quadprog(H,L*x,Pc,qc+Sc*x)
to solve

u∗ = arg min_u { ½ u⊤Hu + x⊤L⊤u : Pc u ⩽ qc + Sc x }

162 / 253
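An equivalent solve can be sketched in Python with a general-purpose NLP routine standing in for quadprog (scipy's SLSQP is assumed here; it expects inequality constraints in the form g(u) ⩾ 0, hence the sign flip). `solve_mpc_qp` is a hypothetical helper name:

```python
import numpy as np
from scipy.optimize import minimize

def solve_mpc_qp(H, L, Pc, qc, Sc, x):
    """Solve  min_u 0.5 u'Hu + x'L'u  s.t.  Pc u <= qc + Sc x."""
    f = L @ x                                  # linear term: cost = 0.5 u'Hu + f'u
    cost = lambda u: 0.5 * u @ H @ u + f @ u
    jac = lambda u: H @ u + f
    cons = {"type": "ineq", "fun": lambda u: qc + Sc @ x - Pc @ u}
    res = minimize(cost, np.zeros(H.shape[0]), jac=jac,
                   constraints=[cons], method="SLSQP")
    return res.x

# Tiny check: min 0.5||u||^2 - u1 - u2 subject to u <= 0.5 elementwise
H = np.eye(2)
L = np.array([[-1.0], [-1.0]])                 # f = L x = [-1, -1] at x = [1]
Pc = np.eye(2); qc = np.full(2, 0.5); Sc = np.zeros((2, 1))
u = solve_mpc_qp(H, L, Pc, qc, Sc, np.array([1.0]))
print(np.round(u, 3))   # unconstrained optimum [1, 1], clipped to [0.5, 0.5]
```

For production MPC one would use a dedicated QP solver (active-set or interior-point); the sketch only shows how the matrices map onto a solver interface.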

.

.

Section 6
The receding-horizon control law

163 / 253

.
.

Constrained LQ-MPC…
At time step k
1. Measure state x(k)
2. Solve constrained FH-LQR problem for u∗ (k)

   u∗ (k) = arg min_{u(k)} ½ u⊤(k)Hu(k) + x⊤(k)L⊤u(k)
   subject to Pc u(k) ⩽ qc + Sc x(k)

3. Implement u(k) as first control in u∗ (k)
4. Wait one step; increment k; return to 1

.

164 / 253

.

.

The receding-horizon control law



▶ Control implemented is the first element of u∗ (k)
▶ The implicit MPC control law is
  u(k) = κN ( x(k) )
▶ No closed-form expression for κN (·)
▶ κN (·) is non-linear

165 / 253

.
.

Constrained LQ-MPC
Example – loss of linearity
Consider the scalar system
x(k + 1) = x(k) + u(k)
subject to the input constraints
−0.5 ⩽ u(k) ⩽ +0.5

Let’s now obtain the unconstrained and constrained control
laws, with Q = P = 1, R = 1 and N = 3.

.

166 / 253

.

.

Constrained LQ-MPC
Example – loss of linearity: single-state system
[Figure: control u versus state x for the unconstrained law KN (a straight line) and the constrained law κN , which saturates at u = ±0.5]

Three regions of operation

167 / 253

.
.

Constrained LQ-MPC

Example – loss of linearity: two-state system

[Figure: piecewise-affine control law u = κN (x1 , x2 ) over the (x1 , x2 ) plane]

59 regions of operation!
.

168 / 253

.

.

Constrained LQ-MPC
Example
Let’s revisit the system

x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)

Previously, we obtained the MPC controller KN in the absence of constraints.
Now, we’ll consider the constraints
−1 ⩽ u(k) ⩽ +1
[−5; −2] ⩽ x(k) ⩽ [+5; +0.5]
and set Q = P = I2×2 , R = 10.

169 / 253

.
.

Constrained LQ-MPC
Example
[Figure: state-space trajectory from x(0) and control inputs u(k) under the unconstrained law KN ]

.

170 / 253

.

.

Constrained LQ-MPC
Example
State space

Controls

u(k)

.
.

K.N
κ.N

x2

0

x(0)

k

0
.

.

5

10

15

20

0
x1

170 / 253

.
.

Part IV
Stability and feasibility

.

171 / 253

.

.

Section 1
Stability of unconstrained LQ-MPC

172 / 253

.
.

Stability of unconstrained LQ-MPC


▶ MPC problem:

  min_{u(k)} ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k + N|k)Px(k + N|k)
  subject to
  x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, 2, . . . , N − 1
  x(k|k) = x(k)

▶ The optimal control law is
  u(k) = KN x(k)
▶ Therefore, the closed-loop system is linear
  x(k + 1) = (A + BKN )x(k)

.

173 / 253

.

.

Stability of unconstrained LQ-MPC
Recall that a linear system
x(k + 1) = (A + BK)x(k)
is stable if and only if all eigenvalues of (A + BK) are inside the unit circle
▶ So KN is stabilizing if |λi (A + BKN )| < 1 for all i
▶ More precisely, we defined global asymptotic stability of the origin
▶ From any initial state, the system state x(k) remains bounded and converges (settles) to 0 as k → ∞

174 / 253

.
.

Stability of unconstrained LQ-MPC
KN might not be stabilizing…
Infinite horizons are stabilizing, but large (or infinite) N leads to problems of tractability and ill-conditioning
Therefore, we proposed dual-mode MPC
▶ We optimize { u(k|k), . . . , u(k + N − 1|k) } by minimizing their cost over the horizon – this is mode 1.
▶ The trick is to assume a stabilizing control law u = Kx (“mode 2”) beyond the horizon
▶ We can work out the cost of mode 2 – this forms an estimate of the cost-to-go from j = N to ∞
▶ But rather than explicitly calculating this, we can select the terminal cost matrix P so that the terminal cost is always equal to the cost-to-go
▶ And therefore the MPC cost is equal to the infinite-horizon cost (the cost of mode 1 and mode 2)
.

175 / 253

.

.

Dual-mode MPC
Summary
▶ The MPC predictions are
  u(k+j|k) = { optimization variables,  j = 0, 1, . . . , N − 1  (mode 1)
             { Kx(k + j|k),            j = N, N + 1, . . .      (mode 2)
▶ K chosen to stabilize (A, B); P obtained from the Lyapunov equation
  (A + BK)⊤P(A + BK) − P + (Q + K⊤RK) = 0
▶ The MPC (receding-horizon) control law is
  u(k) = KN x(k)
▶ System is stable under the usual assumptions on (A, B, Q, R)
▶ If K = K∞ , then KN = K∞ ; otherwise, KN ̸= K∞ (sub-optimal)

176 / 253

.
.

Section 2
Stability of constrained LQ-MPC

.

177 / 253

.

.

Dual-mode constrained MPC is not necessarily stable
Example
We found that the unconstrained system

x(k + 1) = [2 1; 0 1] x(k) + [0.5; 1] u(k)

with Q = I, R = 100 and N = 3 was closed-loop unstable with P = I, since
K3 = −[0.3576 0.3234]
led to (A + BK3 ) having eigenvalues outside the unit circle.
But we managed to obtain a stabilizing K3 via the dual-mode procedure with

P = [194.64 161.43; 161.43 147.77]

178 / 253

.
.

Dual-mode constrained MPC is not necessarily stable
Example

Now let’s use the same system and cost matrices, and
dual-mode predictions, but subject to the input constraints
−1 ⩽ u(k) ⩽ +1
In the MPC optimization problem, we impose these over the
horizon
−1 ⩽ u(k + j|k) ⩽ +1, j = 0, 1, . . . , N − 1

.

179 / 253

.

.

Dual-mode constrained MPC is not necessarily stable
Example – initial state x(0) = [0.74 0.74]⊤

[Figure: state x(k) and input u(k) responses over k = 0, . . . , 30; the input saturates at ±1 and the state converges]

180 / 253

.

Dual-mode constrained MPC is not necessarily stable
Example – initial state x(0) = [0.75 0.75]⊤

[Figure: state x(k) and input u(k) responses over k = 0, . . . , 30; the input saturates at ±1 and the state diverges]

181 / 253

.

.

Dual-mode constrained MPC is not necessarily stable
What’s going on?
▶ The unconstrained system was globally stable – i.e., for any x(0)
▶ The constrained system is apparently stable for some x(0) but unstable for others
▶ We say that the system is locally stable
▶ How can we analyse and guarantee local stability?
▶ A further complication:
  x(k + 1) = Ax(k) + BκN ( x(k) )
  is nonlinear
▶ Standard tools from linear systems theory are useless here

182 / 253

.
.

Section 3
Lyapunov stability of constrained MPC

.

183 / 253

.

.

Stability analysis of nonlinear systems
Reading: Appendix B of Rawlings & Mayne
Consider the nonlinear discrete-time system
x(k + 1) = f( x(k) )    (6)
with f : Rn 7→ Rn continuous and f(0) = 0.

Proposition
If there exists a scalar function V : Rn 7→ R such that
V(0) = 0    (7a)
V(x) > 0, ∀x ∈ X \ {0}    (7b)
V( f(x) ) − V(x) < 0, ∀x ∈ X \ {0}    (7c)
then the origin is a locally asymptotically stable (LAS) equilibrium point for (6), with region of attraction X.

184 / 253

.
.

Lyapunov functions
V is said to be a Lyapunov function
▶ Conditions (7a) and (7b) mean V must be positive definite
▶ Condition (7c) means V must be strictly monotonically decreasing along the closed-loop trajectories of the system
▶ If we find a V that satisfies these conditions in some neighbourhood X of x = 0, then x = 0 is a LAS equilibrium point
▶ (The origin could still be LAS even if we can’t find a V)
▶ If X = Rn (the whole state-space), the result is global asymptotic stability (GAS)

185 / 253
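Conditions (7a)–(7c) can be checked numerically along a simulated trajectory. A minimal Python sketch for a linear closed loop with the quadratic candidate V(x) = x⊤Px (the matrix Ac here is an illustrative stable example, not one from the slides):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Closed loop x(k+1) = Ac x(k), candidate V(x) = x'Px
Ac = np.array([[0.5, 0.1], [0.0, 0.8]])        # stable: eigenvalues 0.5, 0.8
P = solve_discrete_lyapunov(Ac.T, np.eye(2))   # so that Ac'P Ac - P = -I

V = lambda x: float(x @ P @ x)
x = np.array([1.0, -1.0])
decreasing = True
for _ in range(30):
    x_next = Ac @ x
    # (7c): here V(f(x)) - V(x) = -||x||^2 < 0 away from the origin
    decreasing &= (V(x_next) - V(x) < 0.0)
    x = x_next
print(decreasing)   # -> True
```

Such a check can only falsify the candidate on the sampled trajectory; it is not a proof that V is Lyapunov on all of X.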

.

.

Finding a Lyapunov function for MPC
Usually tricky. Fortunately, in MPC, we have a natural candidate. Recall
▶ The LQ-MPC cost function is

  JN ( x(k), u(k) ) = ∑_{j=0}^{N−1} ( x⊤(k+j|k)Qx(k+j|k) + u⊤(k+j|k)Ru(k+j|k) ) + x⊤(k + N|k)Px(k + N|k)

▶ The optimal control sequence is
  u∗ (k) = { u∗ (k|k), u∗ (k + 1|k), . . . , u∗ (k + N − 1|k) }
▶ The value function is
  J∗N ( x(k) ) = JN ( x(k), u∗ (k) ) = min_{u(k)} JN ( x(k), u(k) )

186 / 253

.
.

Finding a Lyapunov function for MPC
We use the value function, J∗N ( x(k) ), as a Lyapunov function.

Proposition
If
(a) (Q^{1/2}, A) is detectable, and Q ⪰ 0, R ≻ 0
(b) P satisfies the Lyapunov equation
    (A + BK)⊤P(A + BK) − P + (Q + K⊤RK) = 0
    for some K such that |λi (A + BK)| < 1 for all i,
then J∗N ( x(k) ) is a Lyapunov function for the system
x(k + 1) = Ax(k) + BκN ( x(k) )
for all x(0) ∈ X.
.

187 / 253

.

.

What this means
▶ If, for some x(0), J∗N ( x(k) ) decreases monotonically and asymptotically to 0, then we infer that the origin is LAS
▶ If it does not, then we do not infer anything

188 / 253

.
.

Lyapunov stability of unconstrained MPC
Example
We found that the unconstrained system

x(k + 1) = [2 1; 0 1] x(k) + [0.5; 1] u(k)

with N = 3, Q = I, R = 100,

P = [194.64 161.43; 161.43 147.77]

is globally asymptotically stable under the MPC control law u = K3 x.

.

189 / 253

.

.

Value function – unconstrained MPC
Example – initial state x(0) = [0.74 0.74]⊤

[Figure: J∗N ( x(k) ) versus k; the value function decreases monotonically to 0 – J∗N ( x(k) ) is Lyapunov]

190 / 253

.
.

Lyapunov stability of constrained MPC
Example
We found that the same system, with the input constraints
−1 ⩽ u(k + j|k) ⩽ +1, j = 0, 1, . . . , N − 1
when controlled by constrained MPC
▶ lost stability for x(0) = [0.75 0.75]⊤
▶ maintained stability for x(0) = [0.74 0.74]⊤

191 / 253

.

.

Value function – constrained MPC
Example – initial state x(0) = [0.75 0.75]⊤

[Figure: J∗N ( x(k) ) versus k (scale ×10⁴); J∗N ( x(k) ) is not Lyapunov, and the system is unstable]

192 / 253

.

Value function – constrained MPC
Example – initial state x(0) = [0.74 0.74]⊤

[Figure: J∗N ( x(k) ) versus k; J∗N ( x(k) ) is not Lyapunov (not monotonically decreasing), but the system is stable]

193 / 253

.

.

Value function – constrained MPC
Example – initial state x(0) = [0.70 0.70]⊤

[Figure: J∗N ( x(k) ) versus k; J∗N ( x(k) ) is Lyapunov ⇒ the system is stable]

194 / 253

.
.

So what’s going on?
▶ J∗N is a Lyapunov function for unconstrained MPC, but not always for constrained MPC?
▶ We haven’t said anything about X
▶ Remember, to be a Lyapunov function, J∗N needs to meet conditions (7a)–(7c) for all x ∈ X
▶ For unconstrained MPC, J∗N is Lyapunov for all x(0): so X = Rn (we’ll show this later)
▶ For constrained MPC
  ▶ x(0) = [0.75 0.75]⊤ ∈/ X, as J∗N is not Lyapunov
  ▶ x(0) = [0.74 0.74]⊤ ∈/ X, as J∗N is not Lyapunov
  ▶ x(0) = [0.70 0.70]⊤ ∈ X, as J∗N is Lyapunov

195 / 253

.

.

So what’s going on?
Recall that the dual-mode scheme makes predictions
u(k+j|k) = { optimization variables,  j = 0, 1, . . . , N − 1  (mode 1)
           { Kx(k + j|k),            j = N, N + 1, . . .      (mode 2)
▶ The optimization variables are
  u(k) = { u(k|k), u(k + 1|k), . . . , u(k + N − 1|k) }
▶ These – and the predicted states x(k) – are designed to satisfy all constraints over the horizon
▶ The mode-2 predictions – beyond the horizon – ignore constraints
196 / 253

.
.

Lyapunov stability of constrained MPC
Example
Back to our example:

x(k + 1) = [2 1; 0 1] x(k) + [0.5; 1] u(k)

with N = 3, Q = I, R = 100, P = [194.64 161.43; 161.43 147.77] and
−1 ⩽ u(k) ⩽ +1
Let’s look at the mode-2 predictions

.

197 / 253

.

.

Mode-2 predictions
Example

[Figure: predicted inputs u(0 + j|0) versus j, with the mode-2 portion marked, for x(0) = [0.70 0.70]⊤, [0.74 0.74]⊤ and [0.75 0.75]⊤]

198 / 253

.

Feasibility of mode-2 predictions
It can be shown that J∗N is Lyapunov only if the mode-2 predictions satisfy constraints
Our Lyapunov function candidate J∗N meets all criteria bar this one
So, if we could guarantee that the mode-2 predictions
u(k + j|k) = Kx(k + j|k),  x(k + j + 1|k) = (A + BK)x(k + j|k),  j = N, N + 1, . . .
satisfy all constraints, then J∗N will satisfy the Lyapunov function conditions.
.

199 / 253

.

.

Section 4
Terminal constraints for guaranteed
feasibility

200 / 253

.
.

Feasibility of mode-2 predictions
Feasibility implies stability
If we could guarantee that the mode-2 predictions
u(k + j|k) = Kx(k + j|k),  x(k + j + 1|k) = (A + BK)x(k + j|k),  j = N, N + 1, . . .
satisfy all constraints, then J∗N will satisfy the Lyapunov function conditions.
If we were able to do this, we would also guarantee the extremely attractive property of recursive feasibility

.

201 / 253

.

.

Recursive feasibility
Suppose that
u∗ (k) = { u∗ (k|k), u∗ (k + 1|k), . . . , u∗ (k + N − 1|k) }
is a feasible solution to the MPC problem at time k
min_{u(k)} { JN ( x(k), u(k) ) : constraints on x and u }
Then, if the candidate solution
ũ(k + 1) = { u∗ (k + 1|k), . . . , u∗ (k + N − 1|k), Kx∗ (k + N|k) }
is a feasible solution to the problem at time k + 1
min_{u(k+1)} { JN ( x(k + 1), u(k + 1) ) : constraints on x and u }
the system is said to be recursively feasible

202 / 253

.
.

Recursive feasibility
Very desirable
▶ If we have a feasible solution at time k, we can generate another at the next sampling instant, k + 1, by
  ▶ taking the tail of the previous solution (dropping u∗ (k|k))
    { u∗ (k + 1|k), . . . , u∗ (k + N − 1|k) }
  ▶ and adding a terminal step of the mode-2 control
    { u∗ (k + 1|k), . . . , u∗ (k + N − 1|k), Kx∗ (k + N|k) }
▶ That is, we do not need to solve the optimization at k + 1 to maintain constraint satisfaction
▶ (But we might want to, for the sake of optimality)
▶ It can be shown that, for a recursively feasible system, feasibility at k = 0 implies feasibility for all k > 0

203 / 253

.

.

Guaranteeing recursive feasibility



We could add explicit constraints to the MPC problem



But there could be an infinite number



Instead, we use an invariant set

204 / 253

.
.

Invariant sets

[Figure: a trajectory in the (x1 , x2 ) plane remaining within a set Ω]

Definition
For a system x(k + 1) = f( x(k) ), a set Ω is invariant if
x(0) ∈ Ω ⇒ x(k) ∈ Ω, ∀k > 0
.

205 / 253

.

.

Invariant sets
Examples



The origin, or an equilibrium point
0 = (A + BK)0



xe = f(xe )

Limit cycles and periodic orbits

206 / 253
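A further standard example: any sublevel set of a Lyapunov function, Ω = { x : x⊤Px ⩽ c }, is invariant for the corresponding stable linear system, since V can only decrease along trajectories. A quick numerical check on boundary samples (illustrative Python; Ac here is an assumed example matrix, not one from the slides):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

Ac = np.array([[0.5, 0.1], [0.0, 0.8]])        # stable closed-loop matrix
P = solve_discrete_lyapunov(Ac.T, np.eye(2))   # Ac'P Ac - P = -I

c = 1.0
inside = True
for theta in np.linspace(0.0, 2 * np.pi, 100):
    d = np.array([np.cos(theta), np.sin(theta)])
    x = d / np.sqrt(d @ P @ d)        # scaled so that x'Px = c = 1 (boundary)
    x_next = Ac @ x
    inside &= (x_next @ P @ x_next <= c)
print(inside)   # every boundary point maps back into Omega -> True
```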

.
.

Invariant terminal constraints
For dual-mode MPC, recall
u(k+j|k) = { optimization variables,  j = 0, 1, . . . , N − 1  (mode 1)
           { Kx(k + j|k),            j = N, N + 1, . . .      (mode 2)
Mode-2 closed-loop dynamics
x(k + j + 1|k) = (A + BK)x(k + j|k)    (8)
▶ Can construct an Ω invariant under (8)
▶ Then, if we impose the terminal constraint
  x(k + N|k) ∈ Ω
  then x(k + N + j|k) ∈ Ω for all j ⩾ 0.

.

207 / 253

.

.

Invariant terminal set

[Figure: mode-1 trajectory in the (x1 , x2 ) plane from x(k|k) to x(k + N|k), which lands in the set Ω; the mode-2 trajectory then remains in Ω]

208 / 253

.
.

Constraint-admissible invariant terminal set
Moreover, we can construct Ω so that all constraints are satisfied within it under the mode-2 control law u = Kx
Px x ⩽ qx and Pu Kx ⩽ qu for all x ∈ Ω
Then,
▶ all future mode-2 predictions satisfy constraints
▶ recursive feasibility is guaranteed
▶ J∗N is a Lyapunov function
▶ the closed-loop system
  x(k + 1) = Ax(k) + BκN ( x(k) )
  is LAS (region of attraction X)
.

209 / 253

.

.

Guaranteed recursive feasibility
▶ Suppose we have a feasible solution at time k
  u∗ (k) = { u∗ (k|k), u∗ (k + 1|k), . . . , u∗ (k + N − 1|k) }
▶ Then we also have a feasible solution at k + 1 by
  ▶ taking the tail of the previous solution (dropping u∗ (k|k))
    { u∗ (k + 1|k), . . . , u∗ (k + N − 1|k) }
  ▶ and adding a terminal step of the mode-2 control
    { u∗ (k + 1|k), . . . , u∗ (k + N − 1|k), Kx∗ (k + N|k) }
▶ So
  ũ(k + 1) = { u∗ (k + 1|k), . . . , u∗ (k + N − 1|k), Kx∗ (k + N|k) }

210 / 253

Implementing an invariant terminal set
Recall the constrained MPC problem: at a state x(k) at time k,

min_{u(k)}  Σ_{j=0}^{N−1} ( x⊤(k + j|k)Qx(k + j|k) + u⊤(k + j|k)Ru(k + j|k) )
              + x⊤(k + N|k)Px(k + N|k)
subject to
  x(k + j + 1|k) = Ax(k + j|k) + Bu(k + j|k), j = 0, 1, . . . , N − 1
  x(k|k) = x(k)
  Px x(k + j|k) ⩽ qx , j = 0, 1, . . . , N − 1
  Pu u(k + j|k) ⩽ qu , j = 0, 1, . . . , N − 1
  PxN x(k + N|k) ⩽ qxN

Design (PxN , qxN ) to be a constraint-admissible invariant set

211 / 253

Constructing an invariant terminal set
Beyond the scope of this module – we'll assume that we are
given a constraint-admissible invariant set

  Ω = {x ∈ Rn : PxN x ⩽ qxN }

However, the interested student is encouraged to consult
▶ Section 8.5.1 of Maciejowski
▶ the seminal paper by Gilbert and Tan [5]

[5] E. G. Gilbert and K. T. Tan. "Linear systems with state and control
constraints: the theory and application of maximal output admissible sets".
In: IEEE Transactions on Automatic Control 36 (1991), pp. 1008–1020.

212 / 253

Stable constrained MPC
Summary of procedure

1. Choose Q ⪰ 0, R ≻ 0 (assuming (Q^(1/2), A) detectable)
2. Compute stabilizing mode-2 control law u = Kx
3. Compute terminal matrix P to satisfy the Lyapunov equation
     (A + BK)⊤P(A + BK) − P + (Q + K⊤RK) = 0
4. Compute (PxN , qxN ) such that the terminal constraint set
     Ω = {x ∈ Rn : PxN x ⩽ qxN }
   is constraint admissible and invariant for the mode-2
   closed-loop system x(k + 1) = (A + BK)x(k)

Then, the closed-loop MPC-controlled system is recursively
feasible and locally asymptotically stable (region of attraction X)

213 / 253
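Steps 2 and 3 of this procedure can be sketched numerically. This is an illustration, not part of the notes: the system matrices and weights are made up, and K is taken as the unconstrained LQ-optimal gain, which is one common choice of stabilizing mode-2 law.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Assumed example data: double integrator, unit weights
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Step 2: a stabilizing mode-2 gain u = Kx (here, the LQ-optimal choice)
S = solve_discrete_are(A, B, Q, R)
K = -np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
Acl = A + B @ K
assert np.all(np.abs(np.linalg.eigvals(Acl)) < 1)  # mode-2 loop is stable

# Step 3: terminal P from the Lyapunov equation
#   (A+BK)' P (A+BK) - P + (Q + K'RK) = 0
# scipy's solve_discrete_lyapunov(a, q) solves  a X a' - X + q = 0
P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)

residual = Acl.T @ P @ Acl - P + (Q + K.T @ R @ K)
print(np.max(np.abs(residual)))  # ~ 0: P satisfies the equation
```

Step 4 (computing the maximal constraint-admissible invariant set) is the construction deferred to Gilbert and Tan in the notes.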

.

.

Section 5
The region of attraction

214 / 253

Feasible set
We're almost ready to define X. First, recall
▶ The collection of input, state/output and terminal
  constraints
    Px x(k + j|k) ⩽ qx , j = 0, 1, . . . , N − 1
    Pu u(k + j|k) ⩽ qu , j = 0, 1, . . . , N − 1
    PxN x(k + N|k) ⩽ qxN
  may be written
    Pc u ⩽ qc + Sc x(k)
▶ the feasible set UN (x(k)) is the set of all u(k) that satisfy
  the constraints
    UN (x(k)) = {u : Pc u ⩽ qc + Sc x(k)}

215 / 253

Feasible set dependence on x(k)

  UN (x(k)) = {u : Pc u ⩽ qc + Sc x(k)}

Propositions
▶ With no constraints, UN (x(k)) is the full space
  Rm × · · · × Rm
▶ With only input constraints, UN (x(k)) is non-empty for all
  x(k) ∈ Rn
▶ With both input and state/output constraints, UN (x(k))
  may be empty for some x(k) ∈ Rn

216 / 253

Set of feasible initial states
Motivated by these results, we define

  XN = {x : there exists a u ∈ UN (x)}

Propositions
▶ With no constraints, XN = Rn
▶ With only input constraints, XN = Rn
▶ With both input and state/output constraints, XN ⊂ Rn

217 / 253
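The last proposition can be made concrete: whether UN (x(k)) is empty is a linear feasibility question, checkable with a zero-cost LP. A minimal sketch, with made-up numbers not from the notes: a scalar system x⁺ = 2x + u with |u| ⩽ 1, |x⁺| ⩽ 1, and horizon N = 1, condensed by hand into the form Pc u ⩽ qc + Sc x(k).

```python
import numpy as np
from scipy.optimize import linprog

# Stacked constraints  Pc u <= qc + Sc x :
#   u <= 1, -u <= 1          (input)
#   2x + u <= 1, -2x - u <= 1  (state at the next step)
Pc = np.array([[1.0], [-1.0], [1.0], [-1.0]])
qc = np.array([1.0, 1.0, 1.0, 1.0])
Sc = np.array([0.0, 0.0, -2.0, 2.0])

def is_feasible(x):
    """Is U_N(x) = {u : Pc u <= qc + Sc x} non-empty? (zero-cost LP)"""
    res = linprog(c=[0.0], A_ub=Pc, b_ub=qc + Sc * x,
                  bounds=[(None, None)])  # u is unrestricted in sign
    return res.status == 0  # 0 = feasible, 2 = infeasible

print(is_feasible(0.0))  # True: u = 0 satisfies everything
print(is_feasible(1.5))  # False: would need u in [-4, -2], but |u| <= 1
```

States like x = 1.5 lie outside XN: no admissible input can keep the next state within its bounds.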

Lyapunov functions and stability of MPC
For linear MPC,

  x(k + 1) = Ax(k) + BκN (x(k))

Proposition
If there exists a scalar function V : Rn → R such that
  V(0) = 0
  V(x) > 0, ∀x ∈ X \ {0}
  V(Ax + BκN (x)) − V(x) < 0, ∀x ∈ X \ {0}
then the origin is a locally asymptotically stable (LAS)
equilibrium point for x(k + 1) = Ax(k) + BκN (x(k)), with region
of attraction X.

218 / 253

Stability of MPC
Proposition
If
(a) (Q^(1/2), A) is detectable, and Q ⪰ 0, R ≻ 0
(b) P satisfies the Lyapunov equation
      (A + BK)⊤P(A + BK) − P + (Q + K⊤RK) = 0
    for some K such that |λi (A + BK)| < 1 for all i,
(c) the terminal set Ω is constraint-admissible and invariant for
      x(k + 1) = (A + BK)x(k)
then
▶ J∗N is a Lyapunov function for x ∈ X = XN
▶ The MPC-controlled system is locally asymptotically stable
▶ The region of attraction is X = XN

219 / 253

Stability of MPC
No constraints

▶ XN = Rn
▶ The value function J∗N is a global Lyapunov function
▶ The MPC-controlled system is globally asymptotically
  stable

220 / 253

Stability of MPC
Input and state/output constraints

▶ XN ⊂ Rn
▶ The value function J∗N is a local Lyapunov function
▶ The MPC-controlled system is locally asymptotically stable

221 / 253

Stability of MPC
Input constraints

▶ With strictly only input constraints, XN = Rn
▶ But we impose the terminal set Ω ⊂ Rn
▶ Therefore, XN ⊂ Rn
▶ The value function J∗N is a local Lyapunov function
▶ The MPC-controlled system is locally asymptotically stable

(If the system is open-loop stable, however, it is possible to
guarantee global stability without the terminal state constraint,
via a different P and Lyapunov function – see Section 2.5.2 of
Rawlings and Mayne)

222 / 253

Part V
Offset-free tracking and disturbance rejection

223 / 253

.

.

Section 1
Setpoint tracking

224 / 253

Setpoint tracking
To now, we've studied the regulation problem

[Diagram: MPC → u → System → x, with the state x fed back to the MPC]

We assumed
▶ objective is to steer x(k) → 0 as k → ∞
▶ the state x(k) is known at each k

225 / 253

Setpoint tracking
Commonly, however, we are interested in tracking a setpoint or
reference signal

[Diagram: r → MPC → u → System → y, with the output y fed back]

where the objective is to steer y(k) → r as k → ∞
Essentially, this is a problem of regulating the tracking error,
r − y(k), to zero, but there are some subtleties

226 / 253

Baseline assumptions
▶ Discrete-time linear state-space model
    x(k + 1) = Ax(k) + Bu(k)
    y(k) = Cx(k)
  Here x ∈ Rn , u ∈ Rm , y ∈ Rp
▶ State x(k) is measurable at every k
▶ Reference r is piecewise constant
▶ Control objective is to have y track r while minimizing a
  stage cost function
    Σ_{k=0}^{∞} l( y(k) − r, u(k) − uss )
▶ We'll first assume no uncertainty, then generalize to the
  uncertain case

227 / 253

Tracking output
Of course, it is not always possible or desirable to steer all
outputs to arbitrary setpoints
▶ not possible to maintain a constant level in a tank while
  maintaining a constant net in-/out-flow
▶ not possible to maintain a car at constant speed and
  position
Therefore, it is often convenient to define a tracking output
  z(k) = Hx(k)
where z ∈ Rq , q ⩽ p, and aim for z(k) → r.
Here, to simplify exposition, we'll assume H = C so z = y

228 / 253

.
.

Section 2
Setpoint tracking – no uncertainty

.

229 / 253

An MPC framework for setpoint tracking

[Diagram: r → SSTO → (xss , uss ) → MPC → u → system x⁺ = Ax + Bu,
with the state x fed back to the MPC and output y = Cx]

▶ Full state measurement/feedback
▶ Steady-state target optimizer (SSTO) computes (xss , uss )
  given setpoint r
▶ MPC has to steer
    x(k) → xss , u(k) → uss

Define
  ξ(k) ≜ x(k) − xss , υ(k) ≜ u(k) − uss
Then the aim is to regulate
  ξ(k) → 0, υ(k) → 0

230 / 253

The LQ-MPC tracking problem

min_{υ(k)}  Σ_{j=0}^{N−1} ( ξ⊤(k + j|k)Qξ(k + j|k) + υ⊤(k + j|k)Rυ(k + j|k) )
              + ξ⊤(k + N|k)Pξ(k + N|k)
subject to
  ξ(k + j + 1|k) = Aξ(k + j|k) + Bυ(k + j|k), j = 0, 1, . . . , N − 1
  ξ(k|k) = ξ(k)
  Px (ξ(k + j|k) + xss ) ⩽ qx , j = 0, 1, . . . , N − 1
  Pu (υ(k + j|k) + uss ) ⩽ qu , j = 0, 1, . . . , N − 1
  PxN (ξ(k + N|k) + xss ) ⩽ qxN

231 / 253

The LQ-MPC tracking problem
Dealing with constraints

  Px (ξ(k + j|k) + xss ) ⩽ qx , j = 0, 1, . . . , N − 1
  Pu (υ(k + j|k) + uss ) ⩽ qu , j = 0, 1, . . . , N − 1
  PxN (ξ(k + N|k) + xss ) ⩽ qxN

Why? We want, for example,
  xmin ⩽ x(k) ⩽ xmax ⇒ xmin ⩽ ξ(k) + xss ⩽ xmax
In practice, we change the right-hand sides
  xmin − xss ⩽ ξ(k) ⩽ xmax − xss , i.e. Px ξ(k) ⩽ qx − Px xss

232 / 253

The LQ-MPC tracking problem

min_{υ(k)}  Σ_{j=0}^{N−1} ( ξ⊤(k + j|k)Qξ(k + j|k) + υ⊤(k + j|k)Rυ(k + j|k) )
              + ξ⊤(k + N|k)Pξ(k + N|k)
subject to
  ξ(k + j + 1|k) = Aξ(k + j|k) + Bυ(k + j|k), j = 0, 1, . . . , N − 1
  ξ(k|k) = ξ(k)
  Px ξ(k + j|k) ⩽ qx − Px xss , j = 0, 1, . . . , N − 1
  Pu υ(k + j|k) ⩽ qu − Pu uss , j = 0, 1, . . . , N − 1
  PxN ξ(k + N|k) ⩽ qxN − PxN xss

This is a standard MPC problem, albeit in (ξ, υ)

233 / 253

The LQ-MPC tracking problem
Given (xss , uss ), the theory and algorithms developed for
regulation still apply
▶ The optimal control sequence is
    υ∗(k) = {υ∗(k|k), υ∗(k + 1|k), . . . , υ∗(k + N − 1|k)}
▶ The applied control is
    υ(k) = υ∗(k|k) = κN (ξ(k))
    ⇒ u(k) = uss + υ∗(k|k) = uss + κN (x(k) − xss )
▶ If κN is stabilizing for ξ(k + 1) = Aξ(k) + Bυ(k), then
    ξ(k) → 0, υ(k) → 0 ⇒ x(k) → xss , u(k) → uss
▶ The subtleties are in target calculation: we need (xss , uss ) to
  correspond to
    lim_{k→∞} y(k) = yss = Cxss = r

234 / 253

.

Section 3
Target optimization

.

235 / 253

Target equilibrium pairs
▶ Given (xss , uss ), the system is in equilibrium if and only if
    xss = Axss + Buss    (9)
▶ The equilibrium is offset-free (i.e., y tracks r with zero
  steady-state error) if and only if
    Cxss = r    (10)
▶ Combining (9) and (10),
    [I − A  −B; C  0] [xss ; uss ] = [0; r]

236 / 253

Target equilibrium pairs

  [I − A  −B; C  0] [xss ; uss ] = [0; r]

Call the matrix on the left T. When does a solution exist?
▶ T is an (n + p)-by-(n + m) matrix
▶ If rank(T) = n + p and p ⩽ m (T full row rank), then for any
  r there exists a pair (xss , uss )
  (Note this requires number of outputs ⩽ number of inputs)
▶ If rank(T) = n + p and p = m (T full rank), then for any r
  there exists a unique pair (xss , uss )
  (Note this requires number of outputs = number of inputs)
▶ If T is only full column rank, then a pair (xss , uss ) exists for
  only some r

237 / 253

Target equilibrium pairs
Examples

For the system x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)
▶ If C = [1 0], r = ysp , then m = p = 1, T is 3 × 3, rank(T) = 3
  and
    [0  −1  −0.5; 0  0  −1; 1  0  0] [xss,1 ; xss,2 ; uss ] = [0; 0; ysp ]
    ⇒ xss,1 = ysp , xss,2 = 0, uss = 0
▶ But if C = I, then p = 2 > m = 1, T is 4 × 3, rank(T) = 3, and
  no solution exists unless r2 = 0

238 / 253
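The first example can be checked numerically by assembling T and solving T [xss ; uss ] = [0; r]. A quick sketch, using the example's matrices (the setpoint value 3.0 is an arbitrary choice for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
n, m, p = 2, 1, 1

# T = [I - A, -B; C, 0]
T = np.block([[np.eye(n) - A, -B],
              [C, np.zeros((p, m))]])

y_sp = 3.0
rhs = np.array([0.0, 0.0, y_sp])
z = np.linalg.solve(T, rhs)   # T is square and full rank here, so unique
x_ss, u_ss = z[:n], z[n:]
print(x_ss, u_ss)             # xss ~ [3, 0], uss ~ [0], as on the slide
```

For the C = I case, `np.linalg.solve` no longer applies (T is 4 × 3); a least-squares solve would return an answer, but it only corresponds to an exact equilibrium when r2 = 0.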

Target optimization under constraints
It might be that the desired (xss , uss ) does not satisfy constraints
In that case, a popular approach is to solve an optimization
problem to determine the "nearest" equilibrium pair within the
constraints. For example,

min_{xss , uss} { ∥yss − r∥ + ∥uss ∥ : xss = Axss + Buss , Px xss ⩽ qx , Pu uss ⩽ qu }

A steady-state error is inevitable, since yss = Cxss ̸= r

239 / 253

Disadvantage of this approach
What if uss is not known exactly?
▶ An estimate has been used
▶ Modelling errors
▶ Numerical precision
Suppose the correct value is uss but we use ũss in the MPC
optimization and control law. Then,
  u(k) = ũss + υ∗(k|k)
       = uss + (ũss − uss ) + υ∗(k|k)
Applying this to the system,
  x(k + 1) = Ax(k) + B( uss + (ũss − uss ) + υ∗(k|k) )
           = Ax(k) + B( uss + υ∗(k|k) ) + B( ũss − uss )

240 / 253

Disadvantage of this approach

  x(k + 1) = Ax(k) + B( uss + υ∗(k|k) ) + B( ũss − uss )

The system will not converge to xss . Suppose x(k) = xss :
  x(k) = xss ⇒ ξ(k) = 0
             ⇒ υ∗(k|k) = 0
             ⇒ x(k + 1) = Axss + Buss + B(ũss − uss )
             ⇒ x(k + 1) = xss + B(ũss − uss )
It can be shown that the system will converge to some x̃ss ̸= xss
Therefore, if there is even the smallest error between ũss and
uss , then there will be a steady-state error between yss and r!

241 / 253
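This effect is easy to reproduce in simulation. A sketch under stated assumptions: the plant is the running example; the gain K = [−0.5, −1.2] is simply one stabilizing choice standing in for the (here unconstrained) MPC law κN ; and the target error δ is an invented number.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])

y_sp = 2.0
x_ss = np.array([[y_sp], [0.0]])  # correct target: xss = [ysp; 0], uss = 0
u_ss = 0.0
delta = 0.01                      # small error in the stored uss
u_ss_tilde = u_ss + delta

K = np.array([[-0.5, -1.2]])      # a stabilizing gain: eig(A+BK) ~ 0.44, 0.11
assert np.all(np.abs(np.linalg.eigvals(A + B @ K)) < 1)

x = np.zeros((2, 1))
for _ in range(300):
    u = u_ss_tilde + K @ (x - x_ss)  # control law built around the wrong uss
    x = A @ x + B @ u

y = (C @ x).item()
print(y - y_sp)   # nonzero steady-state offset (here 2*delta = 0.02)
```

The residual offset is (I − A − BK)⁻¹Bδ pushed through C: for this example a 0.01 error in uss produces a 0.02 error in y, exactly the mechanism the slide describes.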

Eliminating offset – the ∆u form
To avoid this issue, an alternative formulation of MPC is used

min_{u(k)}  Σ_{j=0}^{N−1} ( [y(k+j|k) − r]⊤S[y(k+j|k) − r] + ∆u⊤(k+j|k)R∆u(k+j|k) )
              + ξ⊤(k + N|k)Pξ(k + N|k)
where
  ∆u(k + j|k) = u(k + j|k) − u(k + j − 1|k)
  ∆u(k|k) = u(k|k) − u(k − 1)

It can be shown that ∆u(k) → 0, regardless of what uss is
Assuming yss = r is feasible, y(k) → r
Standard practice in industry; also the form adopted in Maciejowski

242 / 253
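One way to realise the ∆u idea is to regulate an augmented "velocity form" state z = [∆x(k); y(k) − r] to zero, where the ∆u penalty appears naturally. The sketch below is an illustration only, not the formulation from the notes: an unconstrained LQ gain stands in for the MPC law, and the plant, weights, and constant input disturbance are made-up numbers.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Assumed plant with an unknown constant input disturbance d
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
d = 0.3
r = 2.0

# Velocity-form model: z(k+1) = Aa z(k) + Ba du(k), du = u(k) - u(k-1).
# Constant disturbances cancel when the dynamics are differenced.
Aa = np.block([[A, np.zeros((2, 1))],
               [C @ A, np.eye(1)]])
Ba = np.vstack([B, C @ B])

# Unconstrained LQ gain on the augmented model (stand-in for the MPC law)
Qa = np.diag([1.0, 1.0, 10.0])  # weight the tracking error y - r heavily
Ra = np.array([[1.0]])
S = solve_discrete_are(Aa, Ba, Qa, Ra)
Ka = -np.linalg.solve(Ra + Ba.T @ S @ Ba, Ba.T @ S @ Aa)

x = np.zeros((2, 1)); x_prev = x.copy(); u = 0.0
for _ in range(500):
    z = np.vstack([x - x_prev, C @ x - r])
    u = u + (Ka @ z).item()   # integrate the du moves
    x_prev = x
    x = A @ x + B * (u + d)   # the controller never sees d

print((C @ x).item())   # ~ 2.0: offset-free despite the disturbance
```

Because the controller acts through increments, the integrated input automatically settles at whatever value cancels the disturbance; no explicit uss is ever needed.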

.
.

Section 4
Setpoint tracking – under uncertainty

.

243 / 253

Tracking under uncertainty
Our glance at the ũss ̸= uss issue indicates that there could be a
problem if a disturbance acts on the system

  x(k + 1) − xss = A(x(k) − xss ) + Bυ∗(k|k) + B(ũss − uss )

(the final term acting as a disturbance), since y(k) → yss ̸= r
More generally, we'll consider the uncertain system
  x(k + 1) = Ax(k) + Bu(k) + Bd d(k)
  y(k) = Cx(k) + Dd d(k)
where d is a disturbance that affects the process and/or the
output

244 / 253

Tracking under output disturbance
Example

Let's look again at
  x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)
  y(k) = [1 0] x(k) + d
and r = ysp
Using the SSTO equations, we found that
  [I − A  −B; C  0] [xss ; uss ] = [0; r]  ⇒  xss = [ysp ; 0], uss = 0
Applying tracking MPC, x(k) → xss , u(k) → uss = 0
Hence, y(k) → ysp + d
Clearly, unless we model d, there'll always be an offset

245 / 253

An MPC framework for tracking under uncertainty

[Diagram: r and the disturbance estimate d̂ feed the SSTO, which passes
(xss , uss ) to the MPC; the MPC drives the plant x⁺ = Ax + Bu + Bd d, whose
output y = Cx + Dd d is processed by an observer producing x̂ and d̂]

  x(k + 1) = Ax(k) + Bu(k) + Bd d(k)
  y(k) = Cx(k) + Dd d(k)

246 / 253

An MPC framework for tracking under uncertainty
Assumptions
▶ The disturbance is (piecewise) constant: d(k + 1) = d(k)
▶ We can measure perfectly
  ▶ the state: x̂ = x
  ▶ the disturbance: d̂ = d
▶ SSTO computes (xss , uss ) given r and d
▶ MPC has to steer
    x(k) → xss , u(k) → uss

Define
  ξ(k) ≜ x(k) − xss , υ(k) ≜ u(k) − uss
Then the aim is to regulate
  ξ(k) → 0, υ(k) → 0

247 / 253

State and disturbance estimation
If we could not measure x and d, we would have to estimate
them from (possibly noisy) measurements of the output y(k)
To do this, we design an observer

  [x̂(k + 1|k); d̂(k + 1|k)] = 𝒜 [x̂(k|k − 1); d̂(k|k − 1)] + [B; 0] u(k) + [Lx ; Ld ] y(k)

where
  𝒜 ≜ [A  Bd ; 0  I] − [Lx ; Ld ] [C  Dd ]

▶ With no noise, estimation error converges to zero if 𝒜 is
  stable (i.e., has eigenvalues within the unit circle)
▶ With noise, Kalman filter theory tells us how to set (Lx , Ld )
  to minimize the mean-square estimation error

248 / 253
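A compact sketch of such an observer converging on a constant output disturbance. Everything here is assumed for illustration: a stable plant (so the augmented pair is observable), Bd = 0, Dd = 1, observer poles placed at {0.5, 0.4, 0.3}, and zero input.

```python
import numpy as np
from scipy.signal import place_poles

# Assumed example: stable plant with a constant output disturbance
A = np.array([[0.9, 0.5], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Bd = np.zeros((2, 1))
Dd = np.array([[1.0]])

# Augmented matrices [A Bd; 0 I] and [C Dd]
Aa = np.block([[A, Bd], [np.zeros((1, 2)), np.eye(1)]])
Ca = np.hstack([C, Dd])
Ba = np.vstack([B, np.zeros((1, 1))])

# Observer gain L = [Lx; Ld] by pole placement on the dual (transposed) system
L = place_poles(Aa.T, Ca.T, [0.5, 0.4, 0.3]).gain_matrix.T
Acal = Aa - L @ Ca  # the script-A matrix; stable by construction
assert np.all(np.abs(np.linalg.eigvals(Acal)) < 1)

d = 0.7                   # true (unknown) constant disturbance
x = np.array([[1.0], [-1.0]])
z_hat = np.zeros((3, 1))  # [x_hat; d_hat], initialised at zero
u = 0.0
for _ in range(120):
    y = C @ x + Dd * d
    z_hat = Acal @ z_hat + Ba * u + L @ y   # the observer recursion above
    x = A @ x + B * u

print(z_hat[2, 0])   # d_hat ~ 0.7: the disturbance has been recovered
```

The estimation error obeys e(k+1) = 𝒜 e(k), so with all eigenvalues of 𝒜 at 0.5 or below it vanishes geometrically, noise-free, exactly as the first bullet states.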

SSTO under output disturbance
The SSTO equations are modified from

  [I − A  −B; C  0] [xss ; uss ] = [0; r]

to

  [I − A  −B; C  0] [xss ; uss ] = [Bd d ; r − Dd d]

▶ (xss , uss ) now account for d
▶ Otherwise, the same conditions hold for the existence of
  target equilibrium pairs

249 / 253

The LQ-MPC offset-free tracking problem

min_{υ(k)}  Σ_{j=0}^{N−1} ( ξ⊤(k + j|k)Qξ(k + j|k) + υ⊤(k + j|k)Rυ(k + j|k) )
              + ξ⊤(k + N|k)Pξ(k + N|k)
subject to
  ξ(k + j + 1|k) = Aξ(k + j|k) + Bυ(k + j|k), j = 0, 1, . . . , N − 1
  ξ(k|k) = ξ(k)
  Px ξ(k + j|k) ⩽ qx − Px xss , j = 0, 1, . . . , N − 1
  Pu υ(k + j|k) ⩽ qu − Pu uss , j = 0, 1, . . . , N − 1
  PxN ξ(k + N|k) ⩽ qxN − PxN xss

(xss , uss ) already account for d

250 / 253

Offset-free tracking
Example

Our system
  x(k + 1) = [1 1; 0 1] x(k) + [0.5; 1] u(k)
  y(k) = [1 0] x(k) + d
previously tracked r = ysp with non-zero offset
The revised SSTO equations give
  [I − A  −B; C  0] [xss ; uss ] = [0; r − d]  ⇒  xss = [r − d ; 0], uss = 0
Applying tracking MPC, x(k) → xss , u(k) → uss = 0
Hence, y(k) → (r − d) + d = r
The observer and controller ensure offset-free tracking

251 / 253
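The revised target calculation is a one-line change to the earlier numerical solve: the right-hand side becomes [Bd d ; r − Dd d]. A sketch using the example's matrices (the values of r and d are invented for illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])
Dd = 1.0            # output disturbance only, so Bd = 0
r, d = 2.0, 0.4

T = np.block([[np.eye(2) - A, -B],
              [C, np.zeros((1, 1))]])
rhs = np.array([0.0, 0.0, r - Dd * d])  # [Bd d; r - Dd d] with Bd = 0
z = np.linalg.solve(T, rhs)
x_ss, u_ss = z[:2], z[2]

y_ss = (C @ x_ss).item() + Dd * d       # the resulting steady-state output
print(x_ss, u_ss, y_ss)                 # xss ~ [1.6, 0], uss ~ 0, yss ~ 2.0 = r
```

The disturbance shifts the state target down by d exactly so that the measured output lands back on r, which is the mechanism behind the y(k) → (r − d) + d = r line above.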

Offset-free tracking
Summary

[Diagram: as before, r and d̂ feed the SSTO; (xss , uss ) feed the MPC;
the observer reconstructs x̂ and d̂ from the plant output]

▶ Use SSTO with estimate of d to compute (xss , uss )
▶ Use tracking MPC to send x(k) → xss , u(k) → uss
▶ Disadvantages
  ▶ Whenever r or d changes, have to recompute (xss , uss )
  ▶ Success depends on getting (xss , uss ) right

252 / 253

Recall: eliminating offset with the ∆u form
To avoid this issue, an alternative formulation of MPC is used

min_{u(k)}  Σ_{j=0}^{N−1} ( [y(k+j|k) − r]⊤S[y(k+j|k) − r] + ∆u⊤(k+j|k)R∆u(k+j|k) )
              + [x(k + N|k) − xss ]⊤P[x(k + N|k) − xss ]
where
  ∆u(k + j|k) = u(k + j|k) − u(k + j − 1|k)
  ∆u(k|k) = u(k|k) − u(k − 1)

It can be shown that ∆u(k) → 0, regardless of what uss is
Assuming yss = r is feasible, y(k) → r
Standard practice in industry; also the form adopted in Maciejowski

253 / 253
