Optimization Problem

Overview

Model Predictive Control solves an optimization problem – specifically, a quadratic program (QP) – at each control interval. The solution determines the manipulated variables (MVs) to be used in the plant until the next control interval.

This QP problem includes the following features:

  • The objective (or "cost") function — A scalar, nonnegative measure of controller performance to be minimized.

  • Constraints — Conditions the solution must satisfy, such as physical bounds on MVs and plant output variables.

  • Decision — The MV adjustments that minimize the cost function while satisfying the constraints.

The following sections describe these features in more detail.

Standard Cost Function

The standard cost function is the sum of four terms, each focusing on a particular aspect of controller performance, as follows:

J(z_k) = J_y(z_k) + J_u(z_k) + J_{\Delta u}(z_k) + J_\epsilon(z_k).

Here, zk is the QP decision. As described below, each term includes weights that help you balance competing objectives. The controller provides default weights, but you will usually need to adjust them to tune the controller for your application.

Output Reference Tracking

In most applications, the controller must keep selected plant outputs at or near specified reference values. MPC controller uses the following scalar performance measure:

J_y(z_k) = \sum_{j=1}^{n_y} \sum_{i=1}^{p} \left\{ \frac{w_{i,j}^{y}}{s_j^{y}} \left[ r_j(k+i|k) - y_j(k+i|k) \right] \right\}^2.

Here,

  • k — Current control interval.

  • p — Prediction horizon (number of intervals).

  • ny — Number of plant output variables.

  • zk — QP decision, given by:

    z_k^T = \left[ u(k|k)^T \;\; u(k+1|k)^T \;\; \cdots \;\; u(k+p-1|k)^T \;\; \epsilon_k \right].

  • yj(k+i|k) — Predicted value of jth plant output at ith prediction horizon step, in engineering units.

  • rj(k+i|k) — Reference value for jth plant output at ith prediction horizon step, in engineering units.

  • sjy — Scale factor for jth plant output, in engineering units.

  • wi,jy — Tuning weight for jth plant output at ith prediction horizon step (dimensionless).

The values ny, p, sjy, and wi,jy are controller specifications, and are constant. The controller receives rj(k+i|k) values for the entire prediction horizon. The controller uses the state observer to predict the plant outputs. At interval k, the controller state estimates and MD values are available. Thus, Jy is a function of zk only.
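As a concrete illustration, the following Python/NumPy sketch (illustrative only, not toolbox code; all names are hypothetical) evaluates Jy for given weights, scale factors, references, and predicted outputs:

```python
import numpy as np

def output_tracking_cost(w_y, s_y, r, y):
    """Standard output-tracking term J_y.

    w_y : (p, ny) tuning weights; r, y : (p, ny) references and
    predicted outputs; s_y : (ny,) scale factors."""
    scaled_error = w_y * (r - y) / s_y   # broadcast over horizon steps
    return float(np.sum(scaled_error ** 2))

# two outputs, prediction horizon p = 3
w_y = np.ones((3, 2))
s_y = np.array([10.0, 1.0])
r = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
y = np.zeros((3, 2))
print(output_tracking_cost(w_y, s_y, r, y))  # 3 * (1/10)^2, approx 0.03
```

Note how the scale factor divides the raw error before the tuning weight is applied, so that outputs in different engineering units contribute comparably.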

Manipulated Variable Tracking

In some applications, e.g., when there are more manipulated variables than plant outputs, the controller must keep selected manipulated variables (MVs) at or near specified target values. MPC controller uses the following scalar performance measure:

J_u(z_k) = \sum_{j=1}^{n_u} \sum_{i=0}^{p-1} \left\{ \frac{w_{i,j}^{u}}{s_j^{u}} \left[ u_j(k+i|k) - u_{j,\text{target}}(k+i|k) \right] \right\}^2.

Here,

  • k — Current control interval.

  • p — Prediction horizon (number of intervals).

  • nu — Number of manipulated variables.

  • zk — QP decision, given by:

    z_k^T = \left[ u(k|k)^T \;\; u(k+1|k)^T \;\; \cdots \;\; u(k+p-1|k)^T \;\; \epsilon_k \right].

  • uj,target(k+i|k) — Target value for jth MV at ith prediction horizon step, in engineering units.

  • sju — Scale factor for jth MV, in engineering units.

  • wi,ju — Tuning weight for jth MV at ith prediction horizon step (dimensionless).

The values nu, p, sju, and wi,ju are controller specifications, and are constant. The controller receives uj,target(k+i|k) values for the entire horizon. The controller uses the state observer to predict the plant outputs. Thus, Ju is a function of zk only.

Manipulated Variable Move Suppression

Most applications prefer small MV adjustments (moves). MPC uses the following scalar performance measure:

J_{\Delta u}(z_k) = \sum_{j=1}^{n_u} \sum_{i=0}^{p-1} \left\{ \frac{w_{i,j}^{\Delta u}}{s_j^{u}} \left[ u_j(k+i|k) - u_j(k+i-1|k) \right] \right\}^2.

Here,

  • k — Current control interval.

  • p — Prediction horizon (number of intervals).

  • nu — Number of manipulated variables.

  • zk — QP decision, given by:

    z_k^T = \left[ u(k|k)^T \;\; u(k+1|k)^T \;\; \cdots \;\; u(k+p-1|k)^T \;\; \epsilon_k \right].

  • sju — Scale factor for jth MV, in engineering units.

  • wi,jΔu — Tuning weight for jth MV movement at ith prediction horizon step (dimensionless).

The values nu, p, sju, and wi,jΔu are controller specifications, and are constant. Here, u(k–1|k) = u(k–1), the known MV values from the previous control interval. Thus, JΔu is a function of zk only.

In addition, a control horizon m < p (or MV blocking) constrains certain MV moves to be zero.
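For example, the following Python sketch (illustrative only, not toolbox code; names are hypothetical) evaluates the move-suppression term for a planned MV sequence, using the known u(k–1):

```python
import numpy as np

def move_suppression_cost(w_du, s_u, u_plan, u_prev):
    """J_du: penalize MV increments over the horizon.

    u_plan : (p, nu) planned MVs u(k+i|k); u_prev : (nu,) known u(k-1)."""
    u_shift = np.vstack([u_prev, u_plan[:-1]])       # u(k+i-1|k)
    scaled_moves = w_du * (u_plan - u_shift) / s_u
    return float(np.sum(scaled_moves ** 2))

u_plan = np.array([[1.0], [1.5], [1.5]])   # p = 3, single MV
J_du = move_suppression_cost(np.ones((3, 1)), np.array([2.0]),
                             u_plan, np.array([0.0]))
print(J_du)  # moves are 1.0, 0.5, 0.0 -> (1.0^2 + 0.5^2)/2^2 = 0.3125
```

The third planned move is zero, so it contributes nothing; a control horizon or blocking scheme forces such increments to zero by construction.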

Constraint Violation

In practice, constraint violations might be unavoidable. Soft constraints allow a feasible QP solution under such conditions. MPC controller employs a dimensionless, nonnegative slack variable, εk, which quantifies the worst-case constraint violation (see Constraints). The corresponding performance measure is:

J_\epsilon(z_k) = \rho_\epsilon \, \epsilon_k^2.

Here,

  • zk — QP decision, given by:

    z_k^T = \left[ u(k|k)^T \;\; u(k+1|k)^T \;\; \cdots \;\; u(k+p-1|k)^T \;\; \epsilon_k \right].

  • εk — Slack variable at control interval k (dimensionless).

  • ρϵ — Constraint violation penalty weight (dimensionless).

Alternative Cost Function

You can elect to use the following alternative to the standard cost function:

J(z_k) = \sum_{i=0}^{p-1} \left\{ e_y^T(k+i) \, Q \, e_y(k+i) + e_u^T(k+i) \, R_u \, e_u(k+i) + \Delta u^T(k+i) \, R_{\Delta u} \, \Delta u(k+i) \right\} + \rho_\epsilon \, \epsilon_k^2.

Here, Q (ny-by-ny), Ru, and RΔu (nu-by-nu) are positive-semi-definite weight matrices, and:

e_y(k+i) = S_y^{-1} \left[ r(k+i+1|k) - y(k+i+1|k) \right]
e_u(k+i) = S_u^{-1} \left[ u_{\text{target}}(k+i|k) - u(k+i|k) \right]
\Delta u(k+i) = S_u^{-1} \left[ u(k+i|k) - u(k+i-1|k) \right].

Also,

  • Sy — Diagonal matrix of plant output variable scale factors, in engineering units.

  • Su — Diagonal matrix of MV scale factors, in engineering units.

  • r(k+i+1|k) — ny plant output reference values at the ith prediction horizon step, in engineering units.

  • y(k+i+1|k) — ny plant outputs at the ith prediction horizon step, in engineering units.

  • zk — QP decision, given by:

    z_k^T = \left[ u(k|k)^T \;\; u(k+1|k)^T \;\; \cdots \;\; u(k+p-1|k)^T \;\; \epsilon_k \right].

  • utarget(k+i|k) — nu MV target values corresponding to u(k+i|k), in engineering units.

Output predictions use the state observer, as in the standard cost function.

The alternative cost function allows off-diagonal weighting, but requires the weights to be identical at each prediction horizon step.

The alternative and standard cost functions are identical if the following conditions hold:

  • The standard cost function employs weights wi,jy, wi,ju, and wi,jΔu that are constant with respect to the index i = 1:p.

  • The matrices Q, Ru, and RΔu are diagonal with the squares of those weights as the diagonal elements.
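Under these conditions you can verify the equivalence numerically. The sketch below (illustrative Python with made-up values, not toolbox code) builds Q from constant standard weights and compares one output-tracking term in both forms:

```python
import numpy as np

# constant (horizon-independent) standard weights for ny = 2 outputs
w_y = np.array([0.5, 2.0])
Q = np.diag(w_y ** 2)        # alternative-form weight: squared weights on diagonal

s_y = np.array([10.0, 1.0])  # output scale factors
r = np.array([3.0, 1.0])     # references at one horizon step
y = np.array([2.0, 0.5])     # predicted outputs at that step

e_y = (r - y) / s_y          # scaled tracking error, Sy^-1 [r - y]
alt_term = e_y @ Q @ e_y                         # alternative form
std_term = np.sum((w_y * (r - y) / s_y) ** 2)    # standard form
print(np.isclose(alt_term, std_term))  # True
```

With off-diagonal entries in Q, Ru, or RΔu, no equivalent standard-form weighting exists, which is the main reason to choose the alternative form.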

Constraints

Certain constraints are implicit. For example, a control horizon m < p (or MV blocking) forces some MV increments to be zero, and the state observer used for plant output prediction is a set of implicit equality constraints. Explicit constraints that you can configure are described below.

Bounds on Plant Outputs, MVs, and MV Increments

The most common MPC constraints are bounds, as follows.

\frac{y_{j,\min}(i)}{s_j^y} - \epsilon_k V_{j,\min}^{y}(i) \le \frac{y_j(k+i|k)}{s_j^y} \le \frac{y_{j,\max}(i)}{s_j^y} + \epsilon_k V_{j,\max}^{y}(i), \quad i = 1{:}p, \; j = 1{:}n_y
\frac{u_{j,\min}(i)}{s_j^u} - \epsilon_k V_{j,\min}^{u}(i) \le \frac{u_j(k+i-1|k)}{s_j^u} \le \frac{u_{j,\max}(i)}{s_j^u} + \epsilon_k V_{j,\max}^{u}(i), \quad i = 1{:}p, \; j = 1{:}n_u
\frac{\Delta u_{j,\min}(i)}{s_j^u} - \epsilon_k V_{j,\min}^{\Delta u}(i) \le \frac{\Delta u_j(k+i-1|k)}{s_j^u} \le \frac{\Delta u_{j,\max}(i)}{s_j^u} + \epsilon_k V_{j,\max}^{\Delta u}(i), \quad i = 1{:}p, \; j = 1{:}n_u.

Here, the V parameters (ECR values) are dimensionless controller constants analogous to the cost function weights but used for constraint softening (see Constraint Softening). Also,

  • ϵk — Scalar QP slack variable (dimensionless) used for constraint softening.

  • sjy — Scale factor for jth plant output, in engineering units.

  • sju — Scale factor for jth MV, in engineering units.

  • yj,min(i), yj,max(i) — lower and upper bounds for jth plant output at ith prediction horizon step, in engineering units.

  • uj,min(i), uj,max(i) — lower and upper bounds for jth MV at ith prediction horizon step, in engineering units.

  • Δuj,min(i), Δuj,max(i) — lower and upper bounds for jth MV increment at ith prediction horizon step, in engineering units.

Except for the slack variable non-negativity condition, all of the above constraints are optional and are inactive by default (i.e., initialized with infinite limiting values). To include a bound constraint, you must specify a finite limit when you design the controller.
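The scaled, softened bound pattern can be checked numerically. A minimal sketch (Python; the helper name is hypothetical, not a toolbox function) for one output upper bound:

```python
def softened_upper_ok(y_pred, y_max, s_y, eps_k, V_max):
    """Softened upper bound: y/s <= y_max/s + eps_k * V_max.

    eps_k = 0 recovers the hard constraint; a larger ECR value V_max
    lets the slack variable relax the bound more."""
    return y_pred / s_y <= y_max / s_y + eps_k * V_max

print(softened_upper_ok(10.4, 10.0, 1.0, 0.0, 1.0))    # False: hard bound violated
print(softened_upper_ok(10.4, 10.0, 1.0, 0.05, 10.0))  # True: slack absorbs it
```

Because the same scalar slack appears in every softened constraint, the QP trades off all violations jointly through the ρϵ penalty.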

QP Matrices

This section describes the matrices associated with the model predictive control optimization problem described in Optimization Problem.

Prediction

Assume that the disturbance model described in Input Disturbance Model has unit gain; for example, d(k) = nd(k), where nd(k) is white Gaussian noise. Define the augmented state-space matrices as

x \leftarrow \begin{bmatrix} x \\ x_d \end{bmatrix}, \quad A \leftarrow \begin{bmatrix} A & B_d\bar{C} \\ 0 & \bar{A} \end{bmatrix}, \quad B_u \leftarrow \begin{bmatrix} B_u \\ 0 \end{bmatrix}, \quad B_v \leftarrow \begin{bmatrix} B_v \\ 0 \end{bmatrix}, \quad B_d \leftarrow \begin{bmatrix} B_d\bar{D} \\ \bar{B} \end{bmatrix}, \quad C \leftarrow \begin{bmatrix} C & D_d\bar{C} \end{bmatrix}.

Then, the prediction model is:

x(k+1) = A x(k) + B_u u(k) + B_v v(k) + B_d n_d(k)

y(k) = C x(k) + D_v v(k) + D_d n_d(k)

Next, consider the problem of predicting the future trajectories of the model performed at time k=0. Set nd(i)=0 for all prediction instants i, and obtain

y(i|0) = C \left[ A^i x(0) + \sum_{h=0}^{i-1} A^{i-1-h} \left( B_u \left( u(-1) + \sum_{j=0}^{h} \Delta u(j) \right) + B_v v(h) \right) \right] + D_v v(i)

This equation gives the solution

\begin{bmatrix} y(1) \\ \vdots \\ y(p) \end{bmatrix} = S_x x(0) + S_{u1} u(-1) + S_u \begin{bmatrix} \Delta u(0) \\ \vdots \\ \Delta u(p-1) \end{bmatrix} + H_v \begin{bmatrix} v(0) \\ \vdots \\ v(p) \end{bmatrix}

where

S_x = \begin{bmatrix} CA \\ CA^2 \\ \vdots \\ CA^p \end{bmatrix} \in \mathbb{R}^{pn_y \times n_x}, \qquad S_{u1} = \begin{bmatrix} CB_u \\ CB_u + CAB_u \\ \vdots \\ \sum_{h=0}^{p-1} CA^h B_u \end{bmatrix} \in \mathbb{R}^{pn_y \times n_u}

S_u = \begin{bmatrix} CB_u & 0 & \cdots & 0 \\ CB_u + CAB_u & CB_u & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{h=0}^{p-1} CA^h B_u & \sum_{h=0}^{p-2} CA^h B_u & \cdots & CB_u \end{bmatrix} \in \mathbb{R}^{pn_y \times pn_u}

H_v = \begin{bmatrix} CB_v & D_v & 0 & \cdots & 0 \\ CAB_v & CB_v & D_v & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ CA^{p-1}B_v & CA^{p-2}B_v & CA^{p-3}B_v & \cdots & D_v \end{bmatrix} \in \mathbb{R}^{pn_y \times (p+1)n_v}.
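The structure of these matrices can be reproduced numerically. The following Python sketch (illustrative only, not toolbox code) builds Sx, Su1, Su, and Hv for a given state-space model and horizon p:

```python
import numpy as np

def prediction_matrices(A, Bu, Bv, C, Dv, p):
    """Build Sx, Su1, Su, Hv of the stacked output-prediction equation."""
    ny, nu, nv = C.shape[0], Bu.shape[1], Bv.shape[1]
    Sx = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, p + 1)])
    # cum[i] = sum_{h=0}^{i} C A^h Bu  (cumulative step-response blocks)
    cum = np.cumsum([C @ np.linalg.matrix_power(A, h) @ Bu for h in range(p)],
                    axis=0)
    Su1 = np.vstack(cum)
    Su = np.zeros((p * ny, p * nu))
    Hv = np.zeros((p * ny, (p + 1) * nv))
    for i in range(p):                       # block row i predicts y(i+1)
        for j in range(i + 1):
            Su[i*ny:(i+1)*ny, j*nu:(j+1)*nu] = cum[i - j]
            Hv[i*ny:(i+1)*ny, j*nv:(j+1)*nv] = \
                C @ np.linalg.matrix_power(A, i - j) @ Bv
        Hv[i*ny:(i+1)*ny, (i+1)*nv:(i+2)*nv] = Dv
    return Sx, Su1, Su, Hv

# scalar example: x(k+1) = 0.9 x + 0.1 u + 0.2 v, y = x, p = 3
Sx, Su1, Su, Hv = prediction_matrices(np.array([[0.9]]), np.array([[0.1]]),
                                      np.array([[0.2]]), np.array([[1.0]]),
                                      np.array([[0.0]]), p=3)
print(Sx.shape, Su1.shape, Su.shape, Hv.shape)  # matches pny x {nx, nu, pnu, (p+1)nv}
```

Su is lower block triangular because future moves cannot affect earlier outputs; its first column equals Su1 minus the accumulated effect of u(–1).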

Optimization Variables

Let m be the number of free control moves, and let z= [z0; ...; zm–1]. Then,

\begin{bmatrix} \Delta u(0) \\ \vdots \\ \Delta u(p-1) \end{bmatrix} = J_M \begin{bmatrix} z_0 \\ \vdots \\ z_{m-1} \end{bmatrix} \qquad \text{(2-1)}

where JM depends on the choice of blocking moves. Together with the slack variable ɛ, vectors z0, ..., zm–1 constitute the free optimization variables of the optimization problem. For systems with a single manipulated variable, z0, ..., zm–1 are scalars.

Consider the blocking moves depicted in the following graph.

Blocking Moves: Inputs and Input Increments for moves = [2 3 2]

This graph corresponds to the choice moves = [2 3 2], or, equivalently:

u(0)=u(1), u(2)=u(3)=u(4), u(5)=u(6)
Δu(0)=z_0, Δu(2)=z_1, Δu(5)=z_2, Δu(1)=Δu(3)=Δu(4)=Δu(6)=0.

Then, the corresponding matrix JM is

J_M = \begin{bmatrix} I & 0 & 0 \\ 0 & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{bmatrix}
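Constructing JM from a blocking-moves vector can be sketched as follows (Python, illustrative; the helper name is hypothetical):

```python
import numpy as np

def blocking_matrix(moves, nu=1):
    """J_M for MV blocking: each free move z_j applies at the first
    step of its block; all other increments in the block are zero."""
    p, m = sum(moves), len(moves)
    JM = np.zeros((p * nu, m * nu))
    row = 0
    for j, blk in enumerate(moves):
        JM[row*nu:(row+1)*nu, j*nu:(j+1)*nu] = np.eye(nu)
        row += blk
    return JM

JM = blocking_matrix([2, 3, 2])   # the example above: p = 7, m = 3
print(JM.shape)  # (7, 3)
```

Multiplying JM by [z0; z1; z2] reproduces exactly the increment pattern listed above, with zero rows for the blocked steps.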

Cost Function

Standard Form.  The function to be optimized is

J(z,\varepsilon) = \left( \begin{bmatrix} u(0) \\ \vdots \\ u(p-1) \end{bmatrix} - \begin{bmatrix} u_{\text{target}}(0) \\ \vdots \\ u_{\text{target}}(p-1) \end{bmatrix} \right)^T W_u^2 \left( \begin{bmatrix} u(0) \\ \vdots \\ u(p-1) \end{bmatrix} - \begin{bmatrix} u_{\text{target}}(0) \\ \vdots \\ u_{\text{target}}(p-1) \end{bmatrix} \right) + \begin{bmatrix} \Delta u(0) \\ \vdots \\ \Delta u(p-1) \end{bmatrix}^T W_{\Delta u}^2 \begin{bmatrix} \Delta u(0) \\ \vdots \\ \Delta u(p-1) \end{bmatrix} + \left( \begin{bmatrix} y(1) \\ \vdots \\ y(p) \end{bmatrix} - \begin{bmatrix} r(1) \\ \vdots \\ r(p) \end{bmatrix} \right)^T W_y^2 \left( \begin{bmatrix} y(1) \\ \vdots \\ y(p) \end{bmatrix} - \begin{bmatrix} r(1) \\ \vdots \\ r(p) \end{bmatrix} \right) + \rho_\varepsilon \varepsilon^2

where

W_u = \mathrm{diag}\left( w_{0,1}^{u}, w_{0,2}^{u}, \ldots, w_{0,n_u}^{u}, \ldots, w_{p-1,1}^{u}, w_{p-1,2}^{u}, \ldots, w_{p-1,n_u}^{u} \right)
W_{\Delta u} = \mathrm{diag}\left( w_{0,1}^{\Delta u}, w_{0,2}^{\Delta u}, \ldots, w_{0,n_u}^{\Delta u}, \ldots, w_{p-1,1}^{\Delta u}, w_{p-1,2}^{\Delta u}, \ldots, w_{p-1,n_u}^{\Delta u} \right)
W_y = \mathrm{diag}\left( w_{1,1}^{y}, w_{1,2}^{y}, \ldots, w_{1,n_y}^{y}, \ldots, w_{p,1}^{y}, w_{p,2}^{y}, \ldots, w_{p,n_y}^{y} \right) \qquad \text{(2-2)}

Finally, after substituting u(k), Δu(k), and y(k), J(z,ε) can be rewritten as

J(z,\varepsilon) = \rho_\varepsilon \varepsilon^2 + z^T K_{\Delta u} z + 2 \left( \begin{bmatrix} r(1) \\ \vdots \\ r(p) \end{bmatrix}^T K_r + \begin{bmatrix} v(0) \\ \vdots \\ v(p) \end{bmatrix}^T K_v + u(-1)^T K_u + \begin{bmatrix} u_{\text{target}}(0) \\ \vdots \\ u_{\text{target}}(p-1) \end{bmatrix}^T K_{ut} + x(0)^T K_x \right) z + \text{constant} \qquad \text{(2-3)}

    Note   You may want the QP problem to remain strictly convex. If the condition number of the Hessian matrix K_Δu is larger than 10^12, add the quantity 10*sqrt(eps) to each diagonal term. You can use this remedy only when all input rates are unpenalized (W_Δu = 0) (see Weights in the Model Predictive Control Toolbox™ reference documentation).
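The note's remedy can be sketched as follows (Python, illustrative; NumPy's float64 machine epsilon matches MATLAB's eps):

```python
import numpy as np

def regularize_hessian(K, cond_tol=1e12):
    """If the Hessian is ill-conditioned, add 10*sqrt(eps) to each
    diagonal term to keep the QP strictly convex (valid only when
    all input rates are unpenalized, W_du = 0)."""
    if np.linalg.cond(K) > cond_tol:
        K = K + 10 * np.sqrt(np.finfo(float).eps) * np.eye(K.shape[0])
    return K

K = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-15]])       # nearly singular
print(np.linalg.cond(regularize_hessian(K)) < 1e12)  # True
```

The perturbation is tiny relative to typical weight magnitudes, so it restores strict convexity without noticeably changing the optimal moves.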

Alternative Cost Function.  If you are using the alternative cost function shown in Alternative Cost Function, then Equation 2-2 is replaced by the following:

W_u = \mathrm{blkdiag}(R_u, \ldots, R_u), \quad W_{\Delta u} = \mathrm{blkdiag}(R_{\Delta u}, \ldots, R_{\Delta u}), \quad W_y = \mathrm{blkdiag}(Q, \ldots, Q) \qquad \text{(2-4)}

In this case, the block-diagonal matrices repeat p times, that is, once for each step in the prediction horizon.

You also have the option to use a combination of the standard and alternative forms. See Weights in the Model Predictive Control Toolbox reference documentation for more details.

Constraints

Next, consider the limits on inputs, input increments, and outputs along with the constraint ɛ≥ 0.

\begin{bmatrix} y_{\min}(1) - \varepsilon V_{\min}^{y}(1) \\ \vdots \\ y_{\min}(p) - \varepsilon V_{\min}^{y}(p) \\ u_{\min}(0) - \varepsilon V_{\min}^{u}(0) \\ \vdots \\ u_{\min}(p-1) - \varepsilon V_{\min}^{u}(p-1) \\ \Delta u_{\min}(0) - \varepsilon V_{\min}^{\Delta u}(0) \\ \vdots \\ \Delta u_{\min}(p-1) - \varepsilon V_{\min}^{\Delta u}(p-1) \end{bmatrix} \le \begin{bmatrix} y(1) \\ \vdots \\ y(p) \\ u(0) \\ \vdots \\ u(p-1) \\ \Delta u(0) \\ \vdots \\ \Delta u(p-1) \end{bmatrix} \le \begin{bmatrix} y_{\max}(1) + \varepsilon V_{\max}^{y}(1) \\ \vdots \\ y_{\max}(p) + \varepsilon V_{\max}^{y}(p) \\ u_{\max}(0) + \varepsilon V_{\max}^{u}(0) \\ \vdots \\ u_{\max}(p-1) + \varepsilon V_{\max}^{u}(p-1) \\ \Delta u_{\max}(0) + \varepsilon V_{\max}^{\Delta u}(0) \\ \vdots \\ \Delta u_{\max}(p-1) + \varepsilon V_{\max}^{\Delta u}(p-1) \end{bmatrix}

    Note   To reduce computational effort, the controller automatically eliminates extraneous constraints, such as infinite bounds. Thus, the constraint set used in real time may be much smaller than that suggested in this section.

Similar to what you did for the cost function, you can substitute u(k), Δu(k), y(k), and obtain

M_z z + M_\varepsilon \varepsilon \le M_{\lim} + M_v \begin{bmatrix} v(0) \\ \vdots \\ v(p) \end{bmatrix} + M_u u(-1) + M_x x(0) \qquad \text{(2-5)}

In this case, the matrices Mz, Mɛ, Mlim, Mv, Mu, and Mx are obtained from the upper and lower bounds and ECR values.

Unconstrained Model Predictive Control

The optimal solution is computed analytically:

z^* = -K_{\Delta u}^{-1} \left( \begin{bmatrix} r(1) \\ \vdots \\ r(p) \end{bmatrix}^T K_r + \begin{bmatrix} v(0) \\ \vdots \\ v(p) \end{bmatrix}^T K_v + u(-1)^T K_u + \begin{bmatrix} u_{\text{target}}(0) \\ \vdots \\ u_{\text{target}}(p-1) \end{bmatrix}^T K_{ut} + x(0)^T K_x \right)^T

and the model predictive controller sets Δu(k) = z_0^*, u(k) = u(k–1) + Δu(k).
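For intuition, the closed-form solve can be reproduced for a toy case with only the output-tracking term, all p moves free, and unit weights (Python sketch; illustrative, not the toolbox's K-matrix formulation):

```python
import numpy as np

# scalar plant x(k+1) = 0.9 x(k) + 0.1 u(k), y = x; p = 3, x(0) = 0
A, Bu, C, p = 0.9, 0.1, 1.0, 3
x0, u_prev = 0.0, 0.0
r = np.ones(p)                      # unit output reference

Sx = np.array([C * A**i for i in range(1, p + 1)])
cum = np.cumsum([C * A**h * Bu for h in range(p)])
Su = np.array([[cum[i - j] if j <= i else 0.0 for j in range(p)]
               for i in range(p)])

# unconstrained minimum of ||Sx x0 + Su dU - r||^2 (normal equations)
dU = np.linalg.solve(Su.T @ Su, Su.T @ (r - Sx * x0))
u_applied = u_prev + dU[0]          # receding horizon: apply first move only
print(dU)  # approx [10, -9, 0]: first move reaches the setpoint, second holds it
```

Only the first element of the optimal sequence is applied to the plant; the optimization is repeated from the new state at the next control interval.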
