LMIs and LMI Problems

A linear matrix inequality (LMI) is any constraint of the form

A(x) := A_0 + x_1 A_1 + ... + x_N A_N < 0        (3-1)

where

  • x = (x_1, ..., x_N) is a vector of unknown scalars (the decision or optimization variables)

  • A_0, ..., A_N are given symmetric matrices

  • < 0 stands for "negative definite," i.e., the largest eigenvalue of A(x) is negative

Note that the constraints A(x) > 0 and A(x) < B(x) are special cases of Equation 3-1 since they can be rewritten as –A(x) < 0 and A(x) – B(x) < 0, respectively.
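
The definiteness test above is easy to carry out numerically. The following MATLAB sketch uses arbitrary illustrative data (A0, A1, A2, and x are made up for the example): it assembles A(x) as in Equation 3-1 and checks that its largest eigenvalue is negative.

    % Assemble A(x) = A0 + x1*A1 + x2*A2 for illustrative symmetric data
    A0 = [-2 0; 0 -2];  A1 = [1 0; 0 0];  A2 = [0 1; 1 0];
    x  = [0.5; 0.3];
    Ax = A0 + x(1)*A1 + x(2)*A2;
    max(eig(Ax)) < 0     % true exactly when A(x) is negative definite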

The LMI of Equation 3-1 is a convex constraint on x: since A(·) is affine in x, A(y) < 0 and A(z) < 0 imply that A((y+z)/2) = (A(y) + A(z))/2 < 0 (the sketch after the following list checks this identity numerically). As a result,

  • Its solution set, called the feasible set, is a convex subset of R^N

  • Finding a solution x to Equation 3-1, if any, is a convex optimization problem.
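
The convexity argument rests on A(·) being affine, so its value at the midpoint of y and z is the average of the endpoint values, and an average of two negative definite matrices is again negative definite. A minimal sketch with random illustrative data verifies the identity:

    % Verify A((y+z)/2) = (A(y)+A(z))/2 for an affine A with random data
    N = 3;  n = 4;
    A = cell(N+1,1);
    for k = 1:N+1
        M = randn(n);  A{k} = M + M';        % random symmetric A0,...,AN
    end
    Afun = @(x) A{1} + x(1)*A{2} + x(2)*A{3} + x(3)*A{4};
    y = randn(N,1);  z = randn(N,1);
    norm(Afun((y+z)/2) - (Afun(y) + Afun(z))/2)   % ~0 up to rounding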

Convexity has an important consequence: even though Equation 3-1 has no analytical solution in general, it can be solved numerically with guarantees of finding a solution when one exists. Note that a system of LMI constraints can be regarded as a single LMI since

A_1(x) < 0,  ...,  A_K(x) < 0

is equivalent to

A(x) := diag(A_1(x), ..., A_K(x)) < 0

where diag(A_1(x), ..., A_K(x)) denotes the block-diagonal matrix with A_1(x), ..., A_K(x) on its diagonal. Hence multiple LMI constraints can be imposed on the vector of decision variables x without destroying convexity.
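
In MATLAB, this block-diagonal stacking can be done with blkdiag. The sketch below uses two illustrative 2-by-2 blocks standing for A_1(x) and A_2(x) already evaluated at some x; the stacked matrix is negative definite exactly when every block is.

    % Stack two LMI blocks into the single constraint diag(A1(x), A2(x)) < 0
    A1x = [-1 0; 0 -2];        % stands for A1(x)
    A2x = [-3 1; 1 -2];        % stands for A2(x)
    Ax  = blkdiag(A1x, A2x);   % diag(A1(x), A2(x))
    max(eig(Ax)) < 0           % holds iff both blocks are negative definite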

In most control applications, LMIs do not naturally arise in the canonical form of Equation 3-1, but rather in the form

L(X_1, ..., X_n) < R(X_1, ..., X_n)

where L(·) and R(·) are affine functions of some structured matrix variables X_1, ..., X_n. A simple example is the Lyapunov inequality

A^T X + X A < 0        (3-2)

where the unknown X is a symmetric matrix. Defining x_1, ..., x_N as the independent scalar entries of X, this LMI could be rewritten in the form of Equation 3-1. Yet it is more convenient and efficient to describe it in its natural form Equation 3-2, which is the approach taken in the LMI Lab.
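
As a concrete illustration, the sketch below enters Equation 3-2 in its natural matrix form with the LMI Lab functions setlmis, lmivar, lmiterm, getlmis, and feasp. The particular matrix A and the extra bound X > I (added only to exclude the trivial solution X = 0, since Equation 3-2 is homogeneous in X) are illustrative choices, not part of Equation 3-2 itself.

    % Find X > I satisfying the Lyapunov inequality A'*X + X*A < 0
    A = [-1  2;
          0 -3];                 % an arbitrary stable matrix for illustration

    setlmis([]);                 % start describing a new LMI system
    X = lmivar(1,[2 1]);         % X: 2-by-2 full symmetric matrix variable

    lmiterm([1 1 1 X],1,A,'s');  % LMI #1: X*A + A'*X < 0 ('s' symmetrizes)
    lmiterm([-2 1 1 X],1,1);     % LMI #2, right-hand side: X
    lmiterm([2 1 1 0],1);        % LMI #2, left-hand side: identity, so X > I

    lmis = getlmis;
    [tmin,xfeas] = feasp(lmis);  % tmin < 0 indicates strict feasibility
    Xsol = dec2mat(lmis,xfeas,X) % recover the matrix value of X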
