
How to use LQR for setpoint tracking?

107 views (last 30 days)
Pedro Carvalho on 1 May 2024
Commented: Amirah Algethami on 27 Jun 2024 at 12:47
Initially I was using LQR to regulate the error dynamics, i.e., I computed the gains for de/dt = (A - BK)e, but this basically results in a PI controller, since the control law (with integral action) is u = -K*e + ki*z.

I have seen many sources teaching how to do it by augmenting the state vector with the integral of the error, expanding the matrices to A = [A 0; C 0], etc., but I still can't understand how that works. I am working on a first-order cruise control problem. From my observations the integral action is doing all the tracking, and the -K*x term is only getting in the way, trying to regulate the state x to zero. Here is my code:
% Parameters
X_u = 0;
X_uu = 22.7841;
m = 5037.7;
% Equilibrium point x0
x0 = 20*1.852/3.6; % Longitudinal linear velocity in m/s
u0 = X_u*x0 + X_uu*x0^2 % Thrust in N
% u0 = 2.4120e+03
% Linearize system around x0
A = -(X_u/m + 2*X_uu/m*x0);
B = 1/m;
C = 1;
D = 0;
% System order
n = size(A,1);
% Open loop system
sys_ol = ss(A,B,C,D);
openloopPoles = eig(A)
% openloopPoles = -0.0931
% Augmented system with the integral of the error
A_hat = [A zeros(n,1);...
-C 0 ];
B_hat = [B; 0];
Br = [zeros(n,1); 1];
C_hat = [1 zeros(1,n)];
D_hat = 0;
% Q R matrices
Q = 1000*(C'*C);
R = 0.5e-3;
% Feedback gain
K_hat = lqr(A_hat, B_hat, Q, R)
% K_hat = 1.0e+03 * [3.5893  -1.4142]
K = K_hat(1)
% K = 3.5893e+03
ki = -K_hat(2)
% ki = 1.4142e+03
% Scaling matrix
%Nbar = rscale(sys_ol,K)
% Closed loop system
AA = [A - B*K B*ki;-C 0];
BB = Br;
CC = [C 0];
DD = 0;
sys_cl = ss(AA, BB, CC, DD);
% Time vector
t = 0:0.1:30;
% Control input
u = u0*ones(size(t));
% Reference setpoint
r = 25*1.852/3.6*ones(size(t));
% Initial states
x0_hat = [x0, 0]
% x0_hat = [10.2889  0]
% Simulate the response of the system
[y,t,x_hat] = lsim(sys_cl,r,t,x0_hat);
figure
plot(t, y*3.6/1.852, 'k', 'LineWidth', 1.5,'Color','k')
xlabel('Time (seconds)')
ylabel('Speed (knots)')
title('Closed loop response with integrator')
grid on
% Control effort (Thrust)
u_effort = -K*x_hat(:,1) + ki*x_hat(:,2);
figure
plot(t, u_effort,'Color','k')
xlabel('Time (s)')
ylabel('Thrust (N)')
title('Control effort')

Accepted Answer

Sam Chak on 1 May 2024
Edited: Sam Chak on 1 May 2024
Your code appears to be error-free. However, the control action you implemented differs from the error-based PI control scheme that was mentioned.
To comprehend why integral control can track the setpoint, it is important to see that the plant is a 1st-order transfer function, Gp(s) = B/(s - A) = (1/m)/(s + 0.0931), which lacks an integrator (referred to as a Type-0 system).
The state-feedback control term -K*x will shape and enhance the transient behavior, resulting in the inner loop G_K(s) = B/(s - A + B*K). Nevertheless, a steady-state error will persist since the DC gain G_K(0) = B/(B*K - A) is not equal to 1.
To eliminate the steady-state error, introducing an integrator in the cascade compensation path is necessary, transforming the loop into a Type-1 system. This results in the open-loop transfer function (ki/s)*G_K(s). The closed-loop system can then be expressed as Gcl(s) = B*ki/(s^2 + (B*K - A)*s + B*ki). Consequently, the steady-state error is eliminated since the DC gain Gcl(0) = 1.
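As a quick numerical check (a minimal sketch reusing the linearized plant and the LQR gains computed in the code below; G_K and Gcl2 are just illustrative names), the DC gains with and without the integral action can be compared:
% Sketch: DC gain without vs. with the cascade integrator
A  = -0.0931;    B  = 1.9850e-04;       % linearized cruise-control plant
K  = 3.5893e+03; ki = 1.4142e+03;       % gains returned by lqr() below
Gp   = tf(B, [1 -A]);                   % Type-0 plant, B/(s - A)
G_K  = feedback(Gp, K);                 % state feedback only
Gcl2 = feedback(tf(ki, [1 0])*G_K, 1);  % with cascade integrator ki/s
dcgain(G_K)                             % about 2.46e-04, far from 1 -> steady-state error
dcgain(Gcl2)                            % exactly 1 -> the setpoint is tracked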
Update: In the code, the initial value of the Integrator output (2nd state variable, z0) should be set to a non-zero value. This is necessary because the initial velocity (x0) is non-zero. Therefore, the initial value of the Integrator output can be calculated by solving the control law and considering the initial error (r0 - x0).
% Parameters
X_u = 0;
X_uu = 22.7841;
m = 5037.7;
% Equilibrium point x0
x0 = 20*1.852/3.6; % Longitudinal linear velocity in m/s
u0 = X_u*x0 + X_uu*x0^2; % Thrust in N
% Linearize system around x0
A = -(X_u/m + 2*X_uu/m*x0)
% A = -0.0931
B = 1/m
% B = 1.9850e-04
C = 1;
D = 0;
% System order
n = size(A,1);
% Open loop system
sys_ol = ss(A,B,C,D);
Gp = tf(sys_ol)
% Gp = 0.0001985/(s + 0.09307), continuous-time transfer function
openloopPoles = eig(A);
% Augmented system with the integral of the error
A_hat = [A zeros(n,1);...
-C 0 ];
B_hat = [B;
0];
Br = [zeros(n,1); 1];
C_hat = [1 zeros(1,n)];
D_hat = 0;
% Q R matrices
Q = 1000*(C'*C);
R = 0.5e-3;
% Feedback gain
K_hat = lqr(A_hat, B_hat, Q, R)
% K_hat = 1.0e+03 * [3.5893  -1.4142]
K = K_hat(1);
ki = -K_hat(2);
% Scaling matrix
% Nbar = rscale(sys_ol,K)
% Closed loop system
AA = [A-B*K, B*ki;
-C, 0];
BB = Br;
CC = [C 0];
DD = 0;
sys_cl = ss(AA, BB, CC, DD);
A - B*K
% ans = -0.8056
B*ki
% ans = 0.2807
Gcl = tf(sys_cl)
% Gcl = 0.2807/(s^2 + 0.8056 s + 0.2807), continuous-time transfer function
% steady-state response
ssr = dcgain(Gcl)
% ssr = 1.0000
% Time vector
t = 0:0.1:30;
% Control input
u = u0*ones(size(t));
% Reference setpoint
r0 = 25*1.852/3.6;
r = r0*ones(size(t));
% Initial states
z0 = (u0 + K*x0)/ki + (r0 - x0);
x0_hat = [x0, z0];
% Simulate the response of the system
[y, t, x_hat] = lsim(sys_cl, r, t, x0_hat);
figure
plot(t, y*3.6/1.852, 'k', 'LineWidth', 1.5, 'Color', '#265EF5')
xlabel('Time (seconds)')
ylabel('Speed (knots)')
title('Closed loop response with integrator')
grid on
% Control effort (Thrust)
u_effort = -K*x_hat(:,1) + ki*x_hat(:,2);
figure
plot(t, u_effort, 'LineWidth', 1.5, 'Color', '#F15EF5'), grid on
xlabel('Time (s)')
ylabel('Thrust (N)')
title('Control effort')
3 Comments
Sam Chak on 1 May 2024
Good catch, @Pedro Carvalho. I overlooked that detail while focusing on the theoretical explanation. The issue arises because the initial value of the Integrator output (2nd state variable, z0) was set to zero. However, if the aircraft is already flying at 20 knots at the beginning, it is logical that z0 cannot be zero. I have made adjustments to my answer, and you can find the calculation of z0 in the corrected code.
Please let me know if the solution and explanation are satisfactory to you. 👍
Pedro Carvalho on 2 May 2024
Edited: Pedro Carvalho on 2 May 2024
Yes, makes sense. Thanks!


More Answers (1)

Joshua Levin Kurniawan on 1 May 2024
Hello Pedro. The LQR controller has a particular property here. Without integral action, the controller only tries to bring the system to a stable steady-state condition (i.e., the output does not necessarily reach the reference, y = r); in other words, we only regulate the system so that it behaves stably. This is different once the integral action is added, where the error e is driven to zero.
Let's say that we want to track a specific state, which we model as the output y for convenience. Then, in standard state-space form,
dx/dt = A*x + B*u,   y = C*x.
The control law can then be defined as u = -K_hat*[x; z]. Here, we want to drive the error e = r - y to zero, so we naturally append the integral of the error, z, to the state vector, forming the augmented state [x; z]. Remember that
dz/dt = r - y = r - C*x.
Hence, the augmented matrices can be defined as
A_hat = [A 0; -C 0],   B_hat = [B; 0].
Therefore, by applying the LQR method to the augmented system described above, you obtain the optimal full-state feedback gain matrix, including the integral action.
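A minimal MATLAB sketch of this procedure (the second-order plant, weights, and variable names below are purely illustrative assumptions, not taken from the original posts):
% Minimal sketch: LQR setpoint tracking via error-integral augmentation.
% The plant and weights below are arbitrary illustrative choices.
A = [0 1; -2 -3];  B = [0; 1];  C = [1 0];  D = 0;
n = size(A,1);                       % number of plant states
% Augment with the integral of the tracking error: dz/dt = r - C*x
A_hat = [A zeros(n,1); -C 0];
B_hat = [B; 0];
Br    = [zeros(n,1); 1];             % reference enters through dz/dt
% LQR on the augmented system
Q = blkdiag(10*eye(n), 100);         % weights on states and error integral
R = 1;
K_hat = lqr(A_hat, B_hat, Q, R);     % u = -K_hat*[x; z]
% Closed-loop simulation for a unit step reference
sys_cl = ss(A_hat - B_hat*K_hat, Br, [C 0], 0);
t = 0:0.01:10;
r = ones(size(t));
lsim(sys_cl, r, t)                   % output settles at r = 1 (zero steady-state error)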
1 Comment
Amirah Algethami on 27 Jun 2024 at 12:47
Hi @Joshua Levin Kurniawan, thanks for the comment, it is helpful. Do you have a MATLAB code example for that, please?

