How to solve a linear ill-conditioned system of equations

55 views (last 30 days)
KostasK on 7 Nov 2021
Commented: Matt J on 8 Nov 2021
Hi all, I have a typical linear problem A*x = b to solve, where A is a tridiagonal, ill-conditioned matrix. Using the solution x of this system, I calculate a vector R such that R = kv.*diff(x). I know the correct value of R, so in each attempt below I compute R from the solution I obtain and compare it against that known value.
I use two ways to work around the ill-conditioning:
  1. I use the typical LU decomposition; however, I still get a warning and the results are not that accurate, as you can see below.
  2. I perform the same process as in 1, but I condition U by adding the identity matrix, which appears to resolve the ill-conditioning. Moreover, I notice that the result R2 is offset from the known answer R by the constant R2(1), so after correcting for that this approach appears to give me the right answer.
clear ; clc
A = [663100000,-663100000,0,0,0,0,0,0,0,0,0,0;-663100000,1407100000.00000,-744000000,0,0,0,0,0,0,0,0,0;0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0,0,0;0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0,0;0,0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0;0,0,0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0;0,0,0,0,0,-744000000,1865000000.00000,-1121000000.00000,0,0,0,0;0,0,0,0,0,0,-1121000000.00000,3086000000.00000,-1965000000.00000,0,0,0;0,0,0,0,0,0,0,-1965000000.00000,51965000000.0000,-50000000000.0000,0,0;0,0,0,0,0,0,0,0,-50000000000.0000,50022150000.0000,-22150000,0;0,0,0,0,0,0,0,0,0,-22150000,111860000,-89710000;0,0,0,0,0,0,0,0,0,0,-89710000,89710000] ;
b = [0;97346.0079330557;97020.9494874157;97246.0491601032;97462.7637751785;97128.3959227217;97326.7056988805;0;0;0;0;-460875.388953404] ;
R = [0;97346.0079330557;194366.957420471;291613.006580575;389075.770355753;486204.166278475;583530.871977355;583530.871977355;583530.871977355;583530.871977355;583530.871977355] ;
kv = diag(A, -1) ;
% L U Decomposition
[L, U] = lu(A) ;
x1 = U\(L\b) ;
Warning: Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 1.113507e-16.
R1 = kv.*diff(x1) ;
% L U Decomposition with Preconditioning
x2 = (U + eye(length(A)))\(L\b) ;
R2 = kv.*diff(x2) ;
% Compare results. R is the correct solution of the problem
[R R1 R2-R2(1)]
ans = 11×3
1.0e+05 *
        0         0         0
   0.9735    0.9650    0.9735
   1.9437    1.9583    1.9437
   2.9161    2.9233    2.9161
   3.8908    3.8882    3.8908
   4.8620    4.8532    4.8620
   5.8353    5.8157    5.8353
   5.8353    5.8468    5.8353
   5.8353    5.7220    5.8353
   5.8353    4.6126    5.8353
Based on the above I have two questions:
  1. I cannot explain why my second approach works, and specifically why I have to compute R2-R2(1) to get the right result. Is this a coincidence for this case, or can I safely generalise it to the other, similarly ill-conditioned matrices like A that I have, so that I can solve similar problems correctly?
  2. In case the above is not a reliable approach, can you suggest a method to handle this problem? (I have looked into Tikhonov regularisation in regtools, but I cannot get the function to work as it keeps returning NaN; a basic sketch of the kind of damping I mean is below.)
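As a sketch only, using the A, b and kv defined above: the plain normal-equations form of Tikhonov damping, with an arbitrary lambda (this is not the regtools call that returns NaN for me), would look like
lambda = 1e-3*norm(A, 'fro');                        % damping parameter (arbitrary choice)
x_tik = (A'*A + lambda^2*eye(size(A,2))) \ (A'*b);   % damped least-squares solve
R_tik = kv.*diff(x_tik);                             % derived vector to compare against R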
Thanks for your help in advance.
  2 Comments
Matt J on 7 Nov 2021
I have a typical linear problem to solve , where A is a tridiagonal, illconditioned matrix. I have the correct answer to be R = kv.*diff(b)
That clearly isn't the case in your example. R does not solve the original equations:
A = [663100000,-663100000,0,0,0,0,0,0,0,0,0,0;-663100000,1407100000.00000,-744000000,0,0,0,0,0,0,0,0,0;0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0,0,0;0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0,0;0,0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0;0,0,0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0;0,0,0,0,0,-744000000,1865000000.00000,-1121000000.00000,0,0,0,0;0,0,0,0,0,0,-1121000000.00000,3086000000.00000,-1965000000.00000,0,0,0;0,0,0,0,0,0,0,-1965000000.00000,51965000000.0000,-50000000000.0000,0,0;0,0,0,0,0,0,0,0,-50000000000.0000,50022150000.0000,-22150000,0;0,0,0,0,0,0,0,0,0,-22150000,111860000,-89710000;0,0,0,0,0,0,0,0,0,0,-89710000,89710000] ;
b = [0;97346.0079330557;97020.9494874157;97246.0491601032;97462.7637751785;97128.3959227217;97326.7056988805;0;0;0;0;-460875.388953404] ;
R = [0;97346.0079330557;194366.957420471;291613.006580575;389075.770355753;486204.166278475;583530.871977355;583530.871977355;583530.871977355;583530.871977355;583530.871977355] ;
A*R-b
Error using *
Incorrect dimensions for matrix multiplication. Check that the number of columns in the first matrix matches the number of rows in the second matrix. To perform elementwise multiplication, use '.*'.
KostasK on 7 Nov 2021
Edited: KostasK on 7 Nov 2021
R is not supposed to solve the original equations; x1 and x2 are the vectors that do. R is a calculation that involves the solution to the linear equations, as you can see in the code that I posted: R = kv.*diff(x). There is a physical meaning behind it, where kv is the 'stiffness coefficient vector' and x is the displacement, so R is the spring-induced force.
So, to respond to your comment: R is not supposed to be the solution to the system. I have edited my question to make this clearer, and I have also fixed a small typo that I had. Sorry for the misunderstanding.


Accepted Answer

Matt J on 7 Nov 2021
Edited: Matt J on 8 Nov 2021
I don't think the equations should require regularization. The only vector in the nullspace of A is ones(12,1), but R is insensitive to this null vector because kv.*diff(x + c*ones(12,1)) = kv.*diff(x).
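As a quick numerical check of this (a sketch, assuming the A, b and kv from the question are in the workspace):
v = ones(12,1);                          % candidate null vector: every row of A sums to zero
norm(A*v)/norm(A,'fro')                  % ~0, so v is (numerically) in the nullspace
x = A\b;                                 % some particular solution (the warning is expected)
c = 12345;                               % arbitrary shift along the null direction
norm(kv.*diff(x) - kv.*diff(x + c*v))    % ~0: R = kv.*diff(x) is unaffected by the shift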
The discrepancies you are getting in R1 seem to be because either your expected values for R are incorrect or something is wrong in your A,b data, probably the final row. I suspect this because of the results I get below from lsqlin. Here I am solving for the least-squares solution to A*x = b subject to the constraint R = kv.*diff(x). As you can see, the final equation in A*x = b is not solved very well compared to the other equations.
A = [663100000,-663100000,0,0,0,0,0,0,0,0,0,0;-663100000,1407100000.00000,-744000000,0,0,0,0,0,0,0,0,0;0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0,0,0;0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0,0;0,0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0,0;0,0,0,0,-744000000,1488000000.00000,-744000000,0,0,0,0,0;0,0,0,0,0,-744000000,1865000000.00000,-1121000000.00000,0,0,0,0;0,0,0,0,0,0,-1121000000.00000,3086000000.00000,-1965000000.00000,0,0,0;0,0,0,0,0,0,0,-1965000000.00000,51965000000.0000,-50000000000.0000,0,0;0,0,0,0,0,0,0,0,-50000000000.0000,50022150000.0000,-22150000,0;0,0,0,0,0,0,0,0,0,-22150000,111860000,-89710000;0,0,0,0,0,0,0,0,0,0,-89710000,89710000] ;
b = [0;97346.0079330557;97020.9494874157;97246.0491601032;97462.7637751785;97128.3959227217;97326.7056988805;0;0;0;0;-460875.388953404] ;
R = [0;97346.0079330557;194366.957420471;291613.006580575;389075.770355753;486204.166278475;583530.871977355;583530.871977355;583530.871977355;583530.871977355;583530.871977355] ;
s = 1e6;
A = A/s; b = b/s; R = R/s;                 % rescale to tame the large magnitudes
kv = diag(A, -1);                          % sub-diagonal stiffness coefficients
[m, n] = size(A);
E = kv(:).*diff(eye(n));                   % E*x = kv.*diff(x), used as an equality constraint
opts = optimoptions('lsqlin','OptimalityTolerance',1e-20,'ConstraintTolerance',1e-20);
[x, fval] = lsqlin(A, b, [], [], E, R, [], [], [], opts); fval   % min ||A*x-b||^2 s.t. E*x = R
Minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
fval = 0.0150
equationError=A*x-b;
equationError([1:4,10:12])
ans = 7×1
        0
  -0.0000
   0.0000
   0.0000
  -0.0000
        0
  -0.1227
  2 Comments
KostasK on 8 Nov 2021
You are right, the last value of the vector b should be different; this is the correct one:
b = [0;97346.0079330557;97020.9494874157;97246.0491601032;97462.7637751785;97128.3959227217;97326.7056988805;0;0;0;0; -5.835308719773553e+05 ];%-460875.388953404] ;
In that case, the matrix A still remains ill-conditioned. Does that mean that I should solve using lsqlin?
Matt J on 8 Nov 2021
I don't think you need to. I think you can just do the calculation as:
R=kv.*diff(pinv(A)*b)
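For instance (a sketch, assuming the original unscaled A, R and kv from the question, with the corrected last entry of b from the comment above; lsqminnorm(A,b) would be an alternative to pinv on newer releases):
b(end) = -5.835308719773553e+05;   % corrected final entry of b
x3 = pinv(A)*b;                    % minimum-norm least-squares solution
R3 = kv.*diff(x3);                 % derived force vector
[R R3]                             % compare against the expected R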


More Answers (0)
