MATLAB Answers


How to perform a truncated SVD

Asked by L on 28 Mar 2019
Latest activity: commented on by Bjorn Gustavsson on 29 Mar 2019
I am trying to solve a system of equations using a truncated SVD (m_trunc) after doing a regular SVD. Using the Picard plot, I have a new Sk with k=10. For anyone familiar with SVD and truncated SVD: is this the proper way to perform a truncated SVD in MATLAB? Any input appreciated, thanks!
%data
y=[0.0250 0.0750 0.1250 0.1750 0.2250 0.2750 0.3250 0.3750 0.4250 0.4750 0.5250 0.5750 0.6250 0.6750 0.7250 0.7750 0.8250 0.8750 0.9250 0.9750]';
d=[0.2388 0.2319 0.2252 0.2188 0.2126 0.2066 0.2008 0.1952 0.1898 0.1846 0.1795 0.1746 0.1699 0.1654 0.1610 0.1567 0.1526 0.1486 0.1447 0.1410]';
j=20;
dx=(1/20);
x=y;
x=x';
%create the G matrix
G=zeros(j); %preallocate
for k=1:j
    for i=1:j
        G(k,i)=x(k)*exp(-x(k)*y(i))*dx;
    end
end
d_sum=sum(G);
%SVD
[U,S,V] = svd(G);
%the Moore-Penrose pseudoinverse
m=pinv(G)*d;
s_di=diag(S);
model=V*((U'*d)./diag(S)); %testing
con=cond(G); %ill-conditioned
%Picard plot
figure(1)
clf
bookfonts %font-setting helper from the Aster/Borchers/Thurber textbook; omit if unavailable
semilogy(diag(S),'ko');
xlabel('i')
ylabel('s_i')
%truncated
m_trunc=V(:,1:10)*((U(:,1:10)'*d)./s_di(1:10,1));


1 Answer

Answered by Bjorn Gustavsson on 28 Mar 2019
Accepted Answer

That is an ugly, ugly G matrix. When solving inverse problems, the truncation point should not be chosen close to double-precision zero. The crucial thing to understand is why we have to truncate (or use Tikhonov damping) at all: your data will always contain some noise, and we need to control the impact of that noise on the solution. In the ideal case we have:
d_ideal = G*m_true;
[U,S,V] = svd(G);
with U and V being orthonormal matrices spanning (parts of) the data space and the model space, respectively. Therefore we can do (stepwise, for clarity):
d = (U*S*V')*m_true;
% U'*d = (U'*U*S*V')*m_true; % Where U'*U == eye(numel(d)), if I got the numbers right
% inv(S)*U'*d = (inv(S)*S*V')*m_true; % inv(S)*S is also nice...
% V*diag(1./diag(S))*U'*d = (V*V')*m_true;
m_est = V*diag(1./diag(S))*U'*d;
But since your data is not that ideal you will have:
d_ideal = G*m_true;
d_real = (U*S*V')*m_true + noise; % where noise is random noise with some distribution and an uncomfortable level
m_est = V*diag(1./diag(S))*U'*d_ideal + V*diag(1./diag(S))*U'*noise;
Since U is an orthonormal matrix, |U'*noise| will be of similar size to |noise|, and typically also rather evenly distributed over all components (I'm not sure this is true in a strict mathematical sense, but in my practical experience it is never much better than that). A direct consequence of this is that:
diag(1./diag(S))*(U'*noise);
leads to a significant amplification of noise in the components that are divided by the very smallest singular values. (Even in the case of no explicit noise, you will never have much better relative accuracy in your data than about 1e-16, due to double-precision representation.)
And that's why we have to regularize.
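To make the amplification concrete, here is a small sketch (assuming the U, S and d from the script above are in the workspace; the noise level 1e-6 is made up for illustration):

```matlab
% Sketch: how small singular values amplify noise (assumes U, S, d exist).
noise = 1e-6*randn(size(d));      % made-up measurement noise
w_sig = U'*d;                     % data coefficients in the U basis
w_noi = U'*noise;                 % noise coefficients, |w_noi| ~ |noise|
semilogy(abs(w_sig./diag(S)),'ko'); hold on
semilogy(abs(w_noi./diag(S)),'rx') % division by s_i blows up where s_i is tiny
```

Where the red crosses climb above the black circles, the noise dominates the solution component, and that is where truncation has to step in.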
Your script is OK, but your truncation point is way over-ambitious (in my (maybe not so) humble opinion). I suggest that you do something like this:
truncSVD = @(U,S,V,p) V(:,1:p)*diag(1./diag(S(1:p,1:p)))*U(:,1:p)'; % truncated pseudoinverse of G
for i1 = 1:10
    m_est = truncSVD(U,S,V,i1)*d; % apply it to the data to get the model estimate
    ph(i1) = plot(m_est);
    hold on
    pause % step your way through and look at the solution term by term
end
Then do the same but add a little bit of random noise. In your very ugly case that won't show the problem as clearly, because your singular values decrease roughly exponentially and are all smaller than 1. For an inverse problem to be nice to work with, at least a couple of singular values should be larger than 1, or at least of similar amplitude; above that number they can start to fall off, and there you have your truncation point. The condition number of the unregularized forward matrix is not all that informative, but in this case you will have a "truncated condition number" of S(1,1)/S(p,p), and with p = 10 you get TCN = 3.0641e+16, which is not much better than cond(G). I hope you get "not as degenerate" cases to work with next.
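The truncated condition number is cheap to tabulate for every candidate truncation point; a minimal sketch, assuming the S from the svd(G) call above:

```matlab
% Sketch: truncated condition number S(1,1)/S(p,p) as a function of p.
s   = diag(S);
TCN = s(1)./s;            % TCN(p) for truncation point p
semilogy(TCN,'k.-')
xlabel('p'), ylabel('S(1,1)/S(p,p)')
% choose the largest p for which TCN stays below the inverse relative
% noise level of the data (e.g. around 1e2 for 1% noise)
```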
HTH

3 Comments

L on 28 Mar 2019
Thank you. I am working through your explanation now. I am entirely new to this so I appreciate your in-depth help! If helpful, this was the problem:
[Attached screenshots of the problem statement: Screen Shot 2019-03-28 at 1.19.55 PM.png and Screen Shot 2019-03-28 at 1.19.59 PM.png]
L on 28 Mar 2019
If x=y, have I created my (ugly) G matrix properly?
Bjorn Gustavsson on 29 Mar 2019
Yeah, that looks OK. If you want to be very rigorous about it, you might sub-divide the cells you have in the x-dimension, such that you calculate your integral over a much larger number of x-values. That would still leave you with the same problem, but with a better discretization. It would make G an [n_y x n_x] matrix with n_x much larger, so no longer square. But (again: in my not so humble opinion) it is in general preferable to have an overdiscretized G and an "underdetermined" (it is not underdetermined, it is mixed-determined!) system of equations for m; you then let the SVD, and your choice of Tikhonov damping or truncation, handle the singular values that are zero or small. That way your "measurement matrix" (G is in general a description of the entire geometry, physics and set-up of your measurements) will automagically give you the natural resolution of the problem at hand. If one always keeps G square, one does an implicit regularization by underdiscretizing the problem.
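A sketch of the overdiscretized, rectangular G described above, assuming the same kernel x*exp(-x*y) from the question and an arbitrary n_x = 200 sub-divided x-cells:

```matlab
% Sketch: overdiscretized forward matrix (n_x = 200 is an arbitrary choice).
n_y = 20; n_x = 200;                % many more model cells than data points
y  = ((1:n_y)'-0.5)/n_y;            % same y grid as in the question
xx = ((1:n_x)'-0.5)/n_x;            % finer x grid
dx = 1/n_x;
G  = zeros(n_y,n_x);
for i = 1:n_y
    for k = 1:n_x
        G(i,k) = xx(k)*exp(-xx(k)*y(i))*dx;  % same kernel, finer discretization
    end
end
% G is [n_y x n_x]; svd(G) still works, with at most n_y nonzero singular values
```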
HTH



