How to represent gray scale images as affine subspaces?

4 views (last 30 days)
M
M on 23 Oct 2023
Edited: M on 11 Dec 2023
How to represent gray scale images as affine subspaces?
4 Comments
M
M on 24 Oct 2023
Hi @Walter Roberson, do you have any idea, please?
Walter Roberson
Walter Roberson on 27 Oct 2023
This is not a topic I know anything about.


Answers (4)

Image Analyst
Image Analyst on 23 Oct 2023
I don't know what you mean. What's the context? What do you mean by "model"? What do you mean by "affine subspaces"? Do you just want to warp or spatially transform the image?
If you have any more questions, then attach your data and code to read it in with the paperclip icon after you read this:
1 Comment
M
M on 23 Oct 2023
Edited: M on 23 Oct 2023
@Image Analyst @Matt J In the attached paper they represent the images as affine subspaces. I am asking generally whether there is a popular method/code for representing an image in an affine subspace.
My data is huge; attached is a sample.



Matt J
Matt J on 23 Oct 2023
Edited: Matt J on 23 Oct 2023
One way, I suppose, would be to train an affine neural network with the Deep Learning Toolbox, e.g.,
layers = [imageInputLayer([120,160,1])
          convolution2dLayer([120,160],N)    % full-image filters: each of the N channels is one affine functional of the image
          regressionLayer];
XTrain = images;                             % 120x160x1xnumImages training array
YTrain = zeros(1,1,N,size(XTrain,4));        % zero targets, so the network is pushed toward A*x+b ~ 0
options = trainingOptions('adam');           % placeholder training options
net = trainNetwork(XTrain, YTrain, layers, options);
but you would need a truly huge number of images and good regularization for the training to be well-posed. You should probably look at augmentedImageDatastore.
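For example, a hedged sketch of the datastore route (assuming the same layers, N, and 120x160x1 images array as above; the augmentation settings, options, and target format are placeholders):
augmenter = imageDataAugmenter('RandXTranslation',[-3 3], 'RandYTranslation',[-3 3]);   % placeholder augmentations
YTrain    = zeros(size(images,4), N);                        % zero regression targets, one row per image
ds        = augmentedImageDatastore([120 160], images, YTrain, 'DataAugmentation', augmenter);
options   = trainingOptions('adam', 'MaxEpochs', 10);        % placeholder training options
net       = trainNetwork(ds, layers, options);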
34 Comments
M
M on 23 Nov 2023
Hi @Matt J, I have a question, please: why did you decide here to use regressionLayer in your network? And what is the advantage of using this layer? Thanks.
Matt J
Matt J on 23 Nov 2023
As opposed to what? What else might we have used?



Matt J
Matt J on 27 Oct 2023
Edited: Matt J on 27 Oct 2023
Well, in general, we can write the estimation of A, b as the norm minimization problem min over A, b of sum_i ||A*x_i + b||^2, taken over all vectorized training images x_i, with a normalization constraint on [A, b] to exclude the trivial zero solution.
If X fits in RAM, you could just use svd() to solve it:
N = 14;                                      % number of affine constraints (rows of A)
X = images(:,:)';                            % one vectorized image per row (e.g. reshape(images,[],size(images,4)).' for a 4-D array)
vn = vecnorm(X,inf,1);                       % per-column scaling to improve conditioning
[~,~,V] = svd([X./vn, ones(height(X),1)], 0);
Abt = V(:,end+1-N:end)./[vn,1]';             % last N right singular vectors; undo the scaling (the ones column is unscaled)
A = Abt(1:end-1,:)';
b = Abt(end,:)';
s = vecnorm(A,2,2);
[A,b] = deal(A./s, b./s);                    % rescale so each row of A has unit 2-norm
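A small usage note (my addition, not from the thread): any image that lies exactly on the fitted affine subspace satisfies A*x + b = 0, so the residual norm gives a simple indicator of how far a new image is from the subspace. Here testImage is a hypothetical image vectorized the same way as the rows of X:
x        = double(testImage(:));    % vectorized test image
residual = A*x + b;                 % approximately zero for images on the fitted affine subspace
howFar   = norm(residual);          % larger value = further from the subspace (not an exact Euclidean distance)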
43 Comments
Torsten
Torsten on 5 Dec 2023
Edited: Torsten on 5 Dec 2023
So you say you have 3 classes for your images that are known right from the beginning.
Say you determine, for each of the 49000 images, the affine subspace that best represents it, and you compute the mutual distance, via SVD, between these 49000 affine subspaces (which would give a 49000x49000 matrix). Now say you cluster your images into 3 (similar) clusters derived from that distance matrix.
The question you should ask yourself is: would these three clusters resemble the 3 classes that you think the images belong to right from the beginning?
If the answer is no, and if you consider the 3 classes from the beginning as fixed, then the distance measure via SVD is not adequate for your application.
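For what it's worth, a minimal sketch of that test, assuming a precomputed symmetric distance matrix D (nImages-by-nImages, zero diagonal) and a vector trueLabels holding the 3 known classes; both names are placeholders, and linkage/cluster/crosstab come from the Statistics and Machine Learning Toolbox:
Z   = linkage(squareform(D), 'average');   % hierarchical clustering directly on the pairwise distances
idx = cluster(Z, 'maxclust', 3);           % cut the dendrogram into 3 clusters
crosstab(idx, trueLabels)                  % contingency table: clusters vs. known classes (labels only match up to permutation)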
M
M on 6 Dec 2023
Edited: M on 6 Dec 2023
@Matt J Unfortunately the pre-normalization and increasing N didn't improve the performance.
I am sure the problem is in the indicator (the norm); other indicators, such as the Grassmann kernel and distance, may provide better results, because that has been shown in the literature. But I am still working out how to apply them.
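In case it's useful, here is a minimal sketch of the standard principal-angle construction of Grassmann distances/kernels (my own sketch, not taken from the paper); Q1 and Q2 are assumed to be orthonormal bases (columns) of two subspaces of the same dimension, e.g. two Basis matrices as in the answer below:
theta = acos(min(1, svd(Q1.'*Q2)));   % principal angles between span(Q1) and span(Q2)
dGeo  = norm(theta);                  % geodesic (arc-length) Grassmann distance
dProj = norm(sin(theta));             % projection-metric distance (one common convention)
kProj = norm(Q1.'*Q2,'fro')^2;        % projection kernel value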



Matt J
Matt J on 5 Dec 2023
Edited: Matt J on 5 Dec 2023
So how can I get the directions and origin so I can compute the Grassmann distance?
Here is another variant that gives a Basis/Origin description of the subspace.
N = 100;                                   % estimated upper bound on the subspace dimension
X = reshape(XTrain,[], size(XTrain,4));    % one vectorized image per column (no transpose this time)
mu = mean(X,2);                            % mean image
X = X - mu;                                % center the data
[Q,~,~] = qr(X, 0);                        % economy QR with column pivoting
Basis = Q(:,1:N);                          % orthonormal basis direction vectors
Origin = mu - Basis*(Basis.'*mu);          % origin, orthogonalized against the basis
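A small usage sketch (my addition): with Basis and Origin in hand, the distance of an image vector x to the affine subspace is the norm of the component of x - Origin that lies outside span(Basis). Here testImage is a hypothetical image vectorized like the columns of X:
x = double(testImage(:));            % vectorized test image
v = x - Origin;                      % shift so the subspace passes through the origin
r = v - Basis*(Basis.'*v);           % component of v orthogonal to the subspace
distToSubspace = norm(r);            % Euclidean distance from x to the affine subspace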
18 Comments
Matt J
Matt J on 7 Dec 2023
Edited: Matt J on 7 Dec 2023
Is there a criterion here for selecting the N final columns of V in its SVD, as you suggested for the N initial columns of Q?
The dimension of the subspace that you fit to X will be NumPixels-N, where NumPixels is the total number of pixels in one image.
Also, regarding the Basis/Origin description, can you give me an idea of how we usually use this information for classification?
No, pursuing it was your idea. You said it would help you compute the Grassmann distance, whatever that is.
M
M on 11 Dec 2023
Edited: M on 11 Dec 2023
Dear @Matt J, thank you for your suggestions and clarifications.
I have reached the conclusion that representing the images as they are, as vectors in a subspace, is not a good idea!
Especially if the test images are not typical of the training set (stacking the images as vectors causes problems!).
I think I have to do some feature extraction on a region of interest first, and then represent the features as subspaces (whether as matrices or vectors). I am still thinking about how to do that.

