Convert 2d image to 3d
Dekel Mashiach
6 Jun 2022
Hi,
I'm trying to convert a 2D image to 3D with the intrinsic matrix. I expect to get the distance along the x-axis in millimeters and do not understand why it does not work. I hope someone can help.
Accepted Answer
Matt J
6 Jun 2022
Edited: 6 Jun 2022
You cannot recover a 3D point from only a single 2D projection of that point. This is because every point [X,Y,Z] along the line of sight from the camera to [u,v] also maps to [u,v] according to the equation you've shown. Because it is a many-to-one mapping from [X,Y,Z] to [u,v], you cannot invert the equation uniquely.
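For illustration, a minimal MATLAB sketch of this ambiguity, using an assumed intrinsic matrix and a camera placed at the origin; two different points on the same line of sight project to the same pixel:
K = [650 0 320; 0 650 240; 0 0 1];      % assumed intrinsic matrix
P = K*eye(3,4);                         % camera at the origin, identity extrinsics
X1 = [100; 50; 1000];                   % a 3D point (mm)
X2 = 2.5*X1;                            % a different 3D point on the same line of sight
uv1 = P*[X1;1]; uv1 = uv1(1:2)/uv1(3)   % projects to the same pixel as...
uv2 = P*[X2;1]; uv2 = uv2(1:2)/uv2(3)   % ...this one, so depth cannot be recovered from [u,v] alone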
35 Comments
Dekel Mashiach
6 Jun 2022
Edited: 7 Jun 2022
Do you have any idea how I can do this?
Matt J
6 Jun 2022
As we discussed in previous posts, you need a view of [X,Y,Z] from the perspective of an additional camera. Then, you can use triangulate.
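For example, with a calibrated stereo pair this could look like the following sketch (a stereoParams object from stereo calibration and the matched pixel coordinates in the second view are assumed):
point1 = [686 722];    % pixel location of the point in camera 1
point2 = [640 720];    % the same physical point as seen by camera 2 (assumed)
worldPoint = triangulate(point1, point2, stereoParams);   % 3D point in world units (e.g. mm)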
Matt J
6 Jun 2022
Alternatively, if you know that your 3D point lies in a particular 3D plane, then you will have an additional equation which can be used together with your projection equation to solve uniquely for the 3D point.
Dekel Mashiach
6 Jun 2022
Edited: 6 Jun 2022
Sorry, I don't understand the additional equation. How can I use it?
(I have to use only one camera, so triangulate can't help.)
Matt J
6 Jun 2022
Edited: 6 Jun 2022
The 2D point [u;v;1] backprojects to a 3D line, which as we have said contains many points, and makes the solution non-unique. If C is the camera center, then the line is given by,
L(s) = C + s*K^-1*[u;v;1]
However, if you have additional information that the 3D point you're looking for lies in a particular 3D plane, then you just have to find the intersection of the backprojected line L(s) with that plane, and it will give you a unique point.
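As a sketch (all numeric values assumed), intersecting the backprojected line with a plane dot(n,X) = d could look like this:
K  = [650 0 320; 0 650 240; 0 0 1];   % assumed intrinsic matrix (K*x convention)
C  = [0; 0; 0];                       % camera center (assumed at the world origin)
uv = [686; 722];                      % pixel of interest
d0 = K\[uv; 1];                       % direction of the backprojected ray, K^-1*[u;v;1]
n  = [0; 1; 0]; d = 500;              % assumed ground plane 500 mm below the camera (y axis pointing down)
s  = (d - dot(n,C))/dot(n,d0);        % solve dot(n, C + s*d0) = d for s
XYZ = C + s*d0                        % the unique 3D point on that plane, in mm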
Dekel Mashiach
7 Jun 2022
Edited: 7 Jun 2022
And if I set the 3D coordinates (X, Y, Z) in millimeters, will I get the 2D point in pixels?
Dekel Mashiach
7 Jun 2022
My plan was to find the factor s once by working in the reverse direction, and then substitute it back in order to find the 3D point as I wanted at first.
C is the camera center; does that mean if my image is 720*1280, then C is [360;640]?
Dekel Mashiach
7 Jun 2022
I try to get the 2D coordinates by k*[x;y;z].
My line lies only along x and is 400 millimeters, so [400;0;0].
Dekel Mashiach
7 Jun 2022
Edited: 7 Jun 2022
My project is an autonomous vehicle that tracks a route using an image (the green line in the image at the top of the post). For my controller I need the route the vehicle should travel, so I am trying to convert from 2D to 3D.
I realized I could not get the 3D coordinates as I wanted at first, so I am trying the reverse: I actually measured the length of the line, placed it in the equation as [400; 0; 0], and am trying to recover the 2D point times s.
Matt J
7 Jun 2022
Going from 3D to 2D is a much easier thing. You just apply the equation in your posted question to map from [X,Y,Z] to [u,v]. Everything on the right hand side of the equation is known, I assume.
Dekel Mashiach
8 Jun 2022
When I plug in 400 [mm], I expect to get the pixel coordinates on the green line in the image I posted at the beginning (y = 722, x = 686).
Matt J
8 Jun 2022
Edited: 8 Jun 2022
Multiplying by the 3x3 intrinsic matrix K does not map 3D to 2D coordinates. Notice in the camera projection equation in your post, there is a 3x4 matrix of extrinsic parameters as well. Also, when you apply the camera matrix, it is with homogeneous coordinate vectors. You need to normalize the final vector component to 1 to retrieve the inhomogeneous u,v values, e.g.,
P=rand(3,4); %fake camera matrix
XYZ=[400;0;0] %3D coordinates (inhomogeneous)
XYZ = 3×1
400
0
0
uv_hom=P*[XYZ;1] %apply camera projection - gives homogeneous 2D point
uv_hom = 3×1
42.6613
211.8373
126.0988
uv=uv_hom(1:2)/uv_hom(3) %normalize to obtain inhomogeneous 2D point
uv = 2×1
0.3383
1.6799
Matt J
8 Jun 2022
Edited: 8 Jun 2022
I doubt your R_T is really eye(3,4). If that were true, the point XYZ=[400;0;0] would be somewhere to the far right of, and level with, the camera. You should probably use the extrinsics command to get the actual extrinsics of your camera. Also, you should consider using worldToImage and pointsToWorld to map between 2D and 3D. Since they take as input many of the objects that are already available inside your cameraParameters object, they might be easier to use.
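As a sketch of that workflow (it assumes you still have the calibration pattern's image points and their known world coordinates; calibImagePoints and calibWorldPoints are placeholder names):
[R, t] = extrinsics(calibImagePoints, calibWorldPoints, cameraParams2);  % actual extrinsics
uv = worldToImage(cameraParams2, R, t, [400 0 0])    % 3D world point (mm) -> pixel coordinates
XY = pointsToWorld(cameraParams2, R, t, [686 722])   % pixel -> 3D point on the Z = 0 world plane (mm)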
Dekel Mashiach
10 Jun 2022
Hey, do you know why the principal point in my intrinsic matrix is at (3,1), (3,2) and not (1,3), (2,3)?
Matt J
10 Jun 2022
Edited: 10 Jun 2022
Yes, because MATLAB toolboxes like to organize coordinate vectors as row vectors and to apply transformations to them by multiplying the transformation matrices on the right (i.e., x'*K' instead of K*x). Therefore, the intrinsic matrix as generated by MATLAB is the transpose of the way you normally see it in textbooks.
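As an illustration (using example numbers in place of your actual intrinsics), the two conventions give the same result:
K_matlab = [648.7 0 0; 0 651.9 0; 311.9 242.7 1];   % example of what MATLAB returns
K_textbook = K_matlab';                             % the familiar upper-triangular form
x = [100; 50; 1];                                   % a homogeneous point (column vector)
y1 = K_textbook*x                                   % textbook convention: K*x
y2 = (x'*K_matlab)'                                 % row-vector convention x'*K', same result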
Did my previous remarks resolve your question? If so, please click Accept on the answer.
Dekel Mashiach
10 Jun 2022
Edited: 11 Jun 2022
You have helped me a lot and I am very thankful to you; the patience you have shown in helping is not taken for granted! But unfortunately I still cannot solve it and I am really stuck... I found that Z = 262 (u = f*X/Z, v = f*Y/Z), but I still can't write the code in a way that works. If you can help me I would be happy...
XYZ = [400;0;0];
R_T = eye(3,4);
k = cameraParams2.Intrinsics.IntrinsicMatrix
k =
648.7306 0 0
0 651.8988 0
311.8627 242.7315 1.0000
uv = k*R_T*[XYZ;1];
uv_n = uv(1:2)/uv(3);
Matt J
11 Jun 2022
Yes, but you appear not to have implemented any of my suggestions. R_T=eye(3,4) is wrong and k needs to be transposed. Use extrinsics() to get the real extrinsic data and use worldToImage() to apply it.
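For reference, a sketch of your snippet with those two fixes applied; here R and t stand for the rotation and translation returned by extrinsics(), which follow the row-vector convention and are therefore transposed:
XYZ = [400; 0; 0];
K   = cameraParams2.Intrinsics.IntrinsicMatrix';   % transpose to the textbook K*x convention
R_T = [R', t'];                                    % 3x4 extrinsic matrix in column-vector form
uv_hom = K*R_T*[XYZ; 1];                           % homogeneous 2D point
uv  = uv_hom(1:2)/uv_hom(3)                        % pixel coordinates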
Dekel Mashiach
11 Jun 2022
I need to use the equation I attached at the beginning of the post. Can you please show me how I should write down the equation and position the data correctly? (Assuming that R_T = eye(3,4).)
Dekel Mashiach
11 Jun 2022
Edited: 11 Jun 2022
So basically, from the u, v I get, I have to subtract the principal point and then divide by Z?
I have another problem: when I do k', it produces Inf and NaN after normalization. Do you know how to solve that?
Dekel Mashiach
11 Jun 2022
Given that I placed the correct R_T, after normalizing, what is the next step I need to do with u, v?
More Answers (0)