# How can I get model coordinates from pixel coordinates, using an uncalibrated camera, if I know the model coordinates of a few pixels?

Eric Smith on 26 Jul 2018
Commented: Florian Morsch on 30 Jul 2018
I have series of images of a hemispherical model from 4 different camera locations. Images from each camera location were taken with the same camera at different times. I know the exact dimensions of the hemispherical model. I have several fiduciary marks on the model that are visible in the images and whose real world coordinates are known. I also know the orientation and location of my 4 different camera positions.
So my question is, is there a way I can transform the pixel coordinates to real world coordinates with the information at hand and without any preliminary camera calibration?


### Accepted Answer

Florian Morsch on 27 Jul 2018

Let's assume you have a point P = [X; Y; Z] in camera coordinates. Its normalized camera coordinates are x' = X/Z and y' = Y/Z. You then project it onto the image plane with u = fx * x' + cx and v = fy * y' + cy, where fx, fy are the focal lengths in pixels and cx, cy are the coordinates of the principal point in the image. You can reverse the math to get your coordinates back, but: this is without any distortion taken into account, and it is only the projection of a simple pinhole camera. If you want to account for distortion, the whole process gets a lot more complicated.
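A minimal sketch of that projection and its inverse, with made-up intrinsics (fx, fy, cx, cy are illustrative values, not from the question). Note the inversion needs the depth Z supplied from outside, since it is lost in projection:

```python
import numpy as np

# Hypothetical intrinsics for illustration.
fx, fy = 800.0, 800.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point in pixels

def project(P):
    """Pinhole projection of a camera-frame point P = [X, Y, Z] to pixel (u, v)."""
    X, Y, Z = P
    x, y = X / Z, Y / Z              # normalized image coordinates
    return fx * x + cx, fy * y + cy

def back_project(u, v, Z):
    """Reverse the math; the depth Z must be known, it is lost in projection."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    return np.array([x * Z, y * Z, Z])

P = np.array([0.1, -0.2, 2.0])
u, v = project(P)
P_rec = back_project(u, v, P[2])   # recovers P exactly (no distortion modeled)
```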
Another thing you can do to get camera coordinates from your world coordinates is [x'; y'; z'] = R*[X; Y; Z] + t. Since you said you have the orientation and location of your cameras, you can compute the rotation matrix R and translation vector t.
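A sketch of that world-to-camera transform. The Z-Y-X Euler convention and the numbers below are assumptions for illustration; build R according to whatever orientation representation your camera data actually uses:

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles in radians (assumed convention)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

R = rotation_from_euler(np.pi / 2, 0.0, 0.0)   # 90 degree yaw, as an example
t = np.array([0.0, 0.0, 1.0])                  # example translation

X_world = np.array([1.0, 0.0, 0.0])
X_cam = R @ X_world + t                        # world -> camera coordinates
# Going back is exact because R is orthonormal: X_world = R^T (X_cam - t)
X_back = R.T @ (X_cam - t)
```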
It would be a lot easier to just calibrate the camera and use the camera parameters, since the calibration takes all distortions into account. If you don't do that, you have to model the distortion by hand, and that's a lot more work.
EDIT: Another possible, but not very accurate, method would be to use the proportions of the object relative to the image. You know the exact height and width of your object, so you can compute a pixel ratio (say your object is 10 cm and spans 100 pixels in the image, so 10 pixels equal 1 cm). If you do that for all 4 pictures, you get such a relation for each of them. From that you can estimate the distance to the object, which would be your first coordinate, and then try to triangulate the other positions against each other. Keep in mind this method is not very accurate, since it still ignores distortion, and depending on your picture quality the result may vary.
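The pixel-ratio arithmetic from the edit, spelled out. The focal length value is an assumption for illustration; the distance formula is the standard similar-triangles relation Z = f_px * real_size / pixel_size:

```python
# Object of known size: 10 cm spans 100 pixels in the image (numbers from the answer).
object_size_cm = 10.0
object_size_px = 100.0
cm_per_px = object_size_cm / object_size_px            # 10 px per cm -> 0.1 cm/px

# With a focal length in pixels (assumed value), similar triangles give a
# rough distance to the object at that image scale.
f_px = 800.0                                            # hypothetical focal length
distance_cm = f_px * object_size_cm / object_size_px    # rough range estimate
```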
##### 2 Comments
Florian Morsch on 30 Jul 2018
fx and fy are your focal lengths in pixels. Depending on how accurate you need them to be, you can get a good quick estimate if you know either your CCD/CMOS chip size or your field of view (FOV):
If you know the sensor's physical width in mm, then the focal in pixels is:
focal_pixel = (focal_mm / sensor_width_mm) * image_width_in_pixels
And if you know the horizontal field of view, say in degrees,
focal_pixel = (image_width_in_pixels * 0.5) / tan(FOV * 0.5 * PI/180)
Your focal length in mm should be known; that's a camera parameter which is normally given somewhere in the camera's description.
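Both estimates as a runnable sketch. The focal length, sensor width, and FOV values are assumptions picked for illustration, not data from the question:

```python
import math

image_width_px = 1920
focal_mm = 4.0            # hypothetical, from the camera spec sheet
sensor_width_mm = 6.17    # hypothetical, e.g. a 1/2.3" sensor

# Estimate 1: from the sensor's physical width
focal_px_a = (focal_mm / sensor_width_mm) * image_width_px

# Estimate 2: from the horizontal field of view in degrees
fov_deg = 75.0            # hypothetical FOV
focal_px_b = (image_width_px * 0.5) / math.tan(math.radians(fov_deg) * 0.5)
```

For a consistent lens spec the two estimates should land in the same ballpark (here both are roughly 1250 pixels).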
For the principal point you can take a picture of a fine grid pattern (millimeter paper, for example) without any distortion correction. With third-order distortion, the grid lines should be spaced quadratically (with the space between each grid line changing linearly). At the point where your grid spacing is maximal (or minimal, depending on the sensor and lens), you have your principal point. Now you just need to extract those coordinates.
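Once the grid lines are detected, finding that extremum along one axis is a one-liner. The line positions below are made-up example data; in practice they would come from your grid-detection step:

```python
import numpy as np

# Hypothetical detected grid-line positions (pixels) along one image axis.
grid_x = np.array([100.0, 190.0, 285.0, 385.0, 490.0, 595.0, 698.0, 795.0])

spacing = np.diff(grid_x)                        # gaps between neighboring lines
i = np.argmax(spacing)                           # or np.argmin, depending on the lens
cx_estimate = 0.5 * (grid_x[i] + grid_x[i + 1])  # midpoint of the extremal gap
```

Repeating the same step on the vertical grid lines gives the cy coordinate.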


R2016b
