BlueScreenOD
I'm an undergraduate computer-science student doing research in the field of computer vision, and one of the tasks I've been charged with is calibrating the camera on a robot.
I understand the basic principles at work: a point in 3D world coordinates is projected into homogeneous 2D image coordinates through the pinhole model, and camera calibration is supposed to recover the parameters of that transformation. However, I'm a little stumped on the actual application of these ideas.
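To make my understanding concrete, here is a minimal MATLAB sketch of the pinhole model as I picture it (the numbers are made-up example values, and I'm ignoring lens distortion):

```matlab
X_w = [0.1; 0.2; 1.5];          % example 3D point in world coordinates
R   = eye(3);                   % rotation from world to camera frame
t   = [0; 0; 0.5];              % translation from world to camera frame
K   = [800   0 320;             % example intrinsic matrix: focal lengths
         0 800 240;             % and principal point, in pixels
         0   0   1];

X_c = R * X_w + t;              % point expressed in the camera frame
x_h = K * X_c;                  % homogeneous image coordinates
uv  = x_h(1:2) / x_h(3)         % divide out depth to get pixel coordinates
```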
I'm using the "Camera Calibration Toolbox for Matlab" (http://www.vision.caltech.edu/bouguetj/calib_doc/). I've successfully used the program to analyze a series of images and determined the intrinsic parameters, and I have a set of extrinsic parameters (one for each image I fed into the program); however, I can't figure out how to generate the matrix that transforms the pixel coordinates into real-world coordinates.
If someone could point me in the right direction and tell me where I can learn what I need to know, I would greatly appreciate it.