- #1
alvin6688
Hello everyone,
I have been using the well-known Caltech camera calibration toolbox by Jean-Yves Bouguet for stereo triangulation.
Part of the toolbox's utility is that it can recover 3D world coordinates from 2D coordinates in stereo image pairs, given a set of camera calibration parameters (intrinsic and extrinsic).
What I would like to do is the inverse: given the 3D world coordinates of some object, predict the corresponding 2D image coordinates.
My question for anyone who has experience with this: is this an ill-posed problem? The algorithms (image --> world) employ a significant amount of inner-product algebra, and I've been unable to work out how to reverse these steps to go in the other direction (world --> image).
Thank you in advance for your time,
Alvin Chen
Below is the relevant MATLAB code for image --> world stereo triangulation:
% --- Known inputs from calibration:
% om:  rotation vector (extrinsic parameter)
% T:   translation vector (extrinsic parameter)
% R:   rotation matrix, R = rodrigues(om)
% xt:  normalized left image coordinates (3 x N, homogeneous)
% xtt: normalized right image coordinates (3 x N, homogeneous)
% N:   number of point pairs
% --- Stereo triangulation
u = R * xt;
n_xt2 = dot(xt,xt);
n_xtt2 = dot(xtt,xtt);
T_vect = repmat(T, [1 N]);
DD = n_xt2 .* n_xtt2 - dot(u,xtt).^2;
dot_uT = dot(u,T_vect);
dot_xttT = dot(xtt,T_vect);
dot_xttu = dot(u,xtt);
NN1 = dot_xttu.*dot_xttT - n_xtt2 .* dot_uT;
NN2 = n_xt2.*dot_xttT - dot_uT.*dot_xttu;
Zt = NN1./DD;
Ztt = NN2./DD;
X1 = xt .* repmat(Zt,[3 1]);
X2 = R'*(xtt.*repmat(Ztt,[3,1]) - T_vect);
% --- 3D coordinates in the left camera frame:
XL = 1/2 * (X1 + X2);
% --- 3D coordinates in the right camera frame:
XR = R*XL + T_vect;
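For reference, here is a minimal sketch of the world --> image direction under the same pinhole model, written in Python/NumPy for illustration (the rotation, translation, and point values are made up, not real calibration output): a 3D point expressed in a camera's own frame projects to its normalized image coordinate simply by dividing by its depth.

```python
import numpy as np

# Made-up extrinsics for illustration (not real calibration output):
# XR = R @ XL + T maps left-camera coordinates to right-camera coordinates.
theta = 0.1  # rotation about the y-axis, radians
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([-120.0, 0.0, 5.0])  # baseline, same units as the points

def project(X):
    """Pinhole projection: the normalized (homogeneous) image coordinate
    of a 3D point expressed in the camera's own frame is X / depth."""
    return X / X[2]

# A 3D point in the left camera frame, and its two normalized projections:
XL = np.array([30.0, -20.0, 500.0])
XR = R @ XL + T          # the same point in the right camera frame
xt  = project(XL)        # normalized left image coordinate  [x; y; 1]
xtt = project(XR)        # normalized right image coordinate [x; y; 1]
```

Pixel coordinates would then follow by applying each camera's intrinsics and distortion (the toolbox's project_points2 does this, if I understand it correctly). As a sanity check, feeding xt and xtt back through the triangulation formulas above reproduces XL exactly for noise-free data.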