Dear all,
I'm trying to implement an Extended Kalman Filter based SLAM for orientation and angular rates.
To keep things simple I have discarded the estimation of position and linear velocities. I'm using a quaternion-based approach with the translation vectors of 3D points/landmarks as measurements.
For the tests I'm using only one landmark/measurement. Unfortunately, the filter is not behaving as expected. If the orientation state (rotation only around the z-axis) is within the range 90°-270° (modulo 360°), everything works just fine. If the orientation is in the other 180° range, I get an innovation opposite to the correct one, i.e. the predicted state rotates in the wrong direction. The problem seems to be in the correction part of the filter; I'm pretty sure the prediction part has nothing to do with it. The innovation vectors themselves are also correct.
Changing the direction of the simulated rotation or moving the landmark to another side of the robot makes no difference.
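For a single landmark with world position y_i and camera orientation quaternion q, the measurement model is the landmark position rotated into the camera frame by the conjugate quaternion, and the innovation v is the difference to the observed translation (this is just a restatement of the code below):

h(x) = R(q*) * y_i,    v = z - h(x)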
Here are some of the formulas and the MATLAB code I use:
Code:
% Predict measurement -> measurement model h
%
% Calculate translation in world coordinates (feature-pos)
dx = x(fpos);
dy = x(fpos+1);
dz = x(fpos+2);
%
% Get rotation matrix
R = quat2RotMat( quatConjungate(x(1:4)) );
%
zp = R * [dx; dy; dz];
% Calculate H_clm for this landmark (H_c 0 ... 0 H_lm 0 ... 0), i.e. the
% derivative of h with respect to the state
%
% Components of the conjugate quaternion (matching the R used above)
qr =  x(1);
qx = -x(2);
qy = -x(3);
qz = -x(4);
%
%
% dht_dqc: derivative of h w.r.t. the camera orientation quaternion
%
dht_dqrc = 2 * [  qr -qz  qy;
                  qz  qr -qx;
                 -qy  qx  qr ] * [dx; dy; dz];
%
dht_dqxc = 2 * [  qx  qy  qz;
                  qy -qx -qr;
                  qz  qr -qx ] * [dx; dy; dz];
%
dht_dqyc = 2 * [ -qy  qx  qr;
                  qx  qy  qz;
                 -qr  qz -qy ] * [dx; dy; dz];
%
dht_dqzc = 2 * [ -qz -qr  qx;
                  qr -qz  qy;
                  qx  qy  qz ] * [dx; dy; dz];
%
% dht_dyi: derivative of h w.r.t. the landmark position y_i
%
dht_dyi = R;
% Camera (orientation) part Hc of the Jacobian
%
Hcy(:,1:7) = [dht_dqrc dht_dqxc dht_dqyc dht_dqzc zeros(3)];
% Landmark part Hy of the Jacobian
%
Hcy(:,fpos:fpos+Nxf-1) = dht_dyi;
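One way to cross-check the analytic Jacobian is a finite-difference comparison; a minimal sketch (h_fun and x0 are placeholder names for a handle to the measurement prediction above and the current state, not identifiers from my actual code):
Code:
% Finite-difference check of the analytic Jacobian (sketch)
% h_fun : handle mapping the full state vector to the predicted measurement zp
% x0    : current state estimate (linearization point)
eps_fd = 1e-6;
n      = numel(x0);
H_num  = zeros(numel(h_fun(x0)), n);
for i = 1:n
    dxv        = zeros(n, 1);
    dxv(i)     = eps_fd;
    H_num(:,i) = ( h_fun(x0 + dxv) - h_fun(x0 - dxv) ) / (2*eps_fd);
end
% H_num should agree with Hcy at the linearization point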
The quaternion-to-rotation-matrix conversion used above is:
Code:
function R = quat2RotMat(q_in)
% Convert a quaternion q_in = [qr; qx; qy; qz] into the corresponding
% rotation matrix
qrqr = q_in(1)^2;
qxqx = q_in(2)^2;
qyqy = q_in(3)^2;
qzqz = q_in(4)^2;
qrqx = q_in(1)*q_in(2);
qrqy = q_in(1)*q_in(3);
qrqz = q_in(1)*q_in(4);
qxqy = q_in(2)*q_in(3);
qxqz = q_in(2)*q_in(4);
qyqz = q_in(3)*q_in(4);
R = [ qrqr+qxqx-qyqy-qzqz   -2*qrqz+2*qxqy         2*qrqy+2*qxqz;
      2*qxqy+2*qrqz          qrqr-qxqx+qyqy-qzqz  -2*qrqx+2*qyqz;
     -2*qrqy+2*qxqz          2*qyqz+2*qrqx         qrqr-qxqx-qyqy+qzqz ];
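As a quick sanity check of the convention, the quaternion for a 90° rotation about the z-axis gives the expected matrix:
Code:
% 90 deg rotation about the z-axis
q = [cos(pi/4); 0; 0; sin(pi/4)];
R = quat2RotMat(q)
% expected:
% R = [ 0 -1  0;
%       1  0  0;
%       0  0  1 ]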
The EKF-update is:
Code:
% Kalman gain (R here is the measurement noise covariance)
K = PX * H' * inv(H*PX*H' + R);
% Updated state and covariance
XX = XX + K * v;
PX = (eye(size(PX,1)) - K*H) * PX;
% Normalize the quaternion and project the covariance through the
% normalization Jacobian
XX(1:4) = XX(1:4) / norm(XX(1:4));
Jnorm = eye(size(XX,1));
Jnorm(1:4,1:4) = normJac(XX(1:4));
PX = Jnorm * PX * Jnorm';
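Here normJac is the Jacobian of the normalization q -> q/||q||; a minimal sketch of such a function (the standard derivative, shown for completeness):
Code:
function J = normJac(q)
% Jacobian of q -> q / norm(q):
% d(q/||q||)/dq = ( eye(4) - (q*q') / norm(q)^2 ) / norm(q)
nq = norm(q);
J  = ( eye(4) - (q*q') / nq^2 ) / nq;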
Is there something I'm missing? I'm really stuck with this problem, so any help would be welcome.