Department of Electrical Engineering, Linköping University, 2008. 107 p.
This thesis deals with estimating position and orientation in real time, using measurements from vision and inertial sensors. A system has been developed to solve this problem in unprepared environments, assuming that a map or scene model is available. Compared to ‘camera-only’ systems, the combination of these complementary sensors yields an accurate and robust system that can handle periods with uninformative or no vision data, and it reduces the need for high-frequency vision updates.
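As a rough illustration of this complementary behaviour, the sketch below runs a linear Kalman filter in which accelerometer samples drive a high-rate time update and slower vision position fixes supply the measurement update, so the filter coasts on inertial data whenever vision is absent. Everything here is an illustrative assumption rather than the models developed in the thesis: the one-dimensional constant-acceleration model, the 100 Hz / 10 Hz rates, the noise levels, and the helper names predict and correct.

```python
# Minimal sketch of loosely coupled vision/inertial fusion (assumptions:
# 1-D linear Kalman filter on position and velocity; the thesis develops
# full state-space models including orientation, see the contents below).
import numpy as np

dt = 0.01                        # IMU sample period (100 Hz, assumed)
F = np.array([[1.0, dt],         # constant-acceleration motion model
              [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])  # how the accelerometer input enters the state
H = np.array([[1.0, 0.0]])       # vision measures position only
Q = 1e-3 * np.eye(2)             # process noise (tuning assumption)
R = np.array([[1e-2]])           # vision measurement noise (assumption)

x = np.zeros(2)                  # state: [position, velocity]
P = np.eye(2)                    # state covariance

def predict(acc):
    """Time update driven by an accelerometer sample (strapdown-style)."""
    global x, P
    x = F @ x + B * acc
    P = F @ P @ F.T + Q

def correct(pos_meas):
    """Measurement update from a (slower) vision position fix."""
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (pos_meas - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

# Simulated run: IMU at every step, vision only every 10th step, so the
# filter bridges the gaps between vision updates with inertial data alone.
rng = np.random.default_rng(0)
for k in range(100):
    predict(acc=1.0 + 0.05 * rng.standard_normal())
    if k % 10 == 0:
        true_pos = 0.5 * (k * dt) ** 2
        correct(np.array([true_pos + 0.1 * rng.standard_normal()]))
print("estimated position/velocity:", x)
```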
Abstract.
Introduction.
    Problem formulation.
    Contributions.
    Thesis outline.
System overview.
    Sensors.
    Sensor fusion.
    Implementation considerations.
    Experiments.
Sensors.
    Inertial measurement unit.
        Sensor model.
        Calibration.
        Strapdown inertial navigation.
    Vision.
        Sensor model.
        Calibration.
        Correspondence detection.
State space models.
    Kinematics.
        Translation.
        Rotation.
        Time derivatives.
    Continuous-time models.
    Discrete-time models.
Calibration theory.
    Kinematic relations.
        Acceleration.
        Angular velocity.
    Geometric measurements.
        Direction vectors.
        Position and orientation.
    Mixing kinematic and geometric measurements.
Calibration algorithms.
    Internal calibration.
    External calibration.
    Experiments.
Application example.
Concluding remarks.
    Conclusions.
    Future work.
A Quaternion preliminaries.
    Operations and properties.
    Exponential.
    Matrix/vector notation.
B Conversions.
    Rotation matrices.
    Euler angles.
    Rotation vector.
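As a taste of the appendix material, the sketch below converts a unit quaternion into the three parametrizations listed under Appendix B: a rotation matrix, Euler angles, and a rotation vector. The scalar-first quaternion convention and the ZYX Euler-angle order are assumptions on my part; the thesis may fix different conventions.

```python
# Hedged illustration of the quaternion conversions covered in Appendix B.
# Conventions assumed: scalar-first quaternion q = (q0, q1, q2, q3),
# ZYX (yaw-pitch-roll) Euler angles, rotation vector = axis * angle.
import numpy as np

def quat_to_rotmat(q):
    """Unit quaternion -> 3x3 rotation matrix."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

def quat_to_euler_zyx(q):
    """Unit quaternion -> (roll, pitch, yaw) in radians, ZYX convention (assumed)."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    roll  = np.arctan2(2*(q0*q1 + q2*q3), 1 - 2*(q1**2 + q2**2))
    pitch = np.arcsin(np.clip(2*(q0*q2 - q3*q1), -1.0, 1.0))
    yaw   = np.arctan2(2*(q0*q3 + q1*q2), 1 - 2*(q2**2 + q3**2))
    return roll, pitch, yaw

def quat_to_rotvec(q):
    """Unit quaternion -> rotation vector, via the quaternion logarithm."""
    q = q / np.linalg.norm(q)
    angle = 2 * np.arccos(np.clip(q[0], -1.0, 1.0))
    s = np.linalg.norm(q[1:])
    return np.zeros(3) if s < 1e-12 else (angle / s) * q[1:]

# Sanity check: a 90 degree rotation about the z-axis.
q = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
print(quat_to_rotmat(q).round(3))   # ~ [[0,-1,0],[1,0,0],[0,0,1]]
print(quat_to_euler_zyx(q))         # ~ (0, 0, pi/2)
print(quat_to_rotvec(q))            # ~ [0, 0, pi/2]
```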