For this introductory post I reviewed a paper on autonomous navigation using augmented visual localization. In this case the proprioceptive sensor was an odometer and the exteroceptive sensor was an electro-optical camera. The authors were tasked with designing localization algorithms and control laws for a driverless taxi in an urban environment. This task is complicated by two main factors. First, the "urban canyon" effect degrades Global Positioning System (GPS) accuracy by limiting the number of satellites in view. Second, the complex street layout requires high-frequency position updates, which GPS does not provide.
Visual localization can serve as the primary navigation solution when the user has high-fidelity images of the operational area. Reference frames are pre-generated, uploaded to vehicles, and then compared to frames from an onboard camera. The largest source of error stems from the use of global scaling factors. These are part of the image comparison algorithms and essentially assume that, for a given commanded speed, prominent references in the field of view should grow at a predictable rate. The authors proposed a local scaling factor determined by an odometer, which provides the distance traveled along the actual trajectory. An extended Kalman filter was also used for command generation after comparing estimated and actual location. A benefit of adding the odometer and filter is that the system can still estimate location if the camera becomes temporarily obscured or washed out by the sun, albeit with some drift.
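The predict-then-correct behavior described above can be sketched with a minimal one-dimensional filter: odometer increments drive the prediction step, and visual position fixes, when available, drive the correction step. This is a simplified illustration of the general idea rather than the authors' actual filter; the noise variances and data are hypothetical.

```python
import numpy as np

def along_track_filter(odo_steps, visual_fixes, q=0.01, r=0.25):
    """1-D along-track Kalman filter sketch: odometry predicts, camera corrects.

    odo_steps:    distance increments reported by the odometer
    visual_fixes: matching list of position fixes; None when the camera
                  is obscured or washed out
    q, r:         assumed process / measurement noise variances (hypothetical)
    """
    x, p = 0.0, 1.0              # position estimate and its variance
    track = []
    for d, z in zip(odo_steps, visual_fixes):
        x, p = x + d, p + q      # predict: dead-reckon on odometer distance
        if z is not None:        # correct: only when a camera fix arrives
            k = p / (p + r)      # Kalman gain
            x, p = x + k * (z - x), (1 - k) * p
        track.append((x, p))
    return track
```

During a camera dropout the variance `p` grows by `q` each step while the odometer keeps the position estimate moving, which mirrors the bounded drift the paper reports for temporary obscuration.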
The paper provided experimental results showing good agreement between expected and actual trajectories, and predicted that this kind of approach would be versatile, since many types of additional sensor data could be used to augment visual localization. Future work was aimed at increasing accuracy at night with headlights and employing a second aft-facing camera to combat prolonged washout when traveling into a low sun. I would propose adjusting the system to work with a self-calibrating infrared imaging system to eliminate the lighting issues, paired with an edge detection algorithm that would correlate reference frames regardless of whether the camera was in black-hot or white-hot mode.
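My polarity-agnostic matching proposal rests on a simple property: gradient magnitude is unchanged when an image's intensities are inverted, so edge maps from black-hot and white-hot frames should correlate directly. A minimal sketch with a toy synthetic frame (the data and function name are my own, not from the paper):

```python
import numpy as np

def gradient_magnitude(img):
    """Edge-strength map from image gradients; the sign of the gradient
    cancels out, so display polarity does not matter."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

# Toy "thermal" frame and its polarity-inverted twin (white-hot vs black-hot)
frame = np.zeros((8, 8))
frame[2:6, 2:6] = 200.0          # hypothetical hot object
inverted = 255.0 - frame

# Edge maps are identical, so reference-frame correlation works in either mode
assert np.allclose(gradient_magnitude(frame), gradient_magnitude(inverted))
```

Correlating these edge maps against pre-generated reference imagery, rather than raw intensities, is what would make the comparison independent of the imager's display mode.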
Reference
Karam, N., Hadj-Abdelkader, H., Deymier, C., & Ramadasan, D. (2010, October). Improved visual localization and navigation using proprioceptive sensors. 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan.