Towards illumination invariance for visual localization
IEEE Int. Conf. on Robotics and Automation (ICRA)
While a large body of work on place/location recognition exists in the literature, very few methods provide a robust way of dealing with large lighting changes at the locations of interest. In this paper, we address the problem under the additional constraint that a pose estimate from the camera's current location to the reference location must be computed. This requires robust feature matching to establish point correspondences, not merely image-level matching as is often done in the literature. We present a method to learn a matching function, under weak assumptions, from training data that is representative of the lighting variations to be modeled. Lighting variation in the image descriptors is modeled using a probability distribution over the discretized descriptor space. Results from a live visual SLAM system in outdoor environments and in a simulated indoor environment demonstrate the efficacy of the proposed method.
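The core idea described in the abstract — learning a probability distribution over a discretized descriptor space from lighting-varied training pairs, and using it as the matching function — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the discretization (here, a small vocabulary of descriptor "words"), the pairing of training data, and all function names are assumptions for the sake of the example.

```python
import numpy as np

def build_transition_model(ref_words, query_words, k):
    """Estimate P(query word | reference word) from aligned training
    pairs observed under different lighting, with Laplace smoothing.

    ref_words, query_words: integer word indices in [0, k) for the same
    physical points seen under reference and changed lighting.
    (Hypothetical discretization; the paper's scheme may differ.)
    """
    counts = np.ones((k, k))  # Laplace prior avoids zero probabilities
    for a, b in zip(ref_words, query_words):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def match_score(P, ref_word, query_word):
    """Likelihood that `query_word` is the lighting-changed appearance
    of a point whose reference descriptor quantized to `ref_word`."""
    return P[ref_word, query_word]

# Toy training data: under the simulated lighting change, word 0 tends
# to become word 1 and vice versa, while word 2 is stable.
ref   = [0, 0, 1, 1, 2]
query = [1, 1, 0, 0, 2]
P = build_transition_model(ref, query, k=3)

# A learned model now scores the lighting-shifted correspondence
# (0 -> 1) higher than the naive identity match (0 -> 0).
print(match_score(P, 0, 1) > match_score(P, 0, 0))
```

At matching time, one would score each candidate reference/query pair by this likelihood instead of raw descriptor distance, so that systematic appearance shifts caused by lighting are absorbed by the learned distribution rather than rejected as mismatches.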