The goal of the project is to achieve robust lane-level localization for cars using low-cost, mass-producible sensors. We assume the availability of fairly detailed offline maps in which various landmarks have been accurately surveyed.
The sensors we use include cameras, the CAN bus, and GPS. Cameras identify landmarks such as lane markings, road markings, and signs, which are then used to estimate the vehicle's pose. Wheel-speed and yaw-rate data from the CAN bus are used for dead reckoning. Finally, information from the multiple sensors is fused efficiently using Bayesian filtering techniques.
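Dead reckoning from CAN-bus data can be sketched as integrating a simple unicycle motion model. This is an illustrative sketch only; the function name, the midpoint-heading integration scheme, and the sample inputs are assumptions, not the project's actual pipeline.

```python
import math

def dead_reckon(pose, wheel_speed, yaw_rate, dt):
    """Propagate a (x, y, heading) pose one step using wheel speed [m/s]
    and yaw rate [rad/s] from the CAN bus (unicycle-model sketch)."""
    x, y, theta = pose
    # Integrate position along the average heading over the time step.
    theta_mid = theta + 0.5 * yaw_rate * dt
    x += wheel_speed * dt * math.cos(theta_mid)
    y += wheel_speed * dt * math.sin(theta_mid)
    theta += yaw_rate * dt
    return (x, y, theta)

# Example: drive straight for one second at 10 m/s in ten 0.1 s steps.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, wheel_speed=10.0, yaw_rate=0.0, dt=0.1)
```

In practice such dead-reckoned estimates drift over time, which is why they are fused with absolute landmark observations.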
The attached video visualizes corrections achieved by detecting painted road markings and localizing with respect to them. Blue crosses show positions estimated relative to the detected road marking. Surveyed lane markings, curbs, etc. are also shown. The ego position and yaw estimate are represented by the long red triangle. In this example, incremental position and orientation are estimated using visual odometry.
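A landmark-based correction of the kind visualized above can be sketched as a Bayesian filter update: the drifting dead-reckoned (or visual-odometry) estimate is fused with a measurement derived from the detected road marking. The scalar Kalman update below, with made-up numbers for the lateral position, is a minimal illustration, not the project's actual filter.

```python
def kalman_update(mean, var, meas, meas_var):
    """Fuse a predicted state (mean, var) with a landmark-based
    measurement (meas, meas_var) via a 1-D Kalman update."""
    k = var / (var + meas_var)           # Kalman gain
    new_mean = mean + k * (meas - mean)  # pull estimate toward measurement
    new_var = (1.0 - k) * var            # fused estimate is more certain
    return new_mean, new_var

# Hypothetical numbers: dead reckoning drifted the lateral offset to
# 1.2 m (variance 0.5); a detected road marking implies 0.4 m (variance 0.1).
mean, var = kalman_update(1.2, 0.5, 0.4, 0.1)
```

The correction weights the measurement by its relative certainty, so an accurate road-mark detection pulls the estimate strongly back toward the map.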