A computational framework for driver's visual attention using a fully convolutional architecture
IEEE Intelligent Vehicles Symposium (IV)
Perceiving and interacting with other traffic participants in a complex driving environment is a challenging and important task, and the human vision system plays a crucial role in accomplishing it. In particular, visual attention mechanisms allow a human driver to selectively attend to the salient and relevant regions of the scene and then make the decisions necessary for safe driving. Investigating the human vision system therefore has great potential to improve assistive, and even autonomous, vehicular technologies. In this paper, we study drivers' gaze behavior to understand visual attention. We first present a Bayesian framework that models the visual attention of a human driver. Building on this framework, we then develop a fully convolutional neural network that estimates the salient regions of a novel driving scene. We systematically evaluate the proposed method on on-road driving data and compare it with other state-of-the-art saliency estimation approaches. Our analyses show promising results.
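To illustrate the core idea of a fully convolutional saliency estimator, the sketch below builds a toy two-layer fully convolutional model in plain NumPy: every layer is a same-padded convolution, so the output is a dense per-pixel saliency map with the same spatial size as the input. This is a minimal illustrative sketch, not the architecture proposed in the paper; the layer sizes, random weights, and function names are assumptions for demonstration only.

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded 2D convolution.
    x: (H, W, Cin), w: (k, k, Cin, Cout), b: (Cout,) -> (H, W, Cout)."""
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def fcn_saliency(image, params):
    """Fully convolutional forward pass: conv+ReLU stack, then a
    sigmoid over a single-channel output to get per-pixel saliency."""
    h = image
    for w, b in params[:-1]:
        h = np.maximum(conv2d(h, w, b), 0.0)  # ReLU
    w, b = params[-1]
    logits = conv2d(h, w, b)
    return 1.0 / (1.0 + np.exp(-logits[..., 0]))  # values in (0, 1)

# Hypothetical random weights: 3x3 convs, 3 -> 8 -> 1 channels.
rng = np.random.default_rng(0)
params = [
    (rng.normal(scale=0.1, size=(3, 3, 3, 8)), np.zeros(8)),
    (rng.normal(scale=0.1, size=(3, 3, 8, 1)), np.zeros(1)),
]
image = rng.random((16, 16, 3))   # a small RGB "driving scene"
saliency = fcn_saliency(image, params)
```

Because there are no fully connected layers, the same weights apply to inputs of any spatial size, which is what lets such a network produce a dense saliency map for a whole driving scene in one forward pass.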