Driver Situational Awareness
One of the key technologies for situationally adaptive driver assistance systems is understanding and monitoring driver situational awareness (SA). Current collision warning systems issue warnings based solely on the risk of collision, disregarding the driver's awareness and intentions. As a result, many users find the warnings annoying, especially when they are already aware of the danger; in the worst case, they turn the function off. Our technology addresses this issue by enabling the system to track the driver's awareness of objects in the traffic scene in real time.
The research consists of two components: 1) understanding and modeling human driver awareness, and 2) tracking the driver's awareness based on gaze behavior. For 1), we collected gaze data from professional drivers and modeled how they pay attention to their surroundings; the objects they attend to are considered important actors for safe driving. For 2), we built a machine learning system that predicts the driver's situational awareness from the driver's past gaze trajectory. By combining 1) and 2), we can identify objects that are important for safe driving yet that the driver is not aware of. We believe a warning strategy based on this combination will make warnings more effective and trustworthy.
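As a minimal illustration of how the two components could be combined, the sketch below selects warning targets where modeled importance is high but estimated awareness is low. All names, scores, and thresholds here are hypothetical illustrations, not the actual system:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    obj_id: str
    importance: float  # importance for safe driving, e.g. from the model of professional drivers' attention (0..1)
    awareness: float   # estimated driver awareness, from the past gaze trajectory (0..1)

def objects_to_warn(objects, importance_min=0.7, awareness_max=0.3):
    """Select objects that matter for safe driving but that the driver
    is likely unaware of; thresholds are illustrative assumptions."""
    return [o for o in objects
            if o.importance >= importance_min and o.awareness <= awareness_max]

scene = [
    SceneObject("pedestrian_1", importance=0.9, awareness=0.1),  # important, unnoticed -> warn
    SceneObject("parked_car_2", importance=0.2, awareness=0.8),  # unimportant -> skip
    SceneObject("cyclist_3",    importance=0.8, awareness=0.9),  # already noticed -> skip
]
print([o.obj_id for o in objects_to_warn(scene)])  # ['pedestrian_1']
```

Warning only on the important-but-unnoticed subset is what makes the strategy less annoying than risk-only warnings: objects the driver has already seen are filtered out.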
Related Publications
Sensor data and Vehicle-to-Everything (V2X) communication can greatly assist Connected and Autonomous Vehicles (CAVs) in situational awareness and provide a safer driving experience. While sensor data recorded from devices such as radar and camera can assist in local awareness in the close vicinity of the Host Vehicle (HV), the information obtained is useful solely for the HV itself. On the other hand, V2X communication allows CAVs to communicate with each other and transmit and receive basic and/or advanced safety information, allowing each CAV to create a sophisticated local object map for situational awareness. This paper introduces a point-to-point Driver Messenger System (DMS) that regularly maintains a local object map of the HV and uses it to convey the HV's Over-the-Air (OTA) Driver Intent Messages (DIMs) to nearby identified Target Vehicle(s) (TV(s)) based on a list of pre-defined common traffic applications. The focus of this paper is on the lane change application, where DMS can use the local object map to automatically identify the closest TV in the adjacent lane in the direction of the HV's intended lane change and inform that TV via a DIM. Within DMS, the paper proposes a TV recognition algorithm for the lane change application that utilizes the HV's Path History (PH) to accurately determine the closest TV that could potentially benefit from receiving a DIM from the HV. Finally, DMS is also shown to act as an advanced warning system by providing extra time and space headway measurements between the HV and TVs in a number of simulated lane change scenarios.
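The core of the TV recognition step can be caricatured as a nearest-neighbor search in the target lane. The lane/position representation and the +1/-1 direction convention below are assumptions for illustration, not the paper's actual PH-based algorithm:

```python
def closest_target_vehicle(hv_pos, hv_lane, direction, vehicles):
    """Identify the closest vehicle in the adjacent lane in the direction
    of the HV's intended lane change.

    hv_pos    -- HV position along the road (m)
    hv_lane   -- HV lane index
    direction -- +1 for a left lane change, -1 for right (assumed convention)
    vehicles  -- iterable of (vehicle_id, pos_along_road, lane_index) tuples
    Returns the id of the closest vehicle in the target lane, or None.
    """
    target_lane = hv_lane + direction
    candidates = [(abs(pos - hv_pos), vid)
                  for vid, pos, lane in vehicles if lane == target_lane]
    return min(candidates)[1] if candidates else None

nearby = [("tv_a", 95.0, 2), ("tv_b", 120.0, 2), ("tv_c", 90.0, 1)]
print(closest_target_vehicle(100.0, 1, +1, nearby))  # tv_a, 5 m away in lane 2
```

In the paper's system, the HV's Path History makes this lane assignment robust on curved roads, where raw lateral offsets alone would misclassify which vehicles occupy the adjacent lane.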
Eye-tracking techniques have the potential for estimating driver awareness of road hazards. However, traditional eye-movement measures based on static areas of interest may not capture the unique characteristics of driver eye-glance behavior, and they challenge the real-time application of the technology on the road. This article proposes a novel method to operationalize driver eye-movement data analysis based on moving objects of interest. A human-subject experiment conducted in a driving simulator demonstrated the potential of the proposed method. Correlation and regression analyses between indirect (i.e., eye-tracking) and direct measures of driver awareness identified some promising variables that feature both spatial and temporal aspects of driver eye-glance behavior relative to objects of interest. Results also suggest that eye-glance behavior might be a promising but insufficient predictor of driver awareness. This work is a preliminary step toward real-time, on-road estimation of driver awareness of road hazards. The proposed method could be further combined with computer-vision techniques such as object recognition to fully automate eye-movement data processing, as well as with machine learning approaches to improve the accuracy of driver awareness estimation.
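A minimal sketch of eye-glance measures computed against a moving object of interest rather than a static area: per frame, the gaze position is compared with the object's tracked position, yielding one spatial variable (closest approach) and one temporal variable (dwell frames). The 2-degree "hit" radius and the per-frame representation are illustrative assumptions:

```python
import math

def glance_metrics(gaze_track, object_track, hit_radius_deg=2.0):
    """gaze_track, object_track: per-frame 2D positions in degrees of
    visual angle. Returns the minimum gaze-to-object distance (spatial
    aspect) and the number of frames the gaze fell within hit_radius_deg
    of the moving object (temporal aspect)."""
    dists = [math.dist(g, o) for g, o in zip(gaze_track, object_track)]
    return min(dists), sum(d <= hit_radius_deg for d in dists)

gaze   = [(0.0, 0.0), (4.5, 1.0), (5.2, 1.1), (9.0, 0.0)]  # toy saccade toward a hazard
hazard = [(5.0, 1.0), (5.0, 1.0), (5.0, 1.0), (5.0, 1.0)]  # tracked hazard position
min_dist, dwell_frames = glance_metrics(gaze, hazard)
print(min_dist, dwell_frames)  # closest approach ~0.22 deg, 2 frames inside the radius
```

Because the object track moves with the hazard, the same computation works for pedestrians, cyclists, or vehicles without hand-drawn static regions, which is what enables the real-time, on-road use the article targets.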
A vehicle driving along the road is surrounded by many objects, but only a small subset of them influence the driver’s decisions and actions. Learning to estimate the importance of each object on the driver’s real-time decision-making may help better understand human driving behavior and lead to more reliable autonomous driving systems. Solving this problem requires models that understand the interactions between the ego-vehicle and the surrounding objects. However, interactions among other objects in the scene can potentially also be very helpful, e.g., a pedestrian beginning to cross the road between the ego-vehicle and the car in front will make the car in front less important. We propose a novel framework for object importance estimation using an interaction graph, in which the features of each object node are updated by interacting with others through graph convolution. Experiments show that our model outperforms state-of-the-art baselines with much less input and pre-processing.
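One graph-convolution update of the kind described might look like the following. This generic mean-aggregation form with self-loops is a common formulation and an assumption here, not necessarily the paper's exact operator:

```python
import numpy as np

def graph_conv_step(H, A, W):
    """One round of message passing over the interaction graph.
    H: (n_objects, d_in) node features; A: (n, n) adjacency (1 = interacts);
    W: (d_in, d_out) shared weights. Each object node's features are updated
    by averaging over itself and its neighbors, then a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize the aggregation
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))        # 4 objects, 8-dim features each
A = np.array([[0, 1, 0, 0],            # toy interaction graph:
              [1, 0, 1, 0],            # e.g. pedestrian -- car-in-front edges
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H2 = graph_conv_step(H, A, rng.standard_normal((8, 8)))
print(H2.shape)  # (4, 8)
```

Stacking a few such steps lets information like "a pedestrian is crossing in front of that car" propagate into the car's node features before a final per-node importance score is read out.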
Toward Prediction of Driver Awareness of Automotive Hazards: Driving-Video-Based Simulation Approach
Road users are a critical part of decision-making for both self-driving cars and driver assistance systems. Some road users, however, are more important for decision-making than others because of their respective intentions, the ego-vehicle's intention, and their effects on each other. In this paper, we propose a novel architecture for road-user importance estimation which takes advantage of the local and global context of the scene. For local context, the model exploits the appearance of the road users (which captures orientation, intention, etc.) and their location relative to the ego-vehicle. The global context in our model is defined based on the feature map of the convolutional layer of the module which predicts the future path of the ego-vehicle, and contains rich global information about the scene (e.g., infrastructure, road lanes, etc.) as well as the ego-vehicle's intention information. Moreover, this paper introduces a new dataset of real-world driving, concentrated around intersections and including annotations of important road users. Systematic evaluations of our proposed method against several baselines show promising results.
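The local/global fusion idea can be sketched as a per-object scoring head: concatenate per-object local features with global scene features, then score. The feature dimensions, the linear scorer, and the sigmoid below are illustrative assumptions about a generic fusion head, not the paper's actual architecture:

```python
import numpy as np

def road_user_importance(local_feat, global_feat, w, b=0.0):
    """Fuse per-object local context (appearance + location relative to the
    ego-vehicle) with global context (features from the ego path-prediction
    backbone), then score with a linear layer and a sigmoid."""
    x = np.concatenate([local_feat, global_feat])
    return float(1.0 / (1.0 + np.exp(-(w @ x + b))))

local_feat  = np.zeros(16)   # toy per-object appearance/location features
global_feat = np.zeros(32)   # toy global scene features (shared across objects)
score = road_user_importance(local_feat, global_feat, np.zeros(48))
print(score)  # 0.5 for all-zero inputs and weights
```

The key design point the abstract makes is that the global branch is shared: every road user is scored against the same scene-level features, so the ego-vehicle's predicted path influences which users come out as important.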
Automotive manufacturers are rapidly developing more advanced in-vehicle systems that seek to provide a driver with more active safety and information in real time, in particular human machine interfaces (HMIs) using mixed or augmented reality (AR) graphical elements. However, it is difficult to properly test novel AR interfaces in the same way as traditional HMIs via on-road testing. Simulation could instead offer a safer and more financially viable alternative for testing AR HMIs, but inconsistent simulation quality may confound HMI research depending on the visual fidelity of each simulation environment. We investigated how visual fidelity in a virtual environment impacts the quality of resulting driver behavior, visual attention, and situational awareness when using the system. We designed two large-scale immersive virtual environments: a “low” graphic fidelity driving simulation representing most current research simulation testbeds, and a “high” graphic fidelity environment created in Unreal Engine that represents state-of-the-art graphical presentation. We conducted a user study with 24 participants who navigated a route in a virtual urban environment directed by AR graphical cues while also monitoring the road scene for pedestrian hazards, and recorded their driving performance, gaze patterns, and subjective feedback via a situational awareness survey (SART). Our results show drivers change both their driving and visual behavior depending upon the visual fidelity presented in the virtual scene. We further demonstrate the value of using multi-tiered analysis techniques to more finely examine driver performance and visual attention.