Trust Calibration
Properly calibrated human trust is essential for successful interaction between humans and automation. While human trust calibration can be improved by increased automation transparency, too much transparency can overwhelm the human and increase workload. To address this tradeoff, we present a probabilistic framework using a partially observable Markov decision process (POMDP) for modeling the coupled trust-workload dynamics of human behavior in an action-automation context. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload.
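The abstract does not include implementation details, but the belief-tracking step of such a POMDP framework can be sketched as a standard discrete Bayes filter. Everything below (state names, transition and observation matrices, probabilities) is a hypothetical placeholder, not the paper's fitted model:

```python
# Minimal discrete POMDP belief update (Bayes filter) over joint
# trust-workload states. All states, matrices, and numbers are
# illustrative placeholders, not the paper's fitted parameters.

STATES = ["low_trust_low_wl", "low_trust_high_wl",
          "high_trust_low_wl", "high_trust_high_wl"]

# T[a][s][sp]: transition probability to state sp from state s
# given automation action a (each row sums to 1)
T = {"high_transparency": [
        [0.5, 0.2, 0.2, 0.1],
        [0.3, 0.4, 0.1, 0.2],
        [0.1, 0.1, 0.6, 0.2],
        [0.1, 0.2, 0.2, 0.5]]}

# O[a][sp][o]: probability of observation o (0 = rely, 1 = take over)
# in next state sp
O = {"high_transparency": [
        [0.3, 0.7],
        [0.2, 0.8],
        [0.9, 0.1],
        [0.8, 0.2]]}

def belief_update(belief, action, obs):
    """b'(sp) ∝ O(o | sp, a) * sum_s T(sp | s, a) * b(s)."""
    n = len(belief)
    predicted = [sum(T[action][s][sp] * belief[s] for s in range(n))
                 for sp in range(n)]
    unnorm = [O[action][sp][obs] * predicted[sp] for sp in range(n)]
    z = sum(unnorm)                       # normalizer P(o | b, a)
    return [u / z for u in unnorm]

b0 = [0.25, 0.25, 0.25, 0.25]             # uniform prior over STATES
b1 = belief_update(b0, "high_transparency", obs=0)  # human relied
assert abs(sum(b1) - 1.0) < 1e-9
```

A policy built on this filter would then pick a transparency level by optimizing expected reward over the belief; that planning step is omitted here.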
Related Publications
Autonomous vehicles (AVs) are expected to handle traffic scenarios more safely and efficiently than human drivers. However, it needs to be better understood which AV decisions are perceived as unsafe or risky by drivers. To investigate drivers' perceived risk, we conducted a driving simulator experiment in which participants were driven around by two types of AVs (car and sidewalk mobility) with a driving style that matched each participant's own driving style. We developed a computational model that allows us to examine drivers' perceived risk of scenarios when interacting with an AV, based on the drivers' interventions. The model allows us to quantify and compare the relative perceived risk of different scenarios for the two mobility types. Our results indicate that 1) drivers perceived higher risk in scenarios where the AV attempted to match the driver's preferred driving style, and 2) the scenarios perceived as higher risk differed across the two mobility types. The ability to quantify the perceived risk of scenarios, together with an understanding of how perceived risk differs across mobility types, will provide critical insights for the design of human-aware mobility.
Does prior experience influence trust in novel automation systems? A study on two mobility platforms
Rapid advances in shared mobility and vehicle automation have led to a monumental shift of interest toward developing automated systems. However, automation development has wider implications for how end users will trust these novel automation systems. This research explored how experience with prior automation systems, and exposure to vehicles of similar form factors, may influence trust when users encounter a novel mobility with a similar type of automation and form factor. A simulator study was designed in which 48 participants interacted with AV cars and AV sidewalk mobility across two interactions separated by a few hours. Findings from the self-reported surveys suggest that prior experience with ADAS features and participants' preferred driving styles influenced dimensions of trust.
Advanced driver assistance systems (ADAS) need to account for the driver’s awareness of the environment to be effectively used. This study examines the impact of environmental features (e.g., visual complexity, object density, roadway type, lighting) on drivers’ situation awareness (SA). This is achieved using a controlled study with 40 participants. Using a split-plot design, the participants were shown 30 out of 75 real-world driving scenarios displayed in a driving simulator environment. Participants’ responses to Situational Awareness Global Assessment Technique (SAGAT) queries on the type and coordinates of objects in the scene were used to calculate SA scores. A hurdle model was developed to estimate participants’ SA scores. The key findings highlight visual complexity as a significant predictor of SA scores. This predictor was easy to compute and able to capture the complexity of objects that impact road safety as well as the visual clutter in the background. The model showed that drivers were able to identify at least one object of interest in complex environments with high visual complexity and with many objects. A higher proportion of vulnerable road users was associated with a greater likelihood of a non-zero SA score, but the SA score was lower compared to environments with higher proportions of cars. The findings of this study provide insights into the environmental factors to be considered for SA predictive models.
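As a rough illustration of the hurdle-model idea (not the study's fitted model), the SA score can be modeled in two parts: a logistic hurdle for whether the score is non-zero, and a separate model for its magnitude when it is. All predictors and coefficients below are invented placeholders:

```python
import math

# Two-part hurdle model sketch for SA scores.
# Part 1: logistic model for whether the SA score is non-zero.
# Part 2: linear model for the score magnitude given it is non-zero.
# Coefficients are made-up placeholders, not the study's fitted values.

HURDLE_COEFS = {"intercept": 2.0, "visual_complexity": -0.8,
                "n_objects": 0.05}        # logit of P(score > 0)
POSITIVE_COEFS = {"intercept": 0.9, "visual_complexity": -0.12,
                  "n_objects": -0.01}     # E[score | score > 0]

def _linear(coefs, features):
    return coefs["intercept"] + sum(coefs[k] * v
                                    for k, v in features.items())

def expected_sa(features):
    """E[score] = P(score > 0) * E[score | score > 0]."""
    p_nonzero = 1.0 / (1.0 + math.exp(-_linear(HURDLE_COEFS, features)))
    mean_positive = _linear(POSITIVE_COEFS, features)
    return p_nonzero * mean_positive

simple = expected_sa({"visual_complexity": 0.2, "n_objects": 5})
complex_ = expected_sa({"visual_complexity": 2.5, "n_objects": 20})
assert complex_ < simple  # higher visual complexity lowers expected SA
```

The appeal of the two-part structure is that the zero/non-zero decision and the score magnitude can respond to the same predictor in different directions, which matches the vulnerable-road-user finding above.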
We propose a framework for detecting user driving style preference from multimodal signals, so that an autonomous vehicle can adapt its driving style to the driver’s preferences automatically. A mismatch between the automated vehicle’s driving style and the driver’s preference can lead to more frequent takeovers or even disabling of the automation features. We collected multimodal data from 36 human participants on a driving simulator, including eye gaze, steering grip force, driving maneuvers, brake and throttle pedal inputs, foot distance from pedals, pupil diameter, galvanic skin response, heart rate, and situational drive context. Based on these data, we constructed a data-driven framework using convolutional Siamese neural networks (CSNNs) to identify preferred driving styles. The model’s performance significantly improves on that reported in the existing literature. In addition, we demonstrated that the proposed framework can improve model performance without retraining the network, using data from target users. This result validates the potential of online model adaptation through continued driver-system interaction. We also performed an ablation study on sensing modalities and present the importance of each data channel.
This study aimed to investigate the impact of automated vehicle (AV) interaction mode on drivers’ trust and preferred driving styles in response to pedestrian- and traffic-related road events. The rising popularity of AVs highlights the need for a deeper understanding of the factors that influence trust in AVs. Trust is a crucial element, particularly because current AVs are only partially automated and may require manual takeover; miscalibrated trust could have an adverse effect on safe driver-vehicle interaction. However, before attempting to calibrate trust, it is vital to comprehend the factors that contribute to trust in automation. Thirty-six individuals participated in the experiment. Driving scenarios incorporated adaptive SAE Level 2 AV algorithms, driven by participants’ event-based trust in AVs and preferences for AV driving styles. The study measured participants’ trust, preferences, and the number of takeover behaviors. Higher levels of trust and preference for more aggressive AV driving styles were found in response to pedestrian-related events compared to traffic-related events. Furthermore, drivers preferred the trust-based adaptive mode and had fewer takeover behaviors than with the preference-based adaptive and fixed modes. Lastly, participants with higher trust in AVs favored more aggressive driving styles and made fewer takeover attempts. Adaptive AV interaction modes that depend on real-time event-based trust and event types may represent a promising approach to human-automation interaction in vehicles. Findings from this study can support future driver- and situation-aware AVs that adapt their behavior for improved driver-vehicle interaction.
While trust in different types of automated vehicles has been a major focus for researchers and vehicle manufacturers, few studies have explored how people trust automated vehicles that are not cars, nor how their trust may transfer across different mobilities enabled with automation. To address this objective, a dual mobility study was designed to measure how trust in an automated vehicle with a familiar form factor—a car—compares to, and influences, trust in a novel automated vehicle—termed sidewalk mobility. A mixed-method approach involving both surveys and a semi-structured interview was used to characterize trust in these automated mobilities. Results found that the type of mobility had little to no effect on the different dimensions of trust that were studied, suggesting that trust can grow and evolve across different mobilities when the user is unfamiliar with a novel automated driving-enabled (AD-enabled) mobility. These results have important implications for the design of novel mobilities.
A key factor in optimal acceptance of and comfort with automated vehicle features is the driving style. Mismatches between the automated and the driver’s preferred driving styles can make users take over more frequently or even disable the automation features. This work proposes identifying user driving style preference from multimodal signals, so that the vehicle can match user preference in a continuous and automatic way. We conducted a driving simulator study with 36 participants and collected extensive multimodal data including behavioral, physiological, and situational signals: eye gaze, steering grip force, driving maneuvers, brake and throttle pedal inputs, foot distance from pedals, pupil diameter, galvanic skin response, heart rate, and situational drive context. We then built machine learning models to identify preferred driving styles and confirmed that all modalities are important for identifying user preference. This work paves the way for implicitly adaptive driving styles in automated vehicles.
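The Siamese idea behind such preference identification can be sketched as follows: one shared encoder embeds two signal windows, and a contrastive loss pulls same-preference pairs together and pushes different-preference pairs apart. The encoder here is a toy linear map with made-up weights standing in for the convolutional network; nothing below comes from the paper itself:

```python
import math

# Toy Siamese/contrastive sketch. A single shared encoder (here a
# linear map with placeholder weights, standing in for a CNN) embeds
# both inputs; the contrastive loss shapes the embedding space.

W = [[0.5, -0.2, 0.1],
     [0.3, 0.4, -0.1]]          # shared weights (placeholder values)

def encode(x):
    """Shared linear encoder: embedding[i] = sum_j W[i][j] * x[j]."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def distance(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def contrastive_loss(x1, x2, same_label, margin=1.0):
    """Pull matching pairs together, push mismatched pairs apart."""
    d = distance(encode(x1), encode(x2))
    if same_label:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Identical signal windows from the same preference class incur
# zero loss; labeling the same pair as different incurs the margin.
x = [1.0, 0.5, 0.2]
assert contrastive_loss(x, x, True) == 0.0
assert contrastive_loss(x, x, False) > 0.0
```

In the actual framework the encoder would be trained on the multimodal channels listed above; this sketch only shows the pairwise loss structure.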
The objective of this study is to assess drivers’ ability to detect objects and the trajectories of these objects in scenarios with different environmental complexity levels. This is examined in the context of situation awareness (SA), defined as the perception, comprehension, and projection of environmental properties and positions. The Situation Awareness Global Assessment Technique (SAGAT) was used in a video-based driving simulation study, where participants were asked to mark all objects in the order of perceived risk and select the corresponding object type. This provided spatially continuous SA responses for the objects of interest (i.e., pedestrians, cars, and cyclists). The findings showed that object type and size, visual complexity, number of objects, and roadway type had a significant impact on the operator’s ability to perceive objects as well as to project the object trajectories. The results provide some insights into choosing predictors besides eye-tracking data for SA predictive models.
As autonomous vehicles (AVs) increase in popularity, it is necessary to understand the factors that may affect drivers’ willingness to use AVs and their performance when using them. Trust is one of these factors, especially because current AVs are partially automated and may require drivers to take over; miscalibrated trust may impact safe driver-vehicle interaction. This study aimed to test the effects of adaptive AV driving modes under different road events on drivers’ trust, and to observe the corresponding driving behaviors. Twelve people participated in SAE Level 2 simulated drives and experienced a set of pedestrian- and traffic-related events under adaptive AV modes. Overall, drivers had marginally higher trust levels when the vehicle drove through pedestrian-related events. Additionally, drivers with higher trust in AVs preferred a more aggressive driving style and exhibited a reduced number of takeover attempts. Findings from this study can inform the design of future AVs.
In automated driving, it is important to maintain drivers’ situational awareness (SA) in order to help them avoid unnecessary interventions and negotiate challenging scenarios where human takeovers are needed. Our study developed computational models to predict a driver’s SA of a target object. Using the SEEV (Salience, Effort, Expectancy, and Value) and ACT-R (Adaptive Control of Thought-Rational) frameworks, the model achieved an accuracy of 78.3%, an F1-score of 0.66, and an area under the receiver operating characteristic curve (AUROC) of 0.773 with object features as inputs. On average, the model had a root mean square error (RMSE) of 0.18 when predicting the SA of a target object across participants. Relative to existing models, our model not only had comparable predictive performance but also considered the underlying mechanism of SA, increasing model interpretability. Our research provides essential steps toward developing in-vehicle SA prediction and assistance systems.
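For readers unfamiliar with SEEV, the way such a model allocates attention across areas of interest can be illustrated with a small worked example; the weights, areas, and scores below are invented for illustration and are not the study's fitted parameters:

```python
# SEEV-style attention allocation sketch: the tendency to attend to an
# area of interest (AOI) is a weighted combination of Salience,
# (negative) Effort, Expectancy, and Value, normalized across AOIs.
# All weights and AOI scores below are illustrative placeholders.

WEIGHTS = {"salience": 1.0, "effort": 0.5,
           "expectancy": 1.0, "value": 1.5}

def seev_score(aoi):
    return (WEIGHTS["salience"] * aoi["salience"]
            - WEIGHTS["effort"] * aoi["effort"]      # effort deters
            + WEIGHTS["expectancy"] * aoi["expectancy"]
            + WEIGHTS["value"] * aoi["value"])

def attention_distribution(aois):
    """Normalize non-negative SEEV scores into attention proportions."""
    scores = {name: max(seev_score(a), 0.0) for name, a in aois.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

aois = {
    "pedestrian": {"salience": 0.8, "effort": 0.2,
                   "expectancy": 0.6, "value": 0.9},
    "billboard":  {"salience": 0.9, "effort": 0.1,
                   "expectancy": 0.2, "value": 0.1},
}
p = attention_distribution(aois)
assert p["pedestrian"] > p["billboard"]  # high-value hazard wins
```

A predicted attention distribution like this can then feed an SA estimate for each target object, which is the role SEEV plays in the model described above.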
Augmented reality (AR) head-up display (HUD) can be a promising solution to increase drivers' situation awareness (SA) and their trust in automation. However, literature on the limitations of drivers' attention is largely lacking when it comes to designing AR-based assistive interfaces for partially automated driving systems (ADS). The major question that needs to be answered is: "What and how much information should be presented to drivers in order to see the benefits of AR-HUDs?" A within-subjects experimental design was utilized in this study. The manipulated variables were a combination of (1) cueing strategy, (2) driving task, and (3) scenario complexity. A total of 36 adults (18 M/18 F) participated in two laboratory-based studies where data was collected using both quantitative and qualitative measures. The preliminary results from some of the subjective measures are provided in this paper. Key aspects of how this data is useful in assessing the impact of different cueing strategies on workload are also established.
Shared autonomous vehicles (SAVs) will be introduced in greater numbers over the coming decade. Due to rapid advances in shared mobility and the slower development of fully autonomous vehicles (AVs), SAVs will likely be deployed before privately-owned AVs. Moreover, existing shared mobility services are transitioning their vehicle fleets toward those with increasingly higher levels of driving automation. Consequently, people who use shared vehicles on an “as needed” basis will have infrequent interactions with automated driving, thereby experiencing interaction gaps. Using human trust data from 25 participants, we show that interaction gaps can affect human trust in automated driving. Participants engaged in a simulator study consisting of two interactions separated by a one-week interaction gap. A moderate, inverse correlation was found between the change in trust during the initial interaction and the interaction gap, suggesting people “forget” some of their gained trust or distrust in automation during an interaction gap.
As more vehicles on the road are equipped with driver assistance technologies, research on perceived driver discomfort in highly automated vehicles has emerged. Driver discomfort estimation is important for automated feature acceptance, user satisfaction, and economic factors. Existing literature has mainly focused on detecting driver takeovers, because drivers are more likely to take over when experiencing a high level of discomfort. This paper investigates detecting driver discomfort through both takeovers and physiological spikes, in order to include more subtle and latent discomfort. We present a multimodal dataset from an automated vehicle simulator study, with eye gaze and physiological measurements. The dataset includes 32 participants, and each experiment lasted 120 minutes. We recorded their takeover intentions and extracted the physiological spikes that were related to driver discomfort. Machine learning models were then built to detect driver takeovers, physiological spikes, and both together through multi-task learning. The models showed good performance on takeover and physiological spike detection. The multi-task learning-based detection had improved performance, indicating correlations between takeovers and physiological spikes. Our results demonstrate the potential of driver discomfort prediction from a driver’s physiological and behavioral data, and indicate that an AD system could learn user preferences from the driver without explicit takeovers.
Computational models embedded in advanced driver assistance systems (ADAS) require insights on drivers’ perception and understanding of their environment. This is particularly important as vehicles become increasingly automated and the partnership between the controllers (driver or vehicle) needs to be attentive to each other’s future intentions. This study investigates the impact of environmental factors (road type, lighting) on driver situation awareness (SA) using 75 real-world driving scenes viewed within a driving simulator environment. The Situational Awareness Global Assessment Technique (SAGAT) was adopted to compute SA scores from spatially continuous data. A hurdle model showed that visual complexity, which was not considered in previous SA prediction models, significantly impacted driver SA. The number of objects in the visual scene as well as in the peripheral view were also found to significantly affect driver SA. The findings of this study provide insights on environmental factors that may impact SA predictions.
Users prefer different driving styles (more defensive or more aggressive) for their autonomous vehicle (AV). This preference depends on multiple factors, including the user’s trust in the AV and the scenario. Understanding users’ preferred driving styles and takeover behavior can assist in creating comfortable driving experiences. In this driving simulator study, participants were asked to interact with L2 driving automation under different driving style adaptations. We analyze the effects of different AV driving style adaptations on users’ survey responses. We propose linear and generalized linear mixed-effect models for predicting the user’s preference and takeover actions. Results suggest that trust plays an important role in determining users’ preferences and takeover actions. The scenario, brake presses, and the AV’s aggressiveness level are also among the main factors correlated with users’ preferences. The results provide a step toward developing human-aware driving automation that can implicitly adapt its driving style based on the user’s preference.
With the advent of advanced safety features and automated vehicles, driver safety has become critical in situations where the human is expected to disengage or drive only partially. It is therefore vital to understand driver profiles when developing systems that can adapt to the user and that the user can trust. Understanding a driving profile is challenging, as it is composed of several factors, including driving style, mood states, and personality traits. To model driver profiles, this paper proposes a comprehensive framework. A total of 28 licensed male drivers between the ages of 21 and 40 participated in the study; their driving behavior was recorded to create an integrated dataset, and mood states and personality traits were collected via surveys. A fuzzy logic inference system identified driving styles from this integrated dataset. The relationship between driving styles and mood states was examined, and a random forest prediction model was developed for driving styles and personality types (obtained through clustering). Ultimately, the predictions can be utilized for risky driving style detection and driver preference sharing for Mobility-as-a-Service purposes.
A lack of sufficient situational awareness is a primary cause of traffic crashes due to human error. Redirecting a driver’s attention to critical objects is essential, but alerting the driver about all critical objects can lead to distraction. This paper develops and evaluates an adaptive support system that incorporates drivers’ fixations as a proxy for their situational awareness. We implement an experimental system that detects a driver’s gaze on important objects in the traffic scene and adapts the cueing strategy in an augmented reality-based driver awareness assistance interface. We collect and analyze data from 15 participants and show that our adaptive support strategy is effective without increasing the drivers’ cognitive workload. Finally, we show that our system can increase the ratio of drivers’ fixations on critical objects in their view without significantly increasing dwell time per object.
Although partially autonomous driving (AD) systems are already available in production vehicles, drivers are still required to maintain a sufficient level of situational awareness (SA) during driving. Previous studies have shown that providing information about the AD's capability using user interfaces can improve the driver's SA. However, displaying too much information increases the driver's workload and can distract or overwhelm the driver. Therefore, to design an efficient user interface (UI), it is necessary to understand its effect under different circumstances. In this paper, we focus on a UI based on augmented reality (AR), which can highlight potential hazards on the road. To understand the effect of highlighting on drivers' SA for objects with different types and locations under various traffic densities, we conducted an in-person experiment with 20 participants on a driving simulator. Our study results show that the effects of highlighting on drivers' SA varied by traffic densities, object locations and object types. We believe our study can provide guidance in selecting which object to highlight for the AR-based driver-assistance interface to optimize SA for drivers driving and monitoring partially autonomous vehicles.
Despite recent advancements in driver assistance systems, most existing solutions and partial automation systems, such as SAE Level 2 driving automation, assume that the driver is in the loop; the human driver must continuously monitor the driving environment. Frequent transitions of maneuver control between the driver and the car are expected while using such automation in difficult traffic conditions. In this work, we aim to predict driver takeover timing so that the system can prepare the transition from automation to driver control. While previous studies indicated that eye gaze is an important cue for predicting driver takeover, we hypothesize that the traffic conditions as well as the reliability of the driving automation also have a strong impact. Therefore, we propose an algorithm that jointly considers the driver’s gaze information and the contextual driving environment, complemented with vehicle operational and driver physiological signals. Specifically, we consider a joint embedding of traffic scene information and gaze behavior using a 3D Convolutional Neural Network (3D-CNN). We demonstrate that our algorithm successfully predicts driver takeover intent, using user study data from 28 participants collected in simulated driving environments.
As autonomous vehicles (AVs) become ubiquitous, users’ trust will be critical for the successful adoption of such systems. Prior works have shown that the driving styles of AVs can impact how users trust and rely on such systems. However, users’ preferred driving style may vary with changes in trust, road conditions, experience, and personal driving preferences. We explore methods to adapt the driving style of an AV to match the preferred driving style of users, to improve their trust in the vehicle. We conducted a pilot study (n = 16) in a simulated urban environment, where users experienced various static and adaptive driving styles for different pedestrian- and traffic-related scenarios. Our results indicate that users best trust AVs that closely match their preferences (p < 0.05). We believe that exploring the effects of AV driving style on users’ trust and workload will provide necessary steps toward developing human-aware automated systems.
Interaction research has initially focused on partially and conditionally automated vehicles. Augmented reality (AR) may provide a promising way to enhance drivers' experience when using autonomous driving (AD) systems. This study sought to gain insights into drivers' subjective assessment of a simulated driving automation system with AR-based support. A driving simulator study was conducted, and participants' ratings of the AD system in terms of information imparting, nervousness, and trust were collected. Cumulative Link Models (CLMs) were developed to investigate the impacts of AR cues, traffic density, and intersection complexity on drivers' attitudes toward the presented AD system. Random effects were incorporated in the CLMs to account for heterogeneity among participants. Results indicated that AR graphic cues could significantly improve drivers' experience by providing advice for their decision-making and mitigating their anxiety and stress. However, the magnitude of AR's effect was impacted by traffic conditions (i.e., it diminished at more complex intersections). The study also revealed a strong correlation between self-rated trust and takeover frequency, suggesting that takeover and other driving behaviors need to be further examined in future studies.
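For readers unfamiliar with Cumulative Link Models, the proportional-odds form they typically take can be illustrated with a small worked example; the thresholds, predictors, and coefficients below are invented, not the study's estimates (and the random effects used in the study are omitted):

```python
import math

# Cumulative Link Model (proportional-odds) sketch: the probability
# that an ordinal rating Y is at most category j is
# logistic(theta_j - x.beta). Thresholds and coefficients are
# illustrative placeholders, not fitted values.

THRESHOLDS = [-1.0, 0.5, 2.0]   # theta_1 < theta_2 < theta_3 (4 cats)
BETA = {"ar_cue": 0.8, "traffic_density": -0.6}

def _logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(features):
    """P(Y = j) from cumulative probabilities P(Y <= j)."""
    eta = sum(BETA[k] * v for k, v in features.items())
    cum = [_logistic(t - eta) for t in THRESHOLDS] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

with_ar = category_probs({"ar_cue": 1, "traffic_density": 0.5})
without = category_probs({"ar_cue": 0, "traffic_density": 0.5})
assert abs(sum(with_ar) - 1.0) < 1e-9
# With a positive (placeholder) AR-cue coefficient, probability mass
# shifts toward the highest rating category.
assert with_ar[-1] > without[-1]
```

The single coefficient vector shifting all cumulative thresholds at once is the "proportional odds" assumption; random intercepts per participant would be added to `eta` to capture the heterogeneity mentioned above.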
Properly calibrated human trust is essential for successful interaction between humans and automation. However, while human trust calibration can be improved by increased automation transparency, too much transparency can overwhelm the human and increase workload. To address this tradeoff, we present a probabilistic framework using a partially observable Markov decision process (POMDP) for modeling the coupled trust-workload dynamics of human behavior in an action-automation context. We specifically consider hands-off Level 2 driving automation in a city environment involving multiple intersections, where the human chooses whether or not to rely on the automation. We consider automation reliability, automation transparency, and scene complexity, along with human reliance and eye-gaze behavior, to model the dynamics of human trust and workload. We demonstrate that our model framework can appropriately vary automation transparency based on real-time human trust and workload belief estimates to achieve trust calibration.