Augmented Reality Head-up Display - Honda Research Institute USA

Augmented Reality Head-up Display

The goal is to develop driving aids that enhance the driver's situational awareness and give drivers a sense of confidence and trust in the vehicles they are operating.

The idea of creating a 3D Head-up Display (3D-HUD) to turn the windshield into an augmented reality display started with the need for an automobile to communicate with its driver through a modality other than audio and haptics. Audio beeps and alarms, like haptic vibrations, lack specificity about the source of a problem. Speech interfaces, while informative and symbolic in nature, do not have the bandwidth to communicate rapidly changing data, especially in a fast-moving vehicle. Driving is also a highly spatio-temporal task, and the visual modality can communicate these concepts and phenomena in an unambiguous and direct manner. We wanted to create entirely new categories of driving experiences and solutions to address emerging challenges for drivers. In particular, we sought answers to the following questions. How can an autonomous vehicle communicate its intentions to its passengers? How can we improve situational awareness for drivers to help prevent accidents in a stress-free manner? How can we safely enrich a driver’s access to new sources of location-based information while driving?

Publications

IEEE Open Journal of Vehicular Technology 2025
Samuel Thornton, Nithin Santhanam, Rajeev Chhajer, Sujit Dey
IEEE Conference on Decision and Control (CDC) 2024
Sooyung Byeon, Danyang Tian, Jackie Ayoub, Miao Song, Ehsan Moradi Pari, Inseok Hwang
Neural Information Processing Systems (NeurIPS) 2024
Seunggeun Chi, Pin-Hao Huang, Enna Sachdeva, Hengbo Ma, Karthik Ramani, Kwonjoon Lee
NeurIPS 2024
Huao Li, Hossein Nourkhiz Mahjoub, Behdad Chalaki, Vaishnav Tadiparthi, Kwonjoon Lee, Ehsan Moradi-Pari, Charles Michael Lewis, Katia P. Sycara
Robotics and Automation Letters (RA-L) 2024
Jinning Li, Jiachen Li, Sangjae Bae, and David Isele
Conference on Robot Learning (CoRL) 2024, Learning Robot Fine and Dexterous Manipulation Workshop
Thomas Power, Abhinav Kumar, Fan Yang, Sergio Aguilera Marinovic, Soshi Iba, Rana Soltani Zarrin, Dmitry Berenson
Empirical Methods in Natural Language Processing (EMNLP) 2024
Muhan Lin, Shuyang Shi, Yue Guo, Behdad Chalaki, Vaishnav Tadiparthi, Simon Stepputtis, Joseph Campbell, Katia P. Sycara, Ehsan Moradi-Pari
Frontiers in Robotics and AI 2024
Hifza Javed, Weinan Wang, Affan Bin Usman, and Nawid Jamali
International Journal of Robotics Research 2024
Muchen Sun, Francesca Baldini, Pete Trautman, Todd Murphey
Robotics and Automation Letters (RA-L) 2024
Mansur M. Arief, Mike Timmerman, Jiachen Li, David Isele, and Mykel J. Kochenderfer
Nature Communications 15, 10080 (2024)
Xufan Li, Samuel Wyss, Emanuil Yanev, Qing-Jie Li, Shuang Wu, Yongwen Sun, Raymond R. Unocic, Joseph Stage, Matthew Strasbourg, Lucas M. Sassi, Yingxin Zhu, Ju Li, Yang Yang, James Hone, Nicholas Borys, P. James Schuck, Avetik R. Harutyunyan
NeurIPS 2024 Workshop Open-World Agents 2024
Nikki Lijing Kuang, Songpo Li, Soshi Iba
International Conference on Intelligent Robots and Systems (IROS) 2024
Hongyu Li, Snehal Dikhale, Jinda Cui, Soshi Iba, and Nawid Jamali
IROS 2024
Viet-Anh Le, Vaishnav Tadiparthi, Behdad Chalaki, Hossein Nourkhiz Mahjoub, Jovin D’sa, Ehsan Moradi-Pari
arXiv preprint arXiv:2409.09415 (2024)
Ryan Lingo, Martin Arroyo, and Rajeev Chhajer
Conference on Robot Learning (CoRL) 2024
Patrick Naughton, Jinda Cui, Karankumar Patel, and Soshi Iba
European Conference on Computer Vision (ECCV) 2024
Yuchen Yang, Kwonjoon Lee, Behzad Dariush, Yinzhi Cao, Shao-Yuan Lo
European Conference on Computer Vision (ECCV) 2024
Seunggeun Chi, Hyung-gun Chi, Hengbo Ma, Nakul Agarwal, Faizan Siddiqui, Karthik Ramani, Kwonjoon Lee
European Conference on Computer Vision (ECCV) 2024
Shijie Wang, Qi Zhao, Minh Quan Do, Nakul Agarwal, Kwonjoon Lee, Chen Sun