Self-Supervised Learning-Based Multimodal Prediction on Prosocial Behavior Intentions

Abinay Reddy Naini, Zhaobo Zheng, Teruhisa Misu, Kumar Akash

International Conference on Acoustics, Speech and Signal Processing

Human state detection and behavior prediction have seen significant advancements with the rise of machine learning and multimodal sensing technologies. However, predicting prosocial behavior intentions in mobility scenarios, such as helping others on the road, remains an underexplored area. Current research faces a major limitation: no large labeled datasets for prosocial behavior exist, and the available small-scale datasets make it difficult to train deep-learning models effectively. To overcome this, we propose a self-supervised learning approach that harnesses multimodal data from existing physiological and behavioral datasets. By pre-training our model on diverse tasks and fine-tuning it with a smaller, manually labeled prosocial behavior dataset, we significantly enhance its performance. This method addresses the data scarcity issue, providing a more effective benchmark for prosocial behavior prediction and offering valuable insights for improving intelligent vehicle systems and human-machine interaction.
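The abstract describes a two-stage pipeline: self-supervised pre-training of a multimodal encoder on unlabeled physiological and behavioral data, followed by supervised fine-tuning on the small labeled prosocial-intention dataset. The sketch below illustrates that generic pattern in PyTorch; the module names, feature dimensions, and the masked-reconstruction pretext task are all illustrative assumptions, not the paper's actual architecture or training objectives.

import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Fuses physiological and behavioral feature vectors into one embedding."""
    def __init__(self, physio_dim=32, behav_dim=16, embed_dim=64):
        super().__init__()
        self.physio_net = nn.Sequential(nn.Linear(physio_dim, embed_dim), nn.ReLU())
        self.behav_net = nn.Sequential(nn.Linear(behav_dim, embed_dim), nn.ReLU())
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, physio, behav):
        joint = torch.cat([self.physio_net(physio), self.behav_net(behav)], dim=-1)
        return self.fuse(joint)

def pretrain_step(encoder, decoder, physio, behav, optimizer):
    """Self-supervised pretext task (assumed): reconstruct masked physiological input."""
    mask = (torch.rand_like(physio) > 0.3).float()      # randomly hide ~30% of features
    z = encoder(physio * mask, behav)
    loss = nn.functional.mse_loss(decoder(z), physio)   # reconstruct the original signal
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def finetune_step(encoder, head, physio, behav, labels, optimizer):
    """Supervised fine-tuning on the small labeled prosocial-intention dataset."""
    logits = head(encoder(physio, behav))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

if __name__ == "__main__":
    enc = MultimodalEncoder()
    dec = nn.Linear(64, 32)   # reconstruction decoder, used only during pre-training
    head = nn.Linear(64, 2)   # binary head: will / won't act prosocially
    # Stage 1: pre-train encoder + decoder on large unlabeled multimodal data.
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    pretrain_step(enc, dec, torch.randn(8, 32), torch.randn(8, 16), opt)
    # Stage 2: fine-tune encoder + classification head on the small labeled set.
    opt = torch.optim.Adam(list(enc.parameters()) + list(head.parameters()), lr=1e-4)
    finetune_step(enc, head, torch.randn(8, 32), torch.randn(8, 16),
                  torch.randint(0, 2, (8,)), opt)

The key design point this pattern captures is that the encoder's weights carry over from the data-rich pretext task to the data-scarce downstream task, so only the small classification head must be learned from the labeled prosocial examples alone.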
