Situated reference resolution using visual saliency and crowdsourcing-based priors for a spoken dialog system within vehicles

Journal Article

Abstract

In this paper, we address issues in situated language understanding in a moving car. More specifically, we propose a reference resolution method to identify user queries about specific target objects in their surroundings. We investigate methods of predicting which target object is likely to be queried given a visual scene, and what kinds of linguistic cues users naturally provide to describe a given target object in a situated environment. We propose methods that incorporate the visual saliency of the scene as a prior. Crowdsourced statistics of how people describe an object are also used as a prior. We collected situated utterances from drivers using our research system, which was embedded in a real vehicle. We demonstrate that the proposed algorithms improve the target identification rate by 15.1% absolute over a baseline method that does not use the visual saliency-based prior and depends on a public database with a limited number of object categories.
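
The abstract does not spell out the scoring model, but one common way to combine such priors is a Bayesian formulation: the posterior over candidate objects is proportional to a linguistic likelihood (how well the words of the utterance match each object) times a prior built from visual saliency and crowdsourced description statistics. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual implementation; the function name, field names, interpolation weights, and smoothing constant are all assumptions.

```python
import math


def resolve_reference(utterance_words, candidates):
    """Return the candidate object with the highest posterior score.

    Illustrative sketch only. Each candidate is a dict with hypothetical fields:
      - "saliency":   visual saliency of the object in the scene, in [0, 1]
      - "word_probs": crowdsourced P(word | object) description statistics
    """
    best, best_score = None, float("-inf")
    for obj in candidates:
        # Prior: interpolate saliency with a uniform floor so that
        # low-saliency objects are not ruled out entirely.
        prior = 0.8 * obj["saliency"] + 0.2 / len(candidates)

        # Likelihood: sum of log per-word probabilities from the
        # crowdsourced statistics, with a small smoothing floor for
        # words never used to describe this object.
        log_likelihood = sum(
            math.log(obj["word_probs"].get(w, 1e-4)) for w in utterance_words
        )

        score = math.log(prior) + log_likelihood
        if score > best_score:
            best, best_score = obj, score
    return best


# Hypothetical usage: two candidate objects in the driver's view.
candidates = [
    {"name": "red building", "saliency": 0.7,
     "word_probs": {"red": 0.3, "building": 0.4}},
    {"name": "gas station", "saliency": 0.2,
     "word_probs": {"gas": 0.5, "station": 0.5}},
]
print(resolve_reference(["red", "building"], candidates)["name"])
```

Working in log space avoids numerical underflow when utterances are long, and the uniform interpolation term in the prior keeps the model from assigning zero probability to objects the saliency map misses.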

Details

PUBLISHED IN
Computer Speech & Language, Vol. 48
PUBLICATION DATE
01 Mar 2018
AUTHORS
T. Misu