[Human-Machine Interaction] Vision and Language Navigation

Job Number: P20INT-29
Location: San Jose, CA

This project focuses on developing vision-and-language algorithms to advance research in vision-language navigation. You are expected to:

  • Develop algorithms that interpret visually grounded natural-language instructions and perform driving or indoor navigation tasks based on the input text.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong familiarity with computer vision, natural language processing, and machine learning techniques pertaining to vision-and-language navigation.
  • Experience with open-source deep learning frameworks (TensorFlow or PyTorch).
  • Excellent programming skills in Python.

Duration: 3 months

How to apply

Candidates must have the legal right to work in the U.S.A. Please include your cover letter and CV in the same document.
