[Computer Vision] Domain Adaptation and Video Style Transfer

Job Number: P20INT-36
San Jose, CA

Inferences and results obtained in a simulated driving environment typically differ from those obtained using real-world data.

This position investigates how to use domain adaptation and video style transfer to adapt driving scenes from a simulated environment into photo-realistic videos, and to analyze human perception of the synthesized videos through user studies. You are expected to:

  • Implement state-of-the-art domain adaptation and video style transfer algorithms.
  • Evaluate the performance of the algorithms based on human perception.
  • Analyze the impact on transfer learning for driving-related tasks.

Qualifications:

  • M.S. or Ph.D. candidate in computer science or a related STEM field.
  • Familiarity with and research experience in domain adaptation and style transfer.
  • High proficiency in software engineering with C++ and/or Python.

Bonus Qualifications:

  • Experience with deep learning software like TensorFlow or PyTorch.
  • Experience in behavioral research and human-computer interaction.
  • Familiarity with driving datasets.

Duration: 3 months

How to apply

Candidates must have the legal right to work in the U.S.A. Please include your cover letter and CV in the same document.
