[Robotics] Visuotactile Perception and Deep RL for Contact-Rich Robotic Manipulation

Job Number: P20INT-32
This posting covers multiple positions. The research focuses on using vision and tactile sensor data, exploiting finger-object contact information, to enable robots to manipulate objects in unstructured environments with machine-learning approaches.
San Jose, CA

You are expected to:

  • Explore temporal approaches to track the state of an object over time.
  • Explore deep learning approaches that take advantage of object geometry to improve the estimate of the object's state.
  • Explore deep reinforcement learning and learning from demonstration approaches to robotic manipulation.
  • Implement planning and control algorithms on hardware.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, mechanical engineering, or a related field.
  • Experience in deep learning and other machine learning methods.
  • Good programming skills in either C++ or Python.
  • Experience with the Robot Operating System (ROS).
  • Experience working with sensors and actuators.

Bonus Qualifications:

  • Experience with deep reinforcement learning and sim-to-real approaches.
  • Experience in manipulation, grasping, and tactile sensing.
  • Experience with PyTorch and TensorFlow.
  • Experience with game engines such as Unity and Unreal Engine.
  • Experience with implementation of real-time control algorithms on robotic systems.

Duration: 3 months

How to apply

Candidates must have the legal right to work in the U.S.A. Please submit your cover letter and CV as a single document.
