Postdoctoral Scientist: Embodied Intelligence and Robot Learning for Dexterous Manipulation

Job Number: P24T07
Honda Research Institute USA (HRI-US) in San Jose, California, is seeking a highly motivated Postdoctoral Scientist to join our robotics and embodied AI research team. In this role, you will advance embodied intelligence and dexterous robotic manipulation through multi-modal representation learning that integrates tactile, force, vision, audio, and language inputs. Our goal is to develop next-generation embodied AI systems with rich, multi-sensory representations for contact-rich physical interactions, pushing the boundaries of physical intelligence in robotics.

We are particularly interested in candidates who can leverage and adapt large-scale pre-trained models, such as vision-language-action (VLA) models and other multi-modal transformers (i.e., robotics foundation models), to build robust representations of robot-object interactions. These learned representations will be integrated into action policies and control architectures, enabling multi-fingered robotic hands to perform precise, in-hand manipulation in unstructured, real-world environments.

The successful candidate will contribute to algorithm development both in simulation and on real hardware, integrate with advanced robotic hand platforms, and help curate multi-modal datasets to drive learning and sim-to-real transfer. Ideal candidates will have a strong background in tactile sensing, ROS, and multi-sensory machine learning algorithms for robotic manipulation and control.
Location: San Jose, CA

Key Responsibilities

  • Design and develop multi-modal representation learning algorithms to support embodied AI and physical intelligence, integrating tactile, force, vision, audio, and language inputs.
  • Adapt and fine-tune large-scale pre-trained models (e.g., vision-language models, foundation models for robotics) to enable physical reasoning and manipulation capabilities.
  • Develop action policies or control networks that directly leverage these learned representations to perform precise, in-hand manipulation using multi-fingered robotic hands.
  • Deploy and validate perception and control algorithms on real robotic platforms, including sensor calibration, system integration, and testing on state-of-the-art multi-fingered hands.
  • Collect and curate multi-modal datasets from both physical hardware and simulation (digital twin) environments to support model training, benchmarking, and sim-to-real transfer.
  • Publish and present research findings at top-tier robotics and AI conferences and journals (e.g., ICRA, RSS, CoRL, NeurIPS).

Minimum Qualifications

  • Ph.D. in computer science, robotics, or a related field.
  • Experience working with tactile, force, and/or visual sensing in robotic manipulation tasks.
  • Strong background in machine learning and experience with deep learning frameworks such as PyTorch or TensorFlow.
  • Proven ability to implement, train, and deploy machine learning models on physical robotic hardware.
  • Strong programming skills in Python or C++.
  • Familiarity with ROS or ROS 2 and experience developing end-to-end systems.

Bonus Qualifications

  • Experience with robotic manipulation, grasping, and control using multi-fingered hands.
  • Hands-on experience with tactile sensing technologies (e.g., taxel-based sensors, GelSight).
  • Familiarity with simulation platforms such as Isaac Sim, Isaac Lab, or MuJoCo.
  • Experience building digital twins or sim-to-real pipelines for robotics.
  • Experience with computer vision and multi-sensory perception.
  • Experience with large models (e.g., VLMs, multi-modal transformers, or foundation models) for perception or control.

Desired Start Date: 1/12/2026
Contract Duration: 3 years
Position Keywords: Embodied AI, Robotics, Perception, Object Manipulation, Representation Learning, Vision, Tactile, Visuo-Tactile, Vision-Language Models, Deep Learning

Alternate Way to Apply

Send an e-mail to careers@honda-ri.com with the following:
- Subject line including the job number(s) you are applying for 
- Recent CV 
- A cover letter highlighting relevant background (Optional)

Please do not contact our office to inquire about your application status.