Research Scientist: LLMs for Robot Navigation - Honda Research Institute USA


Job Number: P24F09
Honda Research Institute USA is seeking a passionate Research Scientist to join our Cooperative-Intelligence Mobility Research (CMR) group. This role is ideally suited for a researcher with a strong background in multi-agent planning, interaction-aware planning, and foundation models, including Large Language Models (LLMs) and Vision-Language Models (VLMs). The selected candidate will help shape the future of intelligent and resilient mobility systems by developing scalable task and mission planning architectures for heterogeneous robot teams. This includes incorporating LLM-guided hierarchical planning and failure-aware recovery frameworks to enhance robustness and adaptability. Leveraging expertise across robotics, decision theory, deep learning, and control, the candidate will lead research into foundational methods that harness the reasoning, planning, and world modeling capabilities of LLMs. The ultimate goal is to enable autonomous agents to interpret complex instructions, understand spatial environments, and execute navigation tasks in dynamic, unstructured settings.
Mountain View, CA


Key Responsibilities


  • Design and implement deep learning algorithms that translate semantic information into quantitative representations, and vice versa.
  • Design novel task and mission planning strategies for heterogeneous multi-robot systems by leveraging LLMs, VLMs, and hierarchical reasoning architectures.
  • Develop advanced decision-making algorithms that integrate visual inputs, contextual cues, and common-sense reasoning from foundation models to enable robust mobility solutions in complex environments.
  • Conduct research and build machine learning models for interaction-aware trajectory planning and prediction, with a focus on safety, scalability, and generalization.
  • Bridge semantic and quantitative representations by creating deep learning frameworks that translate symbolic knowledge into actionable metrics and vice versa, enabling explainable and adaptable robotic behaviors.
  • Author and publish high-impact research papers in premier conferences and journals in robotics, AI, and machine learning.
  • Collaborate across interdisciplinary teams of researchers and engineers to integrate algorithms with motion planning, control systems, and simulation or hardware platforms.


Minimum Qualifications


  • Ph.D. in Robotics, Electrical Engineering, Computer Science, or a closely related field, with a strong emphasis on planning, control, or machine learning.
  • Familiarity with transformer-based architectures and experience working with multimodal foundation models, including LLMs and VLMs.
  • Deep understanding and hands-on experience with mobility systems, particularly interaction-aware planning and end-to-end autonomous mobility frameworks.
  • Strong foundation in robotics, multi-agent systems, and decision-making under uncertainty, including sequential planning and feedback-driven systems.
  • Hands-on experience developing planning and control algorithms for mobility systems, such as autonomous driving or heterogeneous robot teams.
  • Proficient in Python, with practical experience using modern ML libraries such as PyTorch or TensorFlow.
  • Strong publication record in top-tier robotics and machine learning conferences and journals.
  • Excellent communication skills, with a proven ability to present complex technical concepts clearly in papers, presentations, and collaborative discussions.
  • Highly self-motivated, with the ability to take initiative, work independently, and meet research and development milestones on schedule.


Bonus Qualifications


  • Demonstrated expertise in mission and task planning for robotic systems, especially leveraging foundation models such as Large Language Models (LLMs) and Vision-Language Models (VLMs) for symbolic and semantic reasoning.
  • Proven experience integrating LLMs (e.g., GPT, PaLM, LLaMA) with perception modules, knowledge graphs, or planning algorithms in embodied AI or autonomous systems.
  • Proficiency with robotics middleware and simulation environments, such as ROS, Gazebo, CARLA, or similar platforms for developing and evaluating robotic behaviors.
  • Experience contributing to multi-disciplinary research teams, with a strong emphasis on system integration, evaluation, and deployment in simulation or real-world settings.


Desired Start Date: 11/1/2025
Years of Work Experience Required: 3 - 5 years
Position Keywords: Mission Planning, Planning & Control, LLMs, VLMs, Scene Understanding, PDDL, Task Planning, Knowledge Distillation

Alternate Way to Apply

Send an e-mail to careers@honda-ri.com with the following:
- Subject line including the job number(s) you are applying for
- Recent CV
- A cover letter highlighting relevant background (optional)

Please do not contact our office to inquire about your application status.