Currently, HRI-US (Silicon Valley) is offering research internships to highly motivated Ph.D. (and qualified M.S.) students.  Interns will work closely with HRI researchers, and publishing results in academic forums is highly encouraged.  We are looking for candidates with good publication track records and excellent programming skills to join our team! These positions are for the 2021 Spring/Summer.

Material Sciences
San Jose, CA
Machine Learning/AI (Job Number: P20INT-26, P20INT-27, P20INT-45 )
San Jose, CA

Machine Learning for Novel Gas Sensors

(Job Number: P20INT-26)

This position involves processing time series data from novel gas sensors for machine learning and data analysis, and then developing algorithms to classify the sequences. Algorithms will be required to perform multi-label classification/regression, prediction, and latent-space analysis. You are expected to:

  • Process noisy sensor data.
  • Compare learned features vs. engineered features for time series data.
  • Implement state-of-the-art classification and regression models.

Qualifications:

  • M.S. or Ph.D. candidate in computer science or a related STEM field.
  • Strong familiarity and research experience in machine learning, deep learning, and signal processing. 
  • Highly proficient in software engineering using C++ and/or Python.

Bonus Qualifications:

  • Experience with deep learning frameworks such as TensorFlow or PyTorch.
  • Hands-on experience with data processing and analysis.
  • Familiarity with chemistry / chemical sensing.

Duration: 3–6 months

Learning and Control Research

(Job Number: P20INT-27)

Harmonizing formal control-theoretic methods with data-driven approaches may open new opportunities for driving safely through crowded environments. This position explores how data-driven approaches can be effectively integrated into control-theoretic methods, aiming to preserve interpretability while retaining the capacity to handle complex environments. You are expected to:

  • Implement state-of-the-art RL and control theoretic methods (e.g., MPC, convex optimization).
  • Research and develop a mathematical framework that analytically/numerically integrates data-driven algorithms (e.g., deep learning) into formal control methods.

Qualifications:

  • M.S. or Ph.D. candidate in computer science or a related STEM field.
  • Strong familiarity and research experience in deep learning, RL, optimization, and control of robotic systems.
  • Highly proficient in software engineering using C++ and/or Python.

Bonus Qualifications:

  • Experience with deep learning software like TensorFlow or PyTorch.
  • Familiarity with ROS.
  • Hands-on experience with robotic control.

Duration: 3 months

Path Planning and Lane Selection Research

(Job Number: P20INT-45)

Highways are an essential road type in urban transportation systems and one of the major settings where fatal accidents occur. Motion planning in high-speed traffic therefore requires multi-directional perception and prediction of other drivers, as well as timely decision making, and needs to be formulated as a multi-objective problem. This position investigates how formal path planning methods and/or data-driven approaches can be used to build a multi-objective lane selection algorithm. You are expected to:

  • Implement state-of-the-art long-horizon lane selection algorithms (for lane choice and/or gap choice).
  • Research and develop a long-horizon lane selection algorithm that considers risk, driving comfort, and travel time.

Qualifications:

  • M.S. or Ph.D. candidate in computer science or a related STEM field.
  • Strong familiarity and research experience in deep learning, optimization, and control of robotic systems.
  • Highly proficient in software engineering using C++ and/or Python.

Bonus Qualifications:

  • Experience with deep learning software like TensorFlow or PyTorch.
  • Familiarity with ROS.
  • Hands-on experience with robotic control.

Duration: 3 months

 

Human-Machine Interaction (Job Number: P20INT-28, P20INT-29, P20INT-30, P20INT-35, P20INT-42, P20INT-43 )
San Jose, CA

Human Factors for Next Generation Mobility Interfaces

(Job Number: P20INT-28)

This position offers the opportunity to evaluate and optimize the human-machine interface (HMI) for next-generation mobility based on experimental data. You are expected to:

  • Identify metrics and develop a framework to evaluate HMIs, including users' workload, situation awareness, and trust in the system.
  • Create a data analysis (and data mining) plan for both subjective and objective data using a wide variety of quantitative methods.
  • Conduct statistical analyses, construct probabilistic models, and interpret results through the lens of HMI, UX, and social science.
  • Present the results through presentations and scientific publications.

Qualifications:

  • Highly qualified M.S. candidate in a human behavior-related field (human-computer interaction, human factors engineering, cognitive science), psychology, or social science.
  • Familiarity with human-machine interface, interaction/behavioral modeling for automobiles.
  • Experience in hypothesis testing, probabilistic modeling, time series analysis, and survey data analysis.
  • Research experience in automotive HMIs.

Bonus Qualifications:

  • Ph.D. candidate in a human behavior-related field (human-computer interaction, human factors engineering, cognitive science), psychology, or social science.

  • Experience in experimental design and in working with human behavioral and physiological signals from monitoring sensors, such as head-pose detectors, eye trackers, and physiological measurement sensors.

Duration: 3 months

 

Vision and Language Navigation 

(Job Number: P20INT-29)

This project focuses on developing vision and language algorithms to advance research in vision-language navigation. The project involves developing algorithms that interpret visually-grounded natural language instructions and conduct driving/in-house navigation tasks based on the input text. You are expected to:

  • Develop algorithms that interpret visually-grounded natural language instructions and perform navigation tasks based on the input text.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong familiarity with computer vision, natural language processing and machine learning techniques pertaining to vision and language navigation.
  • Experience in open-source deep learning frameworks (TensorFlow or PyTorch).
  • Excellent programming skills in Python.

Duration: 3 months

 

Interaction with Next Generation Mobility 

(Job Number: P20INT-30)

This position focuses on user studies and data analysis of user trust in novel mobility concepts. You are expected to:

  • Design and conduct user studies in simulated mobility systems.
  • Identify metrics and develop a framework to evaluate interaction, including users' workload, situation awareness, and trust in the system.
  • Create a data analysis plan for both subjective and objective data using a wide variety of quantitative methods.
  • Conduct statistical analyses, construct probabilistic models, and interpret results through the lens of HMI, UX, and social science.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in a human behavior-related field (human-computer interaction, human factors engineering, cognitive science), psychology, or social science.
  • Experience with designing and conducting human subject study.
  • Experience in hypothesis testing, probabilistic modeling, time series analysis, and survey data analysis.

Bonus Qualifications:

  • Experience in working with human behavioral and physiological signals from monitoring sensors, such as head-pose detectors, eye trackers, and physiological measurement sensors.

Duration: 3 months

 

Human Behavior Modeling

(Job Number: P20INT-35)

This position focuses on behavioral modeling for social human-machine interaction (interacting with regular or L2 automated vehicles). The project involves human behavior pattern recognition and the development of behavioral models using probabilistic methods. You are expected to:

  • Define behavioral patterns and conduct recognition analysis.
  • Reason about causal relationships between human characteristics and specific behavioral patterns.
  • Build behavior modeling framework to understand and predict interactions between human and machine.
  • Develop more usable machine/deep learning tools to improve system performance and mobility safety.

Qualifications:

  • Highly qualified M.S. candidate in a human behavior-related field (human-computer interaction, human factors engineering, cognitive science), psychology, social science, statistics, or operations/industrial engineering.
  • Familiarity with human-machine interface, interaction/behavioral modeling for automobiles.
  • Knowledge of multivariate statistical methodologies, e.g., causal inference with observational data, longitudinal analysis, classification, dimensionality reduction, clustering, and hierarchical linear (random effects) modeling.
  • Experience in machine learning algorithms.

Bonus Qualifications:

  • Ph.D. candidate in computer science, electrical engineering, mathematics, applied statistics, psychology, cognitive science, human-computer interaction, or a related human behavior field.
  • Experience in applied statistics (e.g., probabilistic and Bayesian models) and machine/deep learning.

Duration: 3 months

 

Biomechanical Simulation of Human Movement

(Job Number: P20INT-42)

The project focuses on research and development of biomechanical simulation technology from motion data obtained from video inputs. The simulation is used for analysis and prediction of human performance in various applications, including human-robot interaction, ergonomic risk assessment automation, and pose/action forecasting. You are expected to:

  • Create a full-body biomechanical simulation pipeline and analyze human motion using the SimTK OpenSim environment.
  • Simulate various kinematic, dynamic, and musculoskeletal performance metrics.
  • Integrate the developed pipeline with other software applications, such as monocular human pose estimation software.
  • Contribute to a portfolio of patents, academic publications, and prototypes to demonstrate research value. 

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, engineering, or related field.
  • Strong familiarity with kinematics and dynamics of human movement.
  • Extensive experience in multi-body dynamic simulation packages.
  • Extensive experience in analysis and synthesis of human motion using the OpenSim environment, preferably with experience using computed muscle control (CMC) algorithms.
  • Strong programming skills in Python and/or C++.

Bonus Qualifications:

  • Experience with deep learning libraries for 3D human pose estimation from video inputs and/or RGBD sensors.

Duration: 3 months

 

HMI Engineer

(Job Number: P20INT-43)

You are expected to:

  • Support motion sickness prototyping (including animations, coding, vehicle installation, and visual/auditory/haptic elements). Conduct mini clinics.

  • Support all department showcase/MCG graphics, posters, animations, etc. for demos.

  • Develop graphics for all HMI needs and/or mobile applications, including mental model design and user testing.

  • Explore emerging driver needs to identify elements that support a new generation of HMI.

Qualifications:

  • Pursuing a degree in data analytics, with a minor in economics.
  • Experience developing graphics for HMI, including animations and 2D design.

  • Experience with CAD-related tasks: 3D modeling, 3D printing, etc.

  • Programming: C#/.NET, Unity 3D, Python, JavaScript, Unreal Engine 4, Datorama, Smartsheet, Qt, Arduino.

Duration: 6 months

Location: Ann Arbor, MI

Robotics (Job Number: P20INT-31, P20INT-32, P20INT-44 )
San Jose, Ann Arbor

Intention Estimation for Teleoperation

(Job Number: P20INT-31)

This position focuses on the development, implementation, and testing of algorithms that model and infer the intention of a human operator during object manipulation by a teleoperated robot, enhancing human-robot interaction performance through probabilistic modeling and machine learning. You are expected to:

  • Conduct literature survey on related work.
  • Build a framework to model human behavior from multi-modal input and a priori knowledge for human intention prediction in a robot teleoperation environment.
  • Design and implement the algorithms both in simulation and on hardware related to robot manipulation including perception, planning, and optimization.
  • Design experiments to evaluate the human-machine interaction performance.
  • Prepare written and oral reports on the code and results.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in robotics, computer science, electrical engineering, or related field.
  • Experience in probabilistic approach, computer vision, path planning, and control of robotic systems.
  • Excellent programming skills in either C++ or Python.
  • Experience in conducting hardware experiments using ROS.

Bonus Qualifications:

  • Experience in conducting human-robot interaction experiments in a teleoperation environment.
  • Experience in robotic telemanipulation and teleoperation.
  • Experience in applied statistics and probabilistic programming language (PPL).
  • Experience with PyTorch and TensorFlow.

Duration: 3 months

Visuotactile Perception and Deep RL for Contact-Rich Robotic Manipulation

(Job Number: P20INT-32)

This title includes multiple positions. The focus of the research is to use vision and tactile sensor data, exploiting finger-object contact information, to enable robots to manipulate objects in unstructured environments using machine-learning approaches. You are expected to:

  • Explore temporal approaches to track the state of an object over time.
  • Explore deep learning approaches that take advantage of object geometry to improve the estimate of the object's state.
  • Explore deep reinforcement learning and learning from demonstration approaches to robotic manipulation.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Experience in deep learning and other machine learning methods.
  • Good programming skills in either C++ or Python.
  • Experience in Robot Operating System (ROS).

Bonus Qualifications:

  • Experience with deep reinforcement learning and sim-to-real approaches.
  • Experience in manipulation, grasping and tactile sensing.
  • Experience with PyTorch and TensorFlow.
  • Experience with game engines such as Unity and Unreal Engine.

Duration: 3 months

Robotics Controls Software Engineer

(Job Number: P20INT-44)

We are seeking a passionate and driven intern to play a crucial role on our Advanced Technical Research team, located at our Ann Arbor, MI office. The intern will support the research, development, and validation of robust algorithms that meet Honda's requirements for designing automotive features, and will support the analysis of vehicle system behavior and plant modeling, including real-time processing of data from on-board systems during vehicle testing and prototyping. You are expected to:

  • Design, develop, evaluate, and tune control algorithms for applications in automated vehicle projects.
  • Test control algorithms in simulation and in vehicles.
  • Assist with software development for different modules of the AV stack.
  • Work on system integration tasks within the software stack.
  • Participate in brainstorming activities related to CAV application project(s).

Qualifications:

  • Must be currently enrolled in a postgraduate program in computer science, computer engineering, electrical engineering, or mechanical engineering, with a focus on robotics, controls, and planning.
  • Proficient in C++, MATLAB, Simulink, and Python.
  • Hands-on experience with Linux.
  • Understanding of the application of control algorithms in the AV field.

Bonus Qualifications:

  • Passion for learning new software tools and languages.
  • Good understanding of advanced control techniques, such as Model Predictive Control, Adaptive Control, Optimal Control, or Intelligent Control, to enable innovative automotive technologies and in-vehicle features.
  • Robot Operating System (ROS) experience.
  • Experience with dSPACE prototyping systems.
  • Prior experience with AV software development and testing.
  • Familiarity with version control tools such as Git/GitHub.

Duration: 4.5 months
Location: Ann Arbor, MI

Computer Vision (Job Number: P20INT-36, P20INT-37, P20INT-38, P20INT-39, P20INT-40, P20INT-41 )
San Jose, CA

Domain Adaptation and Video Style Transfer

(Job Number: P20INT-36)

Inferences and results obtained in a simulated driving environment typically differ from those obtained using real-world data. This position investigates how to use domain adaptation and video style transfer to adapt driving scenes from a simulated environment into photo-realistic videos, and to analyze human perception of the synthesized videos in user studies. You are expected to:

  • Implement state-of-the-art domain adaptation and video style transfer algorithms.
  • Evaluate the performance of the algorithms based on human perception.
  • Analyze the impact on transfer learning for driving-related tasks.

Qualifications:

  • M.S. or Ph.D. candidate in computer science or related STEM field.
  • Familiarity and research experience in domain adaptation and style transfer.
  • Highly proficient in software engineering using C++ and/or Python.

Bonus Qualifications:

  • Experience with deep learning software like TensorFlow or PyTorch.
  • Experience in behavioral research and human-computer interaction.
  • Familiarity with working on driving datasets.

Duration: 3 months

 

Visual Understanding of Traffic Scenes

(Job Number: P20INT-37)

The title includes multiple positions, which focus on developing computer vision and machine learning algorithms to capture the detailed semantics of 2D and/or 3D traffic scenes. You are expected to:

  • Capture the semantics of visual scenes by explicitly modeling objects, their attributes, and their relationships to other objects and the environment.
  • Perform higher-level classification/recognition of dynamic traffic scenes, including place, conditions, and spatial relationships, using temporal event detection, action recognition, and localization.
  • Detect and understand unstructured events that impact navigation, such as disabled vehicles, construction zones, and traffic accidents.
  • Develop and evaluate metrics to verify reliability of the proposed algorithms.
  • Contribute to a portfolio of patents, academic publications, and prototypes to demonstrate research value.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong familiarity with computer vision and machine learning techniques pertaining to scene understanding, image classification, and object detection.
  • Hands-on experience in one or more of the following: scene graphs, spatio-temporal graphs, graph neural networks, visual recognition, video classification.
  • Experience in open-source deep learning frameworks such as TensorFlow or PyTorch preferred.
  • Excellent programming skills in Python or C++.

Duration: 3 months

 

Video Captioning in Traffic Scenes

(Job Number: P20INT-38)
    
This title includes multiple positions, which focus on developing computer vision and machine learning algorithms to generate linguistic descriptions of traffic scene events that are important for the development of advanced driver assistance systems. You are expected to:

  • Use video inputs to generate natural language descriptions of important or unstructured traffic scene events that impact the driver's decision making and motion planning strategies.
  • Participate in creating a dataset to support activities in video-based captioning of traffic scenes.
  • Develop and evaluate metrics to verify reliability of the proposed algorithms.
  • Contribute to a portfolio of patents, academic publications, and prototypes to demonstrate research value.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong familiarity with computer vision and machine learning techniques pertaining to video captioning.
  • Excellent programming skills in Python or C++.

Bonus Qualifications:

  • Familiarity with creating datasets for video captioning, including visual question answering methods, is preferred for one position.
  • Experience in open-source deep learning frameworks such as TensorFlow or PyTorch preferred.

Duration: 3 months

 

Human Action Understanding

(Job Number: P20INT-39)
     
The project focuses on research and development of computer vision and machine learning algorithms toward human action understanding, including action recognition, action segmentation, and human-object interaction detection from video. You are expected to:

  • Develop algorithms for online and offline action recognition and segmentation using supervised and weakly supervised methods.
  • Support development of human-object interaction algorithms from video.
  • Support development of a benchmark dataset for evaluation of results.
  • Develop and evaluate metrics to verify reliability of the proposed algorithms.
  • Contribute to a portfolio of patents, academic publications, and prototypes to demonstrate research value.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong research experience in computer vision and machine learning.
  • Hands-on experience in one or more of the following from video inputs: pose estimation, human object interaction detection, human activity recognition.
  • Excellent programming skills in Python / C++ / MATLAB.

Bonus Qualifications:

  • Experience with contact event detection from video.
  • Experience in open-source deep learning frameworks such as TensorFlow or PyTorch preferred.

Duration: 3 months

Multi-Agent Relational Reasoning in Driving Scenarios

(Job Number: P20INT-40)
     
The title includes multiple positions, which focus on developing computer vision and machine learning algorithms for the analysis, prediction, and understanding of human behavior in various domains, supporting ongoing research on next-generation intelligent mobility systems. You are expected to work on one or more of the following topics:

  • Trajectory prediction
  • Social interaction modeling
  • Important agent identification
  • Intention prediction
  • Uncertainty estimation and quantification

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong research experience in computer vision, machine learning, robotics.
  • Experience in open-source deep learning frameworks such as TensorFlow or PyTorch.

Bonus Qualifications:

  • Hands-on experience in one or more of the following: graph neural networks, graph convolutional networks, probabilistic neural networks, deep generative models (GAN/VAE), reinforcement learning.
  • Publications in top-tier conferences (CVPR, ICCV, ECCV, ICML, NeurIPS, ICLR, ICRA, IROS, etc.).

Duration: 3 months

Human Pose Prediction

(Job Number: P20INT-41)
     
We seek a qualified candidate who will focus on developing computer vision algorithms for modeling the dynamics of human motion and predicting human poses from video sequences. You are expected to:

  • Design and develop new pose prediction algorithms.
  • Implement state-of-the-art methods and evaluate metrics to verify the reliability of the proposed algorithms.
  • Create a benchmark dataset for various applications.

Qualifications:

  • Ph.D. or highly qualified M.S. candidate in computer science, electrical engineering, or related field.
  • Strong research experience in computer vision, machine learning, robotics.
  • Experience in open-source deep learning frameworks such as TensorFlow or PyTorch.

Bonus Qualifications:

  • Hands-on experience in one or more of the following: graph neural networks, probabilistic neural networks, deep generative models (GAN/VAE), reinforcement learning.
  • Publications in top-tier conferences (CVPR, ICCV, ECCV, ICML, NeurIPS, ICLR, ICRA, IROS, etc.).

Duration: 3 months

Other (Job Number: P20INT-46 )
Ann Arbor, MI

Prototyping Engineer

(Job Number: P20INT-46)

Mobility Collaboration Garage (MCG) is a hub for conceptualizing and developing future mobility innovations. You will be an integral part of the team, making sure the MCG facility is ready to support the development and showcase of prototype applications in mobility, such as connected and autonomous vehicles. As part of the MCG team, you will also champion the innovation process, participate in continuous improvement, and develop best practices. You are expected to:

  • Support maintaining the Mobility Collaboration Garage facility, including but not limited to process development, material handling, and parts ordering and procurement.

  • Assist in planning and coordination of Honda's R&D innovation initiatives and events, including but not limited to vendor management, event coordination, and graphics or presentation design.

  • Maintain positive and professional working relationships with internal and external teams to support prototype application development.

  • Participate in continuous improvement and best-practice development with respect to the MCG facility and innovation pipeline.

  • Lead ideation and collaboration internally and with industry partners through projects including but not limited to design thinking workshops/sprints and innovation meetups.

  • Support rapid prototyping of innovative concepts by leveraging a hands-on approach to quickly implement an idea (such as the use of Arduino, 3D printing, machining, etc.).

  • Take ownership of assigned projects, with a demonstrated ability to prioritize and execute multiple projects independently and a high willingness to learn.


Qualifications:

  • Demonstrated interest or coursework in one or more of these mobility technology areas: connected and autonomous vehicles, robotics, human machine interface or user interface design, product design, fabrication (3D printing, CNC, Arduino, etc.).

  • Ability to synthesize business and technical requirements into feasible prototype designs and communicate them using clear reports and presentations.

Bonus Qualifications:

  • Pursuing a bachelor's degree in design, business, engineering, or a related field.

  • Previous experience as an active member of a collaborative work environment (student design teams, etc.).

  • Previous experience organizing or participating in product design sprints (such as student design teams, hackathons, design thinking workshops, etc.).

Duration: 6 months

Location: Ann Arbor, MI

How to apply

Please send an email to careers@honda-ri.com with the following:

  • A subject line including the job number you are applying for
  • A recent CV
  • A cover letter explaining how your background matches the qualifications

Candidates must have the legal right to work in the U.S.A.