Metacognitive AI for Enhancing Reliability of Reasoning - Honda Research Institute USA

Job Number: P24INT-02
This project aims to develop "metacognitive AI" that can self-assess and adapt its behavior to mitigate erroneous behaviors. Specifically, we aim to address the following two issues: (Hallucination) LLMs may generate content that is irrelevant, fabricated, or inconsistent with the input data; (Misalignment) LLMs are prone to generating responses that diverge from human values, ethics, security, or intentions.
San Jose, CA

 

Key Responsibilities

 

  • Develop metacognitive algorithms (assessment and intervention) for enhancing the efficiency and safety of multi-modal foundation models.
  • Align multi-modal foundation models with human preferences, goals, and/or affective states.

 

Minimum Qualifications

 

  • Ph.D. or highly qualified M.S. student in computer science, cognitive science, electrical engineering, robotics, or a related field.
  • Strong familiarity with computer vision, natural language processing, and/or multi-modal learning techniques.
  • Experience with open-source deep learning frameworks (PyTorch, JAX, etc.).

 

Bonus Qualifications

  • Experience with state-of-the-art foundation models.
  • Experience with uncertainty quantification.
  • Publications in top-tier conferences (CVPR, ICCV, ECCV, ACL, EMNLP, ICML, NeurIPS, ICLR, etc.).

 

Years of Work Experience Required: 0
Desired Start Date: 5/19/2025
Internship Duration: 3 Months
Position Keywords: Metacognitive AI, Large Language Model, Hallucination


Alternate Way to Apply

Send an e-mail to careers@honda-ri.com with the following:
- A subject line including the job number(s) you are applying for
- A recent CV
- A cover letter highlighting relevant background (optional)

Please do not contact our office to inquire about your application status.