Scientist: Multimodal Signal Processing

Job Number: P20T15
This position offers the opportunity to conduct innovative research on a broad set of problems in multimodal temporal data understanding for human state sensing in future mobility applications.
San Jose, CA

Key Responsibilities:

  • Propose, design, and implement supervised and unsupervised data understanding algorithms for multimodal and multisensory data streams obtained from human and environmental monitoring sensors.
  • Develop and evaluate metrics to verify the reliability of the proposed algorithms.
  • Participate in ideation, creation, and evaluation of related technologies in various mobility-oriented domains.
  • Contribute to a portfolio of patents, academic publications, and prototypes to demonstrate research value.
  • Participate in data collection, sensor calibration, and data processing. Compare learned versus engineered features for time-series data. Implement state-of-the-art classification and regression models.

Minimum Qualifications:

  • Ph.D., or an equivalent level of knowledge, in computer science, electrical engineering, or a related field.
  • Research experience in multimodal sensory signal processing (e.g., vision, speech, vehicle sensory data, human behavioral data & physiology), and machine learning.
  • Strong familiarity with machine learning techniques pertaining to sequential data processing.
  • Experience with open-source deep learning frameworks such as TensorFlow and PyTorch.
  • Highly proficient in software engineering using Python.
  • Strong written and oral communication skills including development and delivery of presentations, proposals, and technical documents.
  • Representative publications in signal processing and/or machine learning.

Duration: 3 years



How to apply

Candidates must have the legal right to work in the U.S.A. Please submit your cover letter and CV as a single document.
