Develop and deploy ML models for perception/navigation tasks such as object detection, semantic segmentation, tracking, scene understanding, localization, and path prediction.
Design and implement sensor fusion and mapping pipelines that combine vision, depth, LiDAR, IMU, and other signals for robust perception and navigation in dynamic environments.
Build real-time ML inference pipelines optimized for robotic hardware and embedded compute.
Set up data collection, labeling strategies, dataset curation, and synthetic data augmentation for training and evaluation.
Establish metrics, benchmarks, and test frameworks to validate ML models in both simulation and real-world environments.
Collaborate with robotics software engineers to integrate perception and navigation intelligence into autonomy stacks.
Work with operations to analyze field data, diagnose performance gaps, and iterate on model improvements.
Contribute to long-term ML, perception, and navigation architecture decisions, influencing the roadmap for future robots.
Mentor junior ML engineers and help establish strong applied ML practices within the team.
Requirements
Master’s or PhD in Computer Science, Robotics, Machine Learning, or a related field.
5+ years of experience in applied machine learning, computer vision, or robotics perception.
Strong background in deep learning frameworks such as PyTorch, TensorFlow, or JAX.