Develop computer vision and machine learning models for real-time perception systems, enabling tractors to identify crops, obstacles, and terrain in varied, unpredictable conditions.
Build sensor fusion algorithms that combine camera, LiDAR, and radar data, creating robust 3D scene understanding that handles challenges like crop occlusion and GNSS drift (see the fusion sketch after this list).
Optimize models for low-latency inference on resource-constrained hardware, balancing accuracy and performance.
Design and test data pipelines to curate and label large sensor datasets, ensuring high-quality inputs for training and validation, with tools to visualize and debug failures.
Analyze performance metrics and iterate on algorithms to improve the accuracy and efficiency of perception subsystems.
Write production-grade software and optimize models for embedded hardware.
Test systems on real tractors at working farms worldwide and collaborate with team members across the autonomy stack.
Own critical pieces of the perception stack and drive innovations that make the system generalizable, safe, and reliable.
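Below is a minimal sketch of the camera-LiDAR fusion step mentioned above: projecting LiDAR points into the image plane so 3D returns can be associated with pixel-level detections. The transform and intrinsics are illustrative placeholders, not any specific calibration.

```python
# Hypothetical illustration: project LiDAR points into a camera image.
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """points_lidar: (N, 3) xyz in the LiDAR frame.
    T_cam_from_lidar: (4, 4) rigid transform, LiDAR frame -> camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns (M, 2) pixel coordinates and the in-front-of-camera mask."""
    # Homogenize and move points into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]
    # Discard points behind (or too close to) the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]
    # Perspective projection through the intrinsics, then divide by depth.
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return uv, in_front
```

Radar returns expressed as 3D points in a common frame can be projected the same way; associating the projected points with image-space detections is what keeps scene understanding robust to single-sensor failures like occlusion.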
Requirements
An MS or PhD in Computer Science, AI, or a related field, or 5+ years of industry experience building vision-based perception systems.
Deep expertise in developing and deploying machine learning models, particularly for perception tasks such as object detection, segmentation, mono/stereo depth estimation, sensor fusion, and scene understanding.
Strong understanding of integrating data from multiple sensors like cameras, LiDAR, and radar.
Experience handling large datasets efficiently and organizing them for labeling, training, and evaluation.
Fluency in Python and experience with ML/CV frameworks like TensorFlow, PyTorch, or OpenCV, with the ability to write efficient, production-ready code for real-time applications.
Proven ability to design experiments, analyze performance metrics (e.g., mAP, IoU, latency), and optimize algorithms to meet stringent performance requirements in dynamic settings (a minimal IoU sketch follows this list).
Experience architecting multi-sensor ML systems from scratch.
Experience with foundation models for robotics or Vision-Language-Action (VLA) models.
Experience with compute-constrained pipelines, including optimizing models to balance the accuracy vs. performance tradeoff by leveraging TensorRT, model quantization, and similar techniques (a quantization sketch follows this list).
Experience implementing custom operations in CUDA.
Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).
Passion for sustainable agriculture and securing our food supply chain.
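As one concrete example of the evaluation metrics named above, here is a minimal IoU computation for axis-aligned boxes; the (x1, y1, x2, y2) corner format is an assumption for illustration.

```python
# Hypothetical illustration: intersection-over-union for two boxes
# given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```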
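And a sketch of the accuracy-vs-performance workflow: post-training dynamic quantization in PyTorch, one of the compression techniques listed above. The toy model is illustrative; a real deployment would target the actual perception network, and TensorRT export would be a separate step.

```python
# Hypothetical illustration: int8 dynamic quantization with PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).eval()

# Quantize Linear-layer weights to int8; activations stay in float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Spot-check the accuracy cost of quantization on a random input.
x = torch.randn(1, 512)
with torch.no_grad():
    drift = (model(x) - quantized(x)).abs().max()
print(f"max output drift after int8 quantization: {drift.item():.4f}")
```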
Benefits
100% covered medical, dental, and vision for the employee (coverage for a partner, children, or family is additional)
Commuter Benefits
Flexible Spending Account (FSA)
Life Insurance
Short- and Long-Term Disability
401k Plan
Stock Options (Equity)
Collaborative work environment alongside a passionate, mission-driven team!
Unlimited PTO