We are seeking a highly skilled Machine Learning Engineer with deep expertise in developing Bird’s Eye View (BEV) fusion models using multimodal sensor inputs, particularly LiDAR. You will play a central role in designing scalable perception algorithms that integrate data from camera, LiDAR, and radar sensors to support autonomous driving and 3D scene understanding.
Our compensation (cash and equity) is determined by the position, your location, qualifications, and experience.
Responsibilities:
- Design, implement, and optimize BEV-based perception models that fuse camera, LiDAR, and radar inputs.
- Benchmark perception models using large-scale datasets and well-defined quantitative metrics.
- Collaborate cross-functionally with research, data, and deployment engineers to refine models and support real-world applications.
- Maintain a strong focus on performance, robustness, and scalability for deployment in production systems.
Required Skills:
- Master’s or Ph.D. in AI, Computer Science, Electrical Engineering, Robotics, or a related field.
- Proficiency in Python and experience building deep learning pipelines.
- Strong expertise in PyTorch, TensorFlow, or JAX.
- Proven experience with LiDAR-based 3D perception and BEV representation models.
- Deep understanding of multimodal sensor fusion architectures and techniques.
- Familiarity with camera, LiDAR, and radar modalities and their synchronization, calibration, and integration in perception pipelines.
- Solid foundation in computer vision, deep learning, and 3D geometry.
Preferred Skills:
- Industry or academic experience in autonomous vehicle perception, robotics, or related areas.
- Hands-on experience developing deep learning models in real-world or production environments.
- Experience with distributed training, high-performance computing, or GPU acceleration.