Job Description
About Luma AI
Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.
About the Role
The Training Infrastructure team at Luma is responsible for building and maintaining the distributed systems that enable training of our large-scale multimodal models across thousands of GPUs. This team ensures our researchers can focus on innovation while having access to reliable, efficient, and scalable training infrastructure that pushes the boundaries of what's possible in AI model development. We are looking for engineers with significant experience solving hard problems in PyTorch, CUDA, and distributed systems. You will work alongside the rest of the research team to build and train cutting-edge foundation models, designed to scale from the ground up, on thousands of GPUs.
Responsibilities
- Design, implement, and optimize efficient distributed training systems that train models across thousands of GPUs
- Research and implement advanced parallelization techniques (FSDP, Tensor Parallel, Pipeline Parallel, Expert Parallel); see the FSDP sketch after this list
- Build monitoring, visualization, and debugging tools for large-scale training runs
- Optimize training stability, convergence, and resource utilization across massive clusters
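To make the parallelization item above concrete, here is a minimal, hypothetical sketch of sharded data-parallel training with PyTorch's FullyShardedDataParallel (FSDP). The toy model, dimensions, and single synthetic step are placeholders rather than anything from Luma's actual stack, and the script assumes a multi-GPU launch via torchrun, which sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.

```python
# Minimal FSDP sketch (hypothetical example, not Luma's training code).
# Launch with: torchrun --nproc_per_node=<num_gpus> train_fsdp.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model standing in for a large multimodal transformer.
    model = nn.Sequential(
        nn.Linear(1024, 4096),
        nn.GELU(),
        nn.Linear(4096, 1024),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # gathering full parameters only for the layers currently being computed.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # One synthetic training step on random data.
    x = torch.randn(8, 1024, device="cuda")
    loss = model(x).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Unlike plain data parallelism, which replicates the full model on every rank, FSDP keeps only a shard of the parameters, gradients, and optimizer state per rank, which is what lets very large models fit in memory on thousand-GPU runs.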
Experience
- Extensive experience with distributed PyTorch training and parallelism strategies for foundation models
- Deep understanding of GPU clusters, networking, and storage systems
- Familiarity with communication libraries (NCCL, MPI) and distributed-systems optimization; see the collective-communication sketch after this list
- (Preferred) Strong Linux systems administration and scripting capabilities
- (Preferred) Experience managing training runs across >100 GPUs
- (Preferred) Experience with containerization, orchestration, and cloud infrastructure
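As an illustration of the communication-library item above, the sketch below performs a single NCCL all-reduce through torch.distributed, the collective that underlies gradient synchronization in data-parallel training. It is a generic example assuming a torchrun launch; it does not reflect Luma's internal tooling.

```python
# Bare-bones NCCL all-reduce via torch.distributed (illustrative sketch).
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_demo.py
import os

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a tensor filled with its rank; after the all-reduce,
    # every rank holds the elementwise sum 0 + 1 + ... + (world_size - 1).
    world_size = dist.get_world_size()
    t = torch.full((4,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)

    expected = world_size * (world_size - 1) / 2
    assert torch.allclose(t, torch.full_like(t, expected))

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```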
About the Job
Posted on: Feb 5, 2026
Apply before: Mar 7, 2026
Job type: Full-time
Category: Research Scientist
Location: Palo Alto, CA