(Internship) Research Scientist / Engineer — Foundation Model

Job Description

About Luma AI:
Luma’s mission is to build multimodal AGI. Our research on video, 3D, and now multimodal models has convinced us that AI needs to be trained jointly over all signal modalities – text, video, audio, and images – analogous to the human brain.

To advance our mission, we build and operate the full stack end-to-end, spanning foundation models, inference systems, and products. This integrated approach powers technologies like Ray3, which is seeing rapidly growing adoption among Fortune 500 companies across media, entertainment, and advertising. Backed by a recent $900M Series C and our partnership with Humain to build a 2 GW compute supercluster (Project Halo), our models and the Dream Machine platform are now enabling creatives worldwide to tell some of the most impactful stories of our time.

Where You Come In:
This is a rare and foundational opportunity to define the future of multimodal AI. You will be at the forefront of building and training large-scale multimodal models and systems that complete multimodal work. This role offers the chance to bridge cutting-edge research with magical, shipped products, working end-to-end on novel problems with no existing playbook.

What You'll Do:
This opportunity involves both the “science” and the “engineering” sides of research; the two are equally important.

This is a multi-stack role where you will work at the intersection of modeling, data, systems, and evaluation to build agents that can complete multimodal work end-to-end.
  • Modeling: Architect large-scale multimodal agentic models that use reasoning, planning, coding, and tool calling to achieve complex, multi-step multimodal work.
  • Data: Hill-climb on existing tasks and formulate new ones through data. Design, implement, and run robust data pipelines for constructing, enriching, and filtering agentic datasets.
  • Systems: Train large-scale multimodal agents on massive datasets and GPU clusters.
  • Evaluation: Define and build novel evaluation frameworks to measure multimodal agents.

Who You Are:
  • Strong foundation in machine learning, foundation models, and agentic systems.
  • Deep understanding of agentic systems and of approaches in LLM/VLM reasoning, coding models, and LLM/VLM tool calling.
  • Hands-on experience with PyTorch and large-scale training (distributed, mixed precision, large datasets).
  • Able to commit to a continuous 6-month internship.


What Sets You Apart (Bonus Points):
Experience with any of the following, whether in data, modeling, or evaluation:
  • State-of-the-art foundation models in reasoning
  • State-of-the-art foundation models in coding
  • State-of-the-art foundation models in tool calling
  • State-of-the-art multimodal agents

Your application will be reviewed by real people.

Please mention that you found this job on MoAIJobs; it helps us grow. Thank you!

Luma AI

5 jobs posted

About the job

Posted on

Feb 13, 2026

Apply before

Mar 15, 2026

Job type
Full-time
Location
Palo Alto, California, United States
Skills
PyTorch, LLM
