
Research Scientist / Engineer — Voice Agents


Job Description

About Luma AI
Luma’s mission is to build multimodal AGI. Through our research on video, 3D, and now multimodal models, we believe that AI needs to be jointly trained over all signal modalities – text, video, audio, and images – analogous to the human brain.

To advance our mission, we build and operate the full stack end-to-end, spanning foundation models, inference systems, and products. This integrated approach powers technologies like Ray3, which is seeing rapidly growing adoption among Fortune 500 companies across media, entertainment, and advertising. Backed by a recent $900M Series C and our partnership with Humain to build a 2 GW compute supercluster (Project Halo), our models and the Dream Machine platform are now enabling creatives worldwide to tell some of the most impactful stories of our time.

Where You Come In
This is a rare opportunity to work at the absolute frontier of creative AI, building the next generation of interactive voice agents. You will join a foundational team responsible for developing the core models that allow humans to converse with AI in real-time with unprecedented realism and expressiveness. Your work will bridge the gap between deep research and magical, shipped products that millions of users will interact with.

What You'll Do
This opportunity involves both the “science” and “engineering” sides of research, so feel free to apply under whichever title fits you best (Research Scientist or Research Engineer).

This is a multi-stack opportunity where you will work at the intersection of modeling, data, systems, and evaluation.
  • Modeling: Build next-generation voice agents that tightly integrate audio understanding (e.g., ASR, diarization, emotion recognition) and audio generation (e.g., TTS, voice conversion) for real-time, interactive use.
  • Data: Design, implement, and run robust data pipelines and training curricula for speech and audio, including large-scale pretraining, fine-tuning, and data quality iteration.
  • Systems: Train large-scale video and audio generative models on massive datasets and GPU clusters, and develop low-latency architectures and inference strategies for streaming, conversational, and on-device deployment.
  • Evaluation: Define and build novel evaluation frameworks for voice agents, covering accuracy, robustness, latency, controllability, and human perceptual quality.

Who You Are
  • A strong background in machine learning and generative modeling.
  • Practical understanding of speech and audio modeling, including representation learning, sequence modeling, and conditioning/control mechanisms.
  • Experience building and training models in PyTorch, including large-scale or latency-sensitive systems.

What Sets You Apart (Bonus Points)
  • Experience with speech or audio understanding tasks (e.g., ASR, diarization, speaker/emotion recognition, audio classification).
  • Experience with speech or audio generation (e.g., TTS, voice conversion, expressive or controllable speech).
  • Familiarity with streaming or real-time inference, model compression, or deployment on consumer hardware.
  • A portfolio of past projects, publications, or open-source contributions demonstrating your work in generative audio or speech AI.

Your application is reviewed by real people.

Please mention that you found this job on MoAIJobs; it helps us grow. Thank you!


About the job

Posted on: Feb 5, 2026

Apply before: Mar 7, 2026

Job type: Full-time

Location: Palo Alto, CA
