Reddit
Staff Research Engineer, Pre-training Science
Job Description
Reddit is continuing to grow our teams with the best talent. This role is fully remote-friendly within the United States. If you live close to one of our physical office locations (San Francisco, Los Angeles, New York City, or Chicago), our doors are open for you to come into the office as often as you'd like.
The AI Engineering team at Reddit is embarking on a strategic initiative to build our own Reddit-native foundational Large Language Models (LLMs). This team sits at the intersection of applied research and massive-scale infrastructure, tasked with training models that truly understand the unique culture, language, and structure of Reddit communities. You will be joining a team of distinguished engineers and safety experts to build the "engine room" of Reddit's AI future—creating the foundational models that will power Safety & Moderation, Search, Ads, and the next generation of user products.
As a Staff Research Engineer for Pre-training Science, you will serve as the technical lead for defining the Continual Pre-Training (CPT) strategies that transform generic foundation models into Reddit-native experts. You will bridge the gap between "General Intelligence" and "Community Context," designing scientific frameworks that inject Reddit’s unique knowledge (conversational trees, slang, multimodal memes) into base models without causing catastrophic forgetting. You will define the "learning recipe"—the precise mix of data, hyperparameters, and architectural adaptations needed to build a model that speaks the language of the internet.
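To make the "learning recipe" idea concrete, here is a minimal sketch of what such a recipe might look like as a configuration object with basic sanity checks. Every name and value below (the corpus names, the 30/70 mixture, the hyperparameters) is an illustrative assumption, not Reddit's actual training configuration:

```python
# Hypothetical continual pre-training (CPT) "learning recipe".
# All names and values are illustrative assumptions.

recipe = {
    # Data curriculum: ratio of domain (Reddit) to general text.
    "data_mixture": {"reddit_conversations": 0.30, "general_web": 0.70},
    # Conservative hyperparameters typical of domain adaptation,
    # chosen to limit catastrophic forgetting of the base model.
    "peak_learning_rate": 1e-5,
    "warmup_steps": 2000,
    "sequence_length": 8192,
    "global_batch_size_tokens": 4_000_000,
}

def validate_recipe(r):
    """Basic sanity checks before launching an expensive training run."""
    weights = r["data_mixture"].values()
    assert abs(sum(weights) - 1.0) < 1e-9, "mixture weights must sum to 1"
    assert r["warmup_steps"] > 0, "warmup must be positive"
    assert r["peak_learning_rate"] < 1e-3, "CPT typically uses a low LR"
    return True
```

In practice a recipe like this would be versioned alongside the training code so that every checkpoint can be traced back to the exact data mix and hyperparameters that produced it.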
Responsibilities:
- Architect and validate rigorous Continual Pre-Training (CPT) frameworks, focusing on domain adaptation techniques that effectively transfer Reddit’s knowledge into licensed frontier models.
- Design the "Science of Multimodality": Lead research into fusing vision and language encoders to process Reddit’s rich media (images, video) alongside conversational text threads.
- Formulate data curriculum strategies: scientifically determine the optimal ratio of "Reddit data" to "general data" to maximize community understanding while maintaining safety and reasoning capabilities.
- Conduct deep-dive research into Scaling Laws for Graph-based data: investigating how Reddit’s tree-structured conversations impact model convergence compared to flat text.
- Design and scale continuous evaluation pipelines (the "Reddit Gym") that monitor model reasoning and safety capabilities in real-time, enabling dynamic adjustments to training recipes.
- Drive high-stakes architectural decisions regarding compute allocation, distributed training strategies (3D parallelism), and checkpointing mechanisms on AWS Trainium/Nova clusters.
- Serve as a force multiplier for the engineering team by setting coding standards, conducting high-level design reviews, and mentoring senior engineers on distributed systems and ML fundamentals.
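The data-curriculum responsibility above can be sketched as a weighted sampler that draws training examples from domain and general corpora at a target ratio. This is a toy illustration under stated assumptions (the corpus names and the 30/70 split are hypothetical, and real pipelines operate on tokenized shards, not Python lists):

```python
import random

def mixture_sampler(corpora, weights, n, seed=0):
    """Draw n training examples, choosing each example's source corpus
    according to the target mixture weights (e.g. 30% Reddit / 70% general).
    `corpora` maps a source name to a list of examples."""
    rng = random.Random(seed)
    names = list(corpora)
    out = []
    for _ in range(n):
        src = rng.choices(names, weights=[weights[k] for k in names])[0]
        out.append((src, rng.choice(corpora[src])))
    return out

# Toy corpora standing in for real tokenized training shards.
corpora = {
    "reddit": ["thread_a", "thread_b"],
    "general": ["web_doc_x", "web_doc_y", "web_doc_z"],
}
sample = mixture_sampler(corpora, {"reddit": 0.3, "general": 0.7}, 10_000)
reddit_frac = sum(1 for src, _ in sample if src == "reddit") / len(sample)
```

Over many draws the realized fraction of Reddit-sourced examples converges on the target weight; tuning that weight against evaluation metrics is the essence of the curriculum work described above.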
Required Qualifications:
- 7+ years of experience in Machine Learning engineering or research, with a specific focus on LLM Pre-training, Domain Adaptation, or Transfer Learning.
- Expert-level proficiency in Python and deep learning frameworks (PyTorch or JAX), with a track record of debugging complex training instabilities at scale.
- Deep theoretical understanding of Transformer architectures and Pre-training dynamics—specifically regarding Catastrophic Forgetting and Knowledge Injection.
- Experience with multimodal models (VLMs): understanding how to align image/video encoders (e.g., CLIP, SigLIP) with language decoders.
- Experience implementing continuous integration/evaluation systems for ML models, measuring generalization and reasoning performance.
- Demonstrated ability to communicate complex technical concepts (like loss spikes or convergence issues) to leadership and coordinate efforts across Infrastructure and Data teams.
Nice to Have:
- Published research or open-source contributions in Continual Learning, Curriculum Learning, or Parameter-Efficient Fine-Tuning (LoRA/PEFT).
- Experience with Graph Neural Networks (GNNs) or processing tree-structured data.
- Proficiency in low-level optimization (CUDA, Triton) or distributed training frameworks (Megatron-LM, DeepSpeed, FSDP).
- Familiarity with Safety alignment techniques (RLHF/DPO) to understand how pre-training objectives impact downstream safety.
Benefits:
- Comprehensive Healthcare Benefits and Income Replacement Programs
- 401k with Employer Match
- Global Benefit programs that fit your lifestyle, from workspace to professional development to caregiving support
- Family Planning Support
- Gender-Affirming Care
- Mental Health & Coaching Benefits
- Flexible Vacation & Paid Volunteer Time Off
- Generous Paid Parental Leave
Pay Transparency:
This job posting may span more than one career level.
In addition to base salary, this job is eligible to receive equity in the form of restricted stock units, and depending on the position offered, it may also be eligible to receive a commission. Additionally, Reddit offers a wide range of benefits to U.S.-based employees, including medical, dental, and vision insurance, 401(k) program with employer match, generous time off for vacation, and parental leave. To learn more, please visit https://www.redditinc.com/careers/.
To provide greater transparency to candidates, we share base pay ranges for all US-based job postings regardless of state. We set standard base pay ranges for all roles based on function, level, and country location, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors, including skills, depth of work experience, and relevant licenses/credentials, and may vary from the amounts listed below.
In select roles and locations, the interviews will be recorded, transcribed and summarized by artificial intelligence (AI). You will have the opportunity to opt out of recording, transcription and summarization prior to any scheduled interviews.
During the interview, we will collect the following categories of personal information: Identifiers, Professional and Employment-Related Information, Sensory Information (audio/video recording), and any other categories of personal information you choose to share with us. We will use this information to evaluate your application for employment or an independent contractor role, as applicable. We will not sell your personal information or disclose it to any third party for their marketing purposes. We will delete any recording of your interview promptly after making a hiring decision. For more information about how we will handle your personal information, including our retention of it, please refer to our Candidate Privacy Policy for Potential Employees and Contractors.
Reddit is proud to be an equal opportunity employer, and is committed to building a workforce representative of the diverse communities we serve. Reddit is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If, due to a disability, you need an accommodation during the interview process, please let your recruiter know.
About the job: Jan 27, 2026 – Feb 26, 2026