
Efficient ML Engineer Research Intern (AI Platform) - 2026 Start (PhD)

Posted 11 hours ago

Job Description

TEAM INTRODUCTION
The Vision Engineering Team at TikTok is at the forefront of delivering GenAI technologies directly into TikTok products worldwide. Leveraging our proprietary AI infrastructure, we streamline the creation, integration, testing, and deployment of GenAI features. This also encompasses large-scale training stability and optimization for acceleration, as well as large-model inference and multi-node, multi-GPU deployment. Our work enhances the user experience by powering diverse functionalities, including visual enhancements, video editing tools, and creative camera filters, both within TikTok and in other applications.

We are looking for talented individuals to join us for an internship in 2026. PhD Internships at TikTok aim to provide students with the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies.

Internships at TikTok aim to provide students with hands-on experience in developing fundamental skills and exploring potential career paths. A vibrant blend of social events and enriching development workshops will be available for you to explore. Here, you will apply your knowledge in real-world scenarios while laying a strong foundation for personal and professional growth. The internship runs for 12 weeks.

Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis - we encourage you to apply early. Please state your availability clearly in your resume (Start date, End date).

Responsibilities
- Develop algorithm acceleration technologies for text-to-image/text-to-video models through knowledge distillation, model architecture redesign (dynamic MoE routing/sparse attention), and parameter-efficient design (low-bit quantization) to achieve order-of-magnitude efficiency gains.
- Lead generative model innovation with a focus on diffusion acceleration (sampling-step reduction, latent optimization) and autoregressive model efficiency.
- Collaborate cross-functionally to identify performance bottlenecks, optimize vision models via algorithmic breakthroughs, and enhance ByteDance's product capabilities.
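For candidates unfamiliar with the techniques named above, low-bit quantization can be sketched in a few lines. The snippet below is a generic symmetric int8 scheme for illustration only; the function names are hypothetical and do not reflect any internal tooling:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.03, 0.98]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Per-weight reconstruction error is bounded by scale / 2.
```

In practice, production systems use per-channel scales, calibration data, and quantization-aware training; this sketch only shows the core round-to-scale idea.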

Minimum qualifications
- Currently pursuing a PhD in Computer Science, Engineering, or a related quantitative field.
- Proficient in C++/Python and high-performance coding.
- Expertise in diffusion models (Stable Diffusion/DiT) with deep understanding of computational bottlenecks and optimization methodologies.
- Proven experience in at least one of the following domains: model compression (quantization/knowledge distillation), efficient architectures (MoE/sparse attention), or generative alignment (RLHF/DPO).
- Excellent communication and teamwork skills, capable of thriving in a fast-paced work environment.
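One common diffusion optimization referenced in this posting, sampling-step reduction, amounts to running the denoiser on an evenly spaced subset of the training timesteps (a DDIM-style schedule). A minimal sketch, with illustrative names and a simple leading-stride spacing:

```python
def subsample_timesteps(num_train_steps, num_inference_steps):
    """Pick an evenly spaced, descending subset of training timesteps."""
    stride = num_train_steps // num_inference_steps
    # e.g. 1000 training steps at 50 inference steps -> stride 20
    return list(range(0, num_train_steps, stride))[::-1]

steps = subsample_timesteps(1000, 50)  # 50 timesteps, from 980 down to 0
```

Cutting from 1000 to 50 steps reduces the number of denoiser forward passes by 20x; real schedulers (e.g. in HuggingFace Diffusers) additionally adjust the noise schedule to match the coarser spacing.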

Preferred Qualifications
- Kaggle competition achievements, publications at ICML/NeurIPS/CVPR, or open-source contributions (e.g., HuggingFace Diffusers optimization).
- Research experience in GenAI/MLSys areas.
- Familiarity with open-source deep learning frameworks such as PyTorch, DeepSpeed, or JAX.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy




About the job

Posted on

Apr 12, 2026

Apply before

May 12, 2026

Job type: Internship
Category: ML Engineer
Location: San Jose, CA
