AMD
AI Model Training Development Engineer
Beijing, China
Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary.

When you join AMD, you’ll discover that the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

Responsibilities
- Develop and optimize core training operators on AMD GPUs (GEMM, GroupedGEMM, Attention, DeepEP, etc.), continuously pursuing state-of-the-art performance (a minimal GEMM kernel sketch follows the qualifications below).
- Conduct in-depth analysis of performance bottlenecks in large-scale model training and drive targeted end-to-end optimizations.
- Collaborate closely with AMD’s software and hardware teams to improve the performance and stability of the ROCm ecosystem.
- Participate in cutting-edge technology research, including but not limited to next-generation GPU hardware, compute-communication operator fusion, and AGI-driven automatic generation of high-performance operators.

Qualifications
- Solid foundation in computer architecture and high-performance computing.
- Proficiency in C/C++, familiarity with GPU programming (HIP/CUDA) and parallel development languages such as Triton, and strong engineering implementation skills.
- Familiarity with parallel computing principles and GPU execution models, with excellent performance analysis and optimization capabilities.
- Understanding of large-model training workflows and hands-on experience with operator-level performance optimization during training.
- Strong teamwork and cross-functional communication skills.

Preferred Qualifications
- Familiarity with the latest GPU architectural features (e.g., AMD CDNA4 / NVIDIA Blackwell) and their performance optimization methodologies.
- Experience in high-performance optimization of core operators (GEMM, Attention, GroupedGEMM, DeepEP, etc.).
- Familiarity with the implementation and performance tuning of communication operators (AllReduce, AllToAll, ReduceScatter, etc.), as in the overlap sketch below.
- Development or research experience in low-precision computation (FP8/FP4, see the FP8 sketch below), compute-communication overlap (CCO), compiler optimizations, or automatic operator generation.
- Experience developing or optimizing large-model training systems (such as Megatron-LM, TorchTitan, etc.).
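For concreteness, the three short sketches below illustrate the kind of work this role describes. They are editorial illustrations, not part of AMD’s posting, and each makes the simplifying assumptions noted in its lead-in.

First, a minimal Triton GEMM kernel of the sort named in the Responsibilities list. This is a baseline sketch only: fixed tile sizes, float16 inputs with float32 accumulation, and no autotuning. A production kernel would tune tiling, data layout, and scheduling per architecture (for example, around CDNA’s MFMA instructions) and would be validated against rocBLAS/hipBLASLt baselines.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def matmul_kernel(
    a_ptr, b_ptr, c_ptr,
    M, N, K,
    stride_am, stride_ak,
    stride_bk, stride_bn,
    stride_cm, stride_cn,
    BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr, BLOCK_K: tl.constexpr,
):
    # Each program instance computes one BLOCK_M x BLOCK_N tile of C.
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    offs_k = tl.arange(0, BLOCK_K)
    a_ptrs = a_ptr + offs_m[:, None] * stride_am + offs_k[None, :] * stride_ak
    b_ptrs = b_ptr + offs_k[:, None] * stride_bk + offs_n[None, :] * stride_bn

    # Accumulate in float32 for accuracy, even with float16 inputs.
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k in range(0, K, BLOCK_K):
        a = tl.load(a_ptrs, mask=(offs_m[:, None] < M) & (offs_k[None, :] + k < K), other=0.0)
        b = tl.load(b_ptrs, mask=(offs_k[:, None] + k < K) & (offs_n[None, :] < N), other=0.0)
        acc += tl.dot(a, b)
        a_ptrs += BLOCK_K * stride_ak
        b_ptrs += BLOCK_K * stride_bk

    c_ptrs = c_ptr + offs_m[:, None] * stride_cm + offs_n[None, :] * stride_cn
    tl.store(c_ptrs, acc.to(tl.float16), mask=(offs_m[:, None] < M) & (offs_n[None, :] < N))

def matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Assumes float16 tensors already resident on a ROCm/CUDA device.
    M, K = a.shape
    K2, N = b.shape
    assert K == K2
    c = torch.empty((M, N), device=a.device, dtype=torch.float16)
    grid = (triton.cdiv(M, 64), triton.cdiv(N, 64))  # matches BLOCK_M/BLOCK_N below
    matmul_kernel[grid](
        a, b, c, M, N, K,
        a.stride(0), a.stride(1),
        b.stride(0), b.stride(1),
        c.stride(0), c.stride(1),
        BLOCK_M=64, BLOCK_N=64, BLOCK_K=32,
    )
    return c
```

Optimization work on such a kernel typically centers on tile shapes, LDS/shared-memory usage, and software pipelining, profiled against the vendor library as a reference.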
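Second, a sketch of compute-communication overlap (CCO) using an asynchronous AllReduce in PyTorch. It assumes a process group has already been initialized (on ROCm builds, the "nccl" backend maps to RCCL), and `independent_compute` is a hypothetical callable standing in for work that does not depend on the gradient being reduced.

```python
import torch
import torch.distributed as dist

def overlapped_grad_allreduce(grad: torch.Tensor, independent_compute):
    # Launch the collective asynchronously; it proceeds on the
    # communication stream while we keep issuing compute work.
    work = dist.all_reduce(grad, op=dist.ReduceOp.SUM, async_op=True)

    # Compute that does not read `grad` overlaps with the collective.
    out = independent_compute()

    # Synchronize only at the point where the reduced gradient is needed.
    work.wait()
    grad.div_(dist.get_world_size())  # SUM -> mean across ranks
    return out
```

Training frameworks generalize this pattern by bucketing gradients and overlapping each bucket’s collective with the remainder of the backward pass, which is the systems-level version of the operator fusion work this role mentions.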
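Third, a sketch of per-tensor scaled FP8 (e4m3) casting, the basic recipe behind low-precision GEMM paths. It assumes a recent PyTorch with the `torch.float8_e4m3fn` dtype and emulates the multiply in float32 for clarity; real FP8 training runs the multiply on the matrix cores and adds machinery such as delayed scaling and amax tracking.

```python
import torch

FP8_MAX = 448.0  # max representable magnitude of float8_e4m3fn

def to_fp8_scaled(x: torch.Tensor):
    # Scale into the representable range, cast, and keep the scale so
    # a downstream product can be rescaled back to higher precision.
    scale = FP8_MAX / x.abs().max().clamp(min=1e-12)
    x_fp8 = (x * scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

# Usage sketch: emulate an FP8 GEMM by dequantizing, then rescaling.
a, b = torch.randn(64, 128), torch.randn(128, 32)
a8, sa = to_fp8_scaled(a)
b8, sb = to_fp8_scaled(b)
# Real kernels multiply directly in FP8; dequantizing to float32 here
# gives a numerically equivalent emulation of the scaled product.
c = (a8.to(torch.float32) @ b8.to(torch.float32)) / (sa * sb)
```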
Benefits
Benefits offered are described at: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.