AMD
Company
Principal ML Engineer - Large Scale Training Performance Optimization
San Jose, California
Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

Principal Large Scale Training Performance Optimization Engineer

THE ROLE:
We are looking for a Principal Machine Learning Engineer to join our Models and Applications team. If you are excited by the challenge of distributed training of large models on a large number of GPUs, and if you are passionate about improving training efficiency while innovating and generating new ideas, then this role is for you. You will be part of a world-class team focused on the challenge of training generative AI at scale.

THE PERSON:
The ideal candidate has experience with distributed training pipelines, is knowledgeable in distributed training algorithms (Data Parallel, Tensor Parallel, Pipeline Parallel, Expert Parallel, ZeRO), and is familiar with training large models at scale.

KEY RESPONSIBILITIES:
- Train large models to convergence on AMD GPUs at scale.
- Improve end-to-end training pipeline performance.
- Optimize the distributed training pipeline and algorithms to scale out.
- Contribute your changes to open source.
- Stay up to date with the latest training algorithms.
- Influence the direction of the AMD AI platform.
- Collaborate with various groups and stakeholders across teams.
PREFERRED EXPERIENCE:
- Experience with ML/DL frameworks such as PyTorch, JAX, or TensorFlow.
- Experience with distributed training and distributed training frameworks such as Megatron-LM, MaxText, or TorchTitan.
- Experience with LLMs or computer vision, especially large models, is a plus.
- Experience with GPU kernel optimization is a plus.
- Excellent Python or C++ programming skills, including debugging, profiling, and performance analysis at scale.
- Experience with ML infrastructure at the kernel, framework, or system level.
- Strong communication and problem-solving skills.

ACADEMIC CREDENTIALS:
A master's degree or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

LOCATION:
San Jose, CA or Bellevue, WA preferred. May consider other US markets within proximity of AMD's US offices.

#LI-MV1 #HYBRID

Benefits offered are described in: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here. This posting is for an existing vacancy.