Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
We are looking for a Fellow/Sr. Fellow Machine Learning Engineer to join our Training at Scale team. If you are excited by the challenge of distributed training of large models across a large number of GPUs, and passionate about improving training efficiency while innovating and generating new ideas, then this role is for you. You will be part of a world-class team focused on the challenge of training generative AI.

THE PERSON:
The ideal candidate has experience with distributed training pipelines, is knowledgeable in distributed training algorithms (data parallel, tensor parallel, pipeline parallel, expert parallel), and is familiar with training large models.

KEY RESPONSIBILITIES:
- Train large models to convergence on AMD GPUs at scale.
- Improve end-to-end training pipeline performance on large-scale GPU clusters.
- Improve end-to-end debuggability on large-scale GPU clusters.
- Design and optimize the distributed training pipeline and software stack to scale out.
- Contribute your changes to open source.
- Stay up to date with the latest training algorithms and frameworks.
- Influence the direction of the AMD AI platform.
- Collaborate with various groups and stakeholders across teams.
PREFERRED EXPERIENCE:
- Strong background in machine learning, distributed systems, or AI infrastructure.
- Proven experience building and optimizing distributed training systems for large models.
- Experience in both model- and application-level development and optimization preferred.
- Strong familiarity with ML frameworks (PyTorch, JAX, TensorFlow) and distributed training frameworks (TorchTitan, Megatron-LM).
- Hands-on expertise with LLMs, recommendation systems, or ranking models.
- Proficiency in Python and C++, including performance profiling, debugging, and large-scale optimization.
- Experience collaborating across hardware, compiler, and system software layers.
- Excellent communication and problem-solving skills.

ACADEMIC CREDENTIALS:
Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field.

LOCATION:
San Jose, CA or Bellevue, WA preferred. Other U.S. locations near AMD offices may be considered.

#LI-MV1 #HYBRID

Benefits offered are described at AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here. This posting is for an existing vacancy.
About the job
Posted on: Feb 17, 2026
Apply before: Mar 19, 2026
Job type: Full-time
Category: ML Engineer
Location: San Jose, CA
Skills: Python, TensorFlow, PyTorch, LLM, generative AI
Similar Jobs
- Paypal: Sr Machine Learning Engineer, New York City, New York ($143K - $212K/yr), posted 14 days ago
- Paypal: Sr Machine Learning Engineer, San Jose, California ($159K - $244K/yr), posted 5 days ago
- Pinterest: Sr. Machine Learning Engineer, Notifications, Toronto, ON, Canada, posted 11 days ago
- Salesforce: Machine Learning Engineer, Mexico, posted 29 days ago
- Calendly: Machine Learning Engineer, Remote, United States ($202K - $256K/yr), posted 27 days ago
- Twilio: Machine Learning Engineer, Remote, United States ($139K - $173K/yr), posted 26 days ago
- Grab: Machine Learning Engineer, Petaling Jaya, Selangor, Malaysia, posted 26 days ago
- Grab: Machine Learning Engineer, Beijing, China, posted 26 days ago
- Grab: Machine Learning Engineer, Beijing, Beijing, China, posted 26 days ago
- Reddit: Machine Learning Engineer, San Francisco, CA ($223K - $260K/yr), posted 14 days ago