Amazon
ML Infrastructure Engineer - Distributed Training, AWS Neuron, Annapurna Labs
US, CA, Cupertino
Job Description
This job posting has expired and is no longer accepting applications.
By applying to this position, your application will be considered for all locations we hire for in the United States.
Annapurna Labs designs silicon and software that accelerates innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago—even yesterday. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world.
AWS Neuron is the complete software stack for AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2), our cloud-scale machine learning accelerators. This role is for a Senior Machine Learning Engineer on the Distributed Training team for AWS Neuron, responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale Large Language Models (LLMs) such as GPT and Llama, as well as Stable Diffusion, Vision Transformers (ViT), and many more.
The ML Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions on Trainium instances. Experience training these large models using Python is a must. FSDP (Fully Sharded Data Parallel), DeepSpeed, NeMo, and other distributed training libraries are central to this work, and extending them to the Neuron-based system is key.
Key job responsibilities
You'll help develop and improve distributed training capabilities in popular machine learning frameworks (PyTorch and JAX) using AWS's specialized AI hardware. Working with our compiler and runtime teams, you'll learn how to optimize ML models to run efficiently on AWS's custom AI chips (Trainium and Inferentia). This is a great opportunity to bridge the gap between ML frameworks and hardware acceleration, while building strong foundations in distributed systems.
We're looking for someone with solid programming skills, enthusiasm for learning complex systems, and basic understanding of machine learning concepts. This role offers excellent growth opportunities in the rapidly evolving field of ML infrastructure.
About the team
Annapurna Labs was a startup acquired by AWS in 2015, and is now fully integrated. If AWS is an infrastructure company, think of Annapurna Labs as the infrastructure provider of AWS. Our organization covers multiple disciplines, including silicon engineering, hardware design and verification, software, and operations. Over the last few years we have delivered products such as AWS Nitro, ENA, EFA, Graviton, F1 EC2 instances, AWS Neuron, the Inferentia and Trainium ML accelerators, and scalable NVMe storage.