Amazon
Company
Sr. Software Engineer - AI/ML, AWS Neuron Inference - Multimodal
US, WA, Seattle
Job Description
This job posting has expired and is no longer accepting applications.
AWS Neuron is the complete software stack for AWS Inferentia (Inf1/Inf2) and Trainium (Trn1), our cloud-scale machine learning accelerators. This role is for a machine learning engineer on the Inference team for AWS Neuron, responsible for the development, enablement, and performance tuning of a wide variety of ML model families, including massive-scale Large Language Models (LLMs) such as GPT and Llama, as well as Stable Diffusion, Vision Transformers (ViT), and many more.
The ML Inference team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and optimize distributed inference solutions on Trainium/Inferentia instances. Experience training and optimizing inference for these large models using Python/C++ is a must. Model parallelization, quantization, and memory optimization are central to this work; vLLM, DeepSpeed, and other distributed inference libraries are key building blocks, and extending them for Neuron-based systems is at the core of the role.
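To make the model-parallelization idea concrete, here is a minimal NumPy sketch of column-wise tensor parallelism, the basic pattern that libraries like vLLM and DeepSpeed apply across real accelerators. The two "devices" are simulated; the shard count and shapes are illustrative assumptions, not anything specific to Neuron.

```python
import numpy as np

# Simulated tensor (model) parallelism: a linear layer's weight matrix is
# split column-wise across two "devices"; each device computes a partial
# output, and an all-gather-style concatenation reassembles the result.

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # batch of activations
W = rng.standard_normal((8, 16))   # full weight matrix

# Shard the weights across 2 simulated devices (column parallelism).
W_shards = np.split(W, 2, axis=1)

# Each device computes its slice of the output independently.
partial_outputs = [x @ shard for shard in W_shards]

# Concatenating the shards reassembles the full output.
y_parallel = np.concatenate(partial_outputs, axis=1)

# The sharded result matches the single-device computation.
y_reference = x @ W
assert np.allclose(y_parallel, y_reference)
```

On real hardware the shards live on separate accelerator cores and the concatenation becomes a collective communication op, but the arithmetic decomposition is the same.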
Key job responsibilities
You will help lead efforts to build distributed inference support into PyTorch, JAX, and TensorFlow via XLA, the Neuron compiler, and the runtime stack. You will help optimize these models to achieve the highest performance and maximize their efficiency on the custom AWS Trainium and Inferentia silicon and the Trn1 and Inf1/Inf2 servers. Strong software development skills (Python and C++) and machine learning knowledge (multimodal, computer vision, speech) are both critical to this role.
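One of the optimization techniques the posting mentions is quantization. The sketch below shows symmetric per-tensor int8 quantization in plain NumPy; it is a generic illustration of the technique, not the Neuron compiler's actual scheme, and all names and shapes are assumptions.

```python
import numpy as np

# Symmetric per-tensor int8 quantization: map a float32 weight tensor to
# int8 with a single scale factor, then dequantize to measure the error.

rng = np.random.default_rng(1)
w = rng.standard_normal(1024).astype(np.float32)

scale = np.abs(w).max() / 127.0          # map max magnitude to the int8 range
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dq = w_q.astype(np.float32) * scale    # dequantize for comparison

# Round-to-nearest bounds the per-element error by half the step size.
max_err = np.abs(w - w_dq).max()
assert max_err <= scale / 2 + 1e-6
```

The payoff is a 4x reduction in weight memory (int8 vs. float32) at the cost of a bounded approximation error, which is why quantization and memory optimization go hand in hand for large-model inference.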
About the team
Annapurna Labs was a startup acquired by AWS in 2015 and is now fully integrated. If AWS is an infrastructure company, think of Annapurna Labs as the infrastructure provider of AWS. Our org covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. Over the last few years we have delivered products such as AWS Nitro, ENA, EFA, the Graviton and F1 EC2 instances, AWS Neuron with the Inferentia and Trainium ML accelerators, and scalable NVMe storage.
Inclusive Team Culture
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon’s culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.
Work/Life Balance
Our team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded professional and enable them to take on more complex tasks in the future.