Amazon
Posted 11 days ago
Software Development Engineer AI/ML, Inference Serving, AWS Neuron
US, CA, Cupertino
Full-time
Job Description
AWS Neuron is the software stack powering AWS Inferentia and Trainium machine learning accelerators, designed to deliver high-performance, low-cost inference at scale. The Neuron Serving team develops infrastructure to serve modern machine learning models—including large language models (LLMs) and multimodal workloads—reliably and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model serving infrastructure, with a particular focus on large-scale generative AI applications.
Key job responsibilities
* Architect and lead the design of distributed ML serving systems optimized for generative AI workloads
* Drive technical excellence in performance optimization and system reliability across the Neuron ecosystem
* Design and implement scalable solutions for both offline and online inference workloads
* Lead integration efforts with frameworks such as vLLM, SGLang, Torch XLA, TensorRT, and Triton
* Develop and optimize system components for tensor/data parallelism and disaggregated serving
* Implement and optimize custom PyTorch operators and NKI kernels (see the sketch after this list)
* Mentor team members and provide technical leadership across multiple work streams
* Drive architectural decisions that impact the entire Neuron serving stack
* Collaborate with customers, product owners, and engineering teams to define technical strategy
* Author technical documentation, design proposals, and architectural guidelines
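To ground the custom-operator bullet above, here is a minimal, illustrative sketch of registering a custom PyTorch operator with `torch.library.custom_op` (available in PyTorch 2.4+). The `neuron_demo` namespace, the RMSNorm math, and the eager reference body are assumptions for illustration only; on Inferentia/Trainium a production operator would typically dispatch to a compiled device kernel (for example one written in NKI) rather than eager tensor ops.

```python
# Illustrative sketch only: registering a custom RMSNorm operator through
# torch.library.custom_op (PyTorch 2.4+). The "neuron_demo" namespace and the
# eager reference body are assumptions; a production operator would dispatch
# to a compiled device kernel instead.
import torch


@torch.library.custom_op("neuron_demo::rms_norm", mutates_args=())
def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float) -> torch.Tensor:
    # Eager reference implementation, useful as a fallback / correctness check.
    variance = x.pow(2).mean(dim=-1, keepdim=True)
    return x * torch.rsqrt(variance + eps) * weight


@rms_norm.register_fake
def _(x, weight, eps):
    # Shape/dtype propagation so the operator can be traced and compiled.
    return torch.empty_like(x)


if __name__ == "__main__":
    x = torch.randn(2, 8, 16)
    w = torch.ones(16)
    print(rms_norm(x, w, 1e-6).shape)  # torch.Size([2, 8, 16])
```

Registering a fake (meta) implementation alongside the real one is what lets tracing-based compilers reason about the operator's shapes and dtypes without executing it.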
A day in the life
You'll lead critical technical initiatives while mentoring team members. You'll collaborate with cross-functional teams of applied scientists, system engineers, and product managers to architect and deliver state-of-the-art inference capabilities. Your day might involve:
* Leading design reviews and architectural discussions
* Rapidly prototyping software to show customer value
* Debugging complex performance issues across the stack
* Mentoring junior engineers on system design and optimization
* Collaborating with research teams on new ML serving capabilities
* Driving technical decisions that shape the future of Neuron's inference stack
About the team
The Neuron Serving team is at the forefront of scalable and resilient AI infrastructure at AWS. We focus on developing model-agnostic inference innovations, including disaggregated serving, distributed KV cache management, CPU offloading, and container-native solutions. Our team is dedicated to upstreaming Neuron SDK contributions to the open-source community, enhancing performance and scalability for AI workloads. We're committed to pushing the boundaries of what's possible in large-scale ML serving. (A minimal serving-layer sketch follows the links below.)
Recent shares:
https://github.com/aws-neuron/upstreaming-to-vllm/releases/tag/2.25.0
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/nxd-inference/developer_guides/disaggregated-inference.html
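As context for the serving layer described above (and the upstreaming-to-vllm release linked in Recent shares), here is a minimal offline-inference sketch using vLLM's public Python API. It assumes a vLLM installation whose installed backend (for example, the Neuron integration) handles device selection; the model ID, tensor-parallel degree, and sampling settings are placeholder assumptions, not values prescribed by the Neuron SDK.

```python
# Minimal offline batch-inference sketch with vLLM's Python API.
# Assumptions: the installed vLLM backend (e.g. the Neuron integration) selects
# the device; the model ID and parallelism degree below are placeholders.
from vllm import LLM, SamplingParams

prompts = [
    "Explain disaggregated serving in one sentence.",
    "Why does a KV cache speed up autoregressive decoding?",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
    tensor_parallel_size=2,                    # shard weights across cores
    max_model_len=4096,
)

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text.strip())
```

For online serving, the same engine is typically exposed through vLLM's OpenAI-compatible HTTP server rather than the offline LLM class shown here.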