
AMD


Senior Software Development Engineer – LLM Inference Framework

Santa Clara, California

Job Description

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover that the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

As a senior member of the LLM inference framework team, you will build and optimize production-grade single-node and distributed inference runtimes for large language models on AMD GPUs. You will work at the framework and runtime layer, driving performance, scalability, and reliability, and enabling tensor parallelism, pipeline parallelism, expert parallelism (MoE), and single-node or multi-node inference at scale. Your work will directly power customer-facing deployments and benchmarking platforms (e.g., InferenceMax and MLPerf) as well as strategic partners and cloud providers, and will be upstreamed into open-source inference frameworks such as vLLM and SGLang to make AMD a first-class platform for LLM serving. This role sits at the intersection of inference engines, distributed systems, and GPU runtime and kernel backends.

THE PERSON:

You are a systems-minded ML engineer who thinks in terms of throughput, latency, memory movement, and scheduling, not just model code. You are comfortable reading and modifying large-scale inference frameworks, debugging performance across GPUs and nodes, and collaborating with kernel, compiler, and networking teams to close end-to-end performance gaps. You enjoy working in open source and driving architecture-level improvements in inference platforms.
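For a sense of the tensor-parallel serving work described above, here is a minimal sketch using vLLM's offline Python API; the model name and parallel degree are illustrative placeholders, and on AMD GPUs this assumes a ROCm build of vLLM:

```python
# Minimal sketch: tensor-parallel offline inference with vLLM.
# Model name and tensor_parallel_size are illustrative placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-70B-Instruct",  # any supported model
    tensor_parallel_size=8,                     # shard weights across 8 GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain paged KV caching in one paragraph."], params)

for out in outputs:
    print(out.outputs[0].text)
```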
KEY RESPONSIBILITIES:

Inference Framework & Runtime
- Architect and optimize distributed LLM inference runtimes based on in-house LLM engines or open-source stacks such as vLLM, SGLang, and llm-d
- Design and improve TP / PP / EP (MoE) hybrid execution, including KV-cache management, attention dispatch, and token scheduling
- Implement and optimize multi-node inference pipelines using RCCL, RDMA, and collective-based execution (sketched below)

Performance & Scalability
- Drive throughput, latency, and memory efficiency across single-GPU and multi-GPU clusters
- Optimize continuous batching, speculative decoding, KV-cache paging, prefix caching, and multi-turn serving

GPU & Backend Integration
- Work with AMD GPU libraries (AITER, hipBLASLt, RCCL, ROCm runtime) to ensure inference frameworks efficiently use FP8 / FP4 GEMM and FlashAttention / MLA
- Collaborate with compiler teams (Triton, LLVM, ROCm) to unblock framework-level performance

Open Source & Customer Enablement
- Upstream features and performance fixes into vLLM, SGLang, and llm-d
- Enable customer PoCs and production deployments on AMD platforms
- Build and maintain benchmark-grade inference pipelines

PREFERRED EXPERIENCE:

Inference Stack Knowledge
- Hands-on understanding of vLLM, SGLang, or similar inference stacks
- Experience with distributed inference scaling and a proven track record of contributing to upstream open-source projects

Deep Learning Integration
- Strong experience integrating optimized GPU performance into machine-learning frameworks (e.g., PyTorch, TensorFlow) for high-throughput, scalable inference

Kernel & Inference Frameworks
- Strong background in NVIDIA, AMD, or similar GPU architectures and kernel development

Software Engineering
- Expertise in Python, preferably with C/C++ experience, including debugging, performance tuning, and test design for large-scale systems

High-Performance Computing
- Experience running large-scale workloads on heterogeneous GPU clusters, optimizing for efficiency and scalability

Compiler & Runtime Optimization
- Understanding of compiler and runtime systems, including LLVM, ROCm, and GPU code generation

ACADEMIC CREDENTIALS:
- Master’s or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field.

#LI-JG1
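As a minimal sketch of the collective-based execution referenced above, assuming a PyTorch-based runtime launched with torchrun; the tensor shapes and world size are illustrative, and on ROCm builds of PyTorch the "nccl" process-group backend is backed by RCCL:

```python
# Minimal sketch: collective-based reduction for tensor-parallel inference.
# Launch (illustrative): torchrun --nproc_per_node=8 allreduce_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")  # maps to RCCL on ROCm builds
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    torch.cuda.set_device(local_rank)
    rank = dist.get_rank()

    # Each rank holds a partial activation from a row-parallel GEMM; the
    # all-reduce sums the partials so every rank sees the full result.
    partial = torch.full((4, 4096), float(rank), device="cuda")
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)

    if rank == 0:
        # With world size N, each element reduces to 0 + 1 + ... + (N - 1).
        print("reduced value per element:", partial[0, 0].item())

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```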

Benefits offered are described: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.

AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.


Please mention that you found this job on MoAIJobs; this helps us grow. Thank you!


AMD

123 jobs posted


About the job

Posted on

Jan 16, 2026

Apply before

Feb 15, 2026

Job type: Full-time

Category: ML Engineer
