AMD
Senior Software Development Engineer – LLM Inference Framework
Santa Clara, California
Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover that the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

As a senior member of the LLM inference framework team, you will build and optimize production-grade single-node and distributed inference runtimes for large language models on AMD GPUs. You will work at the framework and runtime layer, driving performance, scalability, and reliability, and enabling tensor parallelism, pipeline parallelism, expert parallelism (MoE), and single-node or multi-node inference at scale (tensor-parallel execution is sketched below). Your work will directly power customer-facing deployments and benchmarking platforms (e.g., InferenceMax, MLPerf, strategic partners, and cloud providers) and will be upstreamed into open-source inference frameworks such as vLLM and SGLang to make AMD a first-class platform for LLM serving. This role sits at the intersection of inference engines, distributed systems, and GPU runtime and kernel backends.

THE PERSON:

You are a systems-minded ML engineer who thinks in terms of throughput, latency, memory movement, and scheduling, not just model code. You are comfortable reading and modifying large-scale inference frameworks, debugging performance across GPUs and nodes, and collaborating with kernel, compiler, and networking teams to close end-to-end performance gaps. You enjoy working in open source and driving architecture-level improvements in inference platforms.
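For a sense of what framework-layer tensor parallelism involves, below is a minimal sketch (illustrative only, not AMD's or any framework's actual code) of the column/row-parallel decomposition commonly applied to transformer MLP blocks. The dimensions are toy values, and the per-rank shards are simulated on a single process; in a real runtime each shard would live on its own GPU and the final sum would be an RCCL/NCCL all-reduce.

```python
# Illustrative sketch: column/row-parallel MLP math behind tensor parallelism.
# Single-process simulation; the summation stands in for an all-reduce.
import torch

torch.manual_seed(0)
tp = 4                        # simulated tensor-parallel world size
d_model, d_ff = 64, 256       # toy sizes; d_ff must divide evenly by tp

x = torch.randn(8, d_model)   # a batch of activations
w1 = torch.randn(d_model, d_ff)   # up projection
w2 = torch.randn(d_ff, d_model)   # down projection

# Reference: unsharded forward pass.
ref = torch.relu(x @ w1) @ w2

# Column-parallel w1 (split the output dim), row-parallel w2 (split the input dim),
# so each rank needs only one partial matmul pair and no intermediate gather.
w1_shards = w1.chunk(tp, dim=1)
w2_shards = w2.chunk(tp, dim=0)

# Each "rank" computes a partial result; summing the partials plays the role
# of the single all-reduce that RCCL would perform across GPUs per MLP block.
partials = [torch.relu(x @ a) @ b for a, b in zip(w1_shards, w2_shards)]
out = torch.stack(partials).sum(dim=0)

assert torch.allclose(out, ref, atol=1e-3)
print("sharded output matches unsharded reference")
```

The design point this illustrates: because ReLU is elementwise, the column shards stay independent through the activation, so one collective per block suffices; minimizing such collectives is much of what framework-level TP tuning is about.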
KEY RESPONSIBILITIES:

Inference Framework & Runtime
- Architect and optimize distributed LLM inference runtimes based on in-house LLM engines or open-source stacks such as vLLM, SGLang, and llm-d
- Design and improve TP / PP / EP (MoE) hybrid execution, including KV-cache management, attention dispatch, and token scheduling
- Implement and optimize multi-node inference pipelines using RCCL, RDMA, and collective-based execution

Performance & Scalability
- Drive throughput, latency, and memory efficiency across single-GPU and multi-GPU clusters
- Optimize continuous batching, speculative decoding, KV-cache paging, prefix caching, and multi-turn serving (KV-cache paging is sketched after this section)

GPU & Backend Integration
- Work with AMD GPU libraries (AITER, hipBLASLt, RCCL, the ROCm runtime) to ensure inference frameworks efficiently use FP8 / FP4 GEMM and FlashAttention / MLA
- Collaborate with compiler teams (Triton, LLVM, ROCm) to unblock framework-level performance

Open Source & Customer Enablement
- Upstream features and performance fixes into vLLM, SGLang, and llm-d
- Enable customer PoCs and production deployments on AMD platforms
- Build and maintain benchmark-grade inference pipelines
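Since the responsibilities above center on KV-cache paging and continuous batching, here is a small, hypothetical sketch of the block-table bookkeeping behind paged KV caches (the idea popularized by vLLM's PagedAttention). All class and method names are invented for illustration and are not any framework's actual API; a production scheduler would additionally handle preemption, swapping, and prefix sharing.

```python
# Hypothetical sketch: paged KV-cache bookkeeping. Sequences map to fixed-size
# physical blocks through a block table, so cache memory is allocated on demand
# rather than reserved up front at the maximum sequence length.

BLOCK_SIZE = 16  # tokens per KV block (illustrative)

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))
        self.block_tables: dict[int, list[int]] = {}  # seq_id -> physical block ids
        self.seq_lens: dict[int, int] = {}

    def append_token(self, seq_id: int) -> None:
        """Reserve cache space for one new token of a sequence."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.seq_lens.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:  # current block is full (or first token)
            if not self.free_blocks:
                raise MemoryError("cache exhausted: preempt or swap a sequence")
            table.append(self.free_blocks.pop())
        self.seq_lens[seq_id] = length + 1

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_lens.pop(seq_id, None)

cache = PagedKVCache(num_blocks=4)
for _ in range(20):             # 20 tokens -> ceil(20 / 16) = 2 blocks
    cache.append_token(seq_id=0)
print(cache.block_tables[0])    # e.g. [3, 2]: two physical blocks mapped
cache.free(seq_id=0)
```

Continuous batching falls out of this bookkeeping: at each decode step the scheduler can admit new requests whenever enough free blocks remain, rather than waiting for an entire batch to finish.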
PREFERRED EXPERIENCE:

Inference Stack Knowledge
- Hands-on understanding of vLLM, SGLang, or similar inference stacks
- Experience with distributed inference scaling and a proven track record of contributing to upstream open-source projects

Deep Learning Integration
- Strong experience integrating optimized GPU performance into machine-learning frameworks (e.g., PyTorch, TensorFlow) for high-throughput, scalable inference

Kernel & Inference Frameworks
- Strong background in NVIDIA, AMD, or similar GPU architectures and kernel development

Software Engineering
- Expertise in Python and preferably C/C++, including debugging, performance tuning, and test design for large-scale systems

High-Performance Computing
- Experience running large-scale workloads on heterogeneous GPU clusters, optimizing for efficiency and scalability

Compiler & Runtime Optimization
- Understanding of compiler and runtime systems, including LLVM, ROCm, and GPU code generation

ACADEMIC CREDENTIALS:

Master’s or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related field.

#LI-JG1

Benefits offered are described: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.