Distributed Inferencing Software Engineer - AI Models
Posted 13 hours ago
Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
AMD is looking for a software engineer who is passionate about distributed inferencing on AMD GPUs and about improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:
Strong technical and analytical skills in C++/Python AI development, with experience solving performance problems and investigating scalability on multi-GPU, multi-node clusters. Able to work as part of a team while also working independently: defining goals and scope and leading your own development effort.
KEY RESPONSIBILITIES:
- Enable and benchmark AI models on distributed systems
- Work in a distributed computing setting to optimize for scale-up (multi-GPU), scale-out (multi-node), and scale-across systems
- Collaborate with internal GPU library teams to analyze and optimize distributed workloads for high throughput and low latency
- Apply expertise in parallelization strategies for AI workloads, selecting the best-performing strategy for each configuration
- Contribute to distributed model management, model zoos, monitoring, benchmarking, and documentation

PREFERRED EXPERIENCE:
- Knowledge of GPU computing (HIP, CUDA, OpenCL)
- AI framework engineering experience (vLLM, SGLang, Llama.cpp)
- Understanding of KV cache transfer mechanisms and options (Mooncake, NIXL/RIXL) and Expert Parallelism (DeepEP/MORI/PPLX-Garden)
- Excellent C/C++/Python programming and software design skills, including debugging, performance analysis, and test design
- Experience running workloads, especially AI models, on large-scale heterogeneous clusters
- Familiarity with clusters and orchestration software (SLURM, Kubernetes)

ACADEMIC CREDENTIALS:
Master's or PhD, or equivalent experience, in Computer Science, Computer Engineering, or a related field

#LI-JG1

Benefits offered are described: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.