AI Software Development Eng.
Posted 48 days ago
Job Description
This job posting has expired and is no longer accepting applications.
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
AMD is looking for a software engineer who is passionate about distributed inferencing on AMD GPUs and about improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:
We are seeking a software engineer with strong technical expertise in C++/Python development, solving performance problems and investigating scalability on multi-GPU, multi-node clusters, who is also passionate about quality assurance, benchmarking, and automation in the AI/ML space. The ideal candidate thrives in both collaborative and independent environments, demonstrates excellent problem-solving skills, and takes ownership in defining goals and delivering impactful solutions.

KEY RESPONSIBILITIES:
Distributed AI Enablement and Benchmarking: Enable and benchmark AI models on large-scale distributed systems to evaluate performance, accuracy, and scalability.
Scalable Systems Optimization: Optimize AI workloads across scale-up (multi-GPU), scale-out (multi-node), and scale-across distributed system configurations.
Cross-Team Collaboration: Collaborate closely with internal GPU library teams to analyze and optimize distributed workloads for high throughput and low latency.
Parallelization Strategies: Develop and apply optimal parallelization strategies for AI workloads to achieve best-in-class performance across diverse system configurations.
Model Infrastructure and Management: Contribute to distributed model management systems, model zoos, monitoring frameworks, benchmarking pipelines, and technical documentation.
Performance Monitoring and Visualization: Build and maintain real-time dashboards reporting performance, accuracy, and reliability metrics for internal stakeholders and external users.

PREFERRED EXPERIENCE:
AI Framework Engineering: Hands-on experience with AI inference or serving frameworks such as vLLM, SGLang, and Llama.cpp.
KV Cache and Expert Parallelization: Understanding of KV cache transfer mechanisms and technologies (e.g., Mooncake, NIXL/RIXL) and expert parallelization approaches (e.g., DeepEP, MORI, PPLX-Garden).
Programming and Software Design: Strong C/C++ and Python skills, with experience in software design, debugging, performance analysis, and test development.
Large-Scale Distributed Systems: Experience running AI workloads on large-scale, heterogeneous compute clusters.
Cluster and Orchestration Systems: Familiarity with cluster management and orchestration platforms such as SLURM and Kubernetes (K8s).
Development Tools and Workflows: Experience with GitHub, Jenkins, or similar CI/CD tools and modern development workflows.

ACADEMIC CREDENTIALS:
Master’s or PhD degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.

#LI-JG1

Benefits offered are described in AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.