Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
We are seeking an AI Infrastructure / Platform Engineer to join our team building and operating large-scale GPU compute infrastructure that powers AI and ML workloads. The ideal candidate is passionate about software engineering and has the leadership skills to independently deliver on multiple projects, communicating effectively and working well with peers across our larger organization.

THE PERSON:
- Experience in platform, infrastructure, or DevOps engineering.
- Deep hands-on experience with Kubernetes and container orchestration at scale.
- Proven ability to design and deliver platform features that serve internal customers or developer teams.
- Experience building developer-facing platforms or internal developer portals (e.g., custom workflow tooling).

KEY RESPONSIBILITIES:
- Build and extend platform capabilities to enable different classes of workloads (e.g., large-scale AI training, inference, etc.).
- Design and operate scalable orchestration systems using Kubernetes across both on-prem and multi-cloud environments.
- Develop platform features such as pre-flight health checks, job status monitoring, and post-mortem analysis.
- Partner with development teams to extend the GPU developer platform with features, APIs, templates, and self-service workflows that streamline job orchestration and environment management.
- Apply expertise in storage and networking to design and integrate CSI drivers, persistent volumes, and network policies that enable high-performance GPU workloads.
- Provide production support on large-scale GPU clusters.

PREFERRED EXPERIENCE:
- Hands-on experience in storage or network engineering within Kubernetes environments (e.g., CSI drivers, dynamic provisioning, CNI plugins, or network policy).
- Experience with Infrastructure as Code tools such as Terraform.
- Background in HPC, Slurm, or GPU-based compute systems for ML/AI workloads.
- Practical experience with monitoring and observability tools (Prometheus, Grafana, Loki, etc.).
- Understanding of machine learning frameworks (PyTorch, vLLM, SGLang, etc.).
- High-performance networking and IB/RDMA tuning.

ACADEMIC CREDENTIALS:
Bachelor’s or Master’s degree in computer science, computer engineering, electrical engineering, or equivalent.

LOCATION:
San Jose, CA (Hybrid) preferred; open to considering other US locations.

#LI-CJ3 #HYBRID

Benefits offered are described: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process.
AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.
About the job

Posted on: Feb 19, 2026
Apply before: Mar 21, 2026
Job type: Full-time
Category: TensorFlow
Location: San Jose, CA