Job Description

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
This role focuses on the design, development, and packaging of production-grade AI inference and training solutions as part of the AMD Inference Microservice (AIM) ecosystem. You will be part of a diverse and ambitious team responsible for maintaining, scaling, and extending AMD’s ecosystem of AI microservices. You will work with state-of-the-art AI tooling and models on cutting-edge AI infrastructure. This role requires both deep hands-on knowledge of AI tooling and best practices from software development methodologies such as DevOps, to ensure the reliability, security, and performance of critical components.

KEY RESPONSIBILITIES:

LLM and AI Tooling:
- Design, develop, and scale containerized AI microservices for inference, training, and evaluation, supporting diverse use cases.
- Utilize and adopt state-of-the-art open-source AI tools such as vLLM, SGLang, and verl to support various use cases and infrastructure configurations.
- Stay on top of current advances in LLM frameworks, APIs, and open-source ecosystems, and translate them into scalable solutions.

Container Development and Lifecycle Management:
- Design, build, and maintain containerized AI microservices using Docker and related tooling, ensuring reproducibility and scalability.
- Manage the lifecycle of microservices, including updating, testing, performance benchmarking, and optimization.
- Collaborate with other teams to implement and maintain container orchestration workflows (e.g., Kubernetes) for scalability.

CI/CD Pipeline Integration:
- Apply DevOps best practices, using GitHub Actions to automate build, test, and deploy processes.
- Collaborate with development teams to design, implement, and optimize CI/CD pipelines, ensuring smooth deployments and integrations.

Scripting and Automation:
- Develop and maintain tooling for interacting with different ecosystem functions to improve the developer and user experience.
- Write scripts and tools (Python, Bash) to automate routine tasks, enhance functionality, and support development.

EXPERIENCE & KEY QUALITIES:
- Seasoned in deploying LLMs and other AI model types in production using frameworks like vLLM, SGLang, or similar tooling.
- Solid experience in packaging software and delivering microservice containers.
- Desire and ability to continuously learn in a fast-changing environment.
- Experience with CI/CD (GitHub Actions workflows an extra plus).
- Initiative, pragmatic problem solving, and great collaboration skills.
- Bachelor’s or Master’s degree in computer science, computer engineering, electrical engineering, or an equivalent field.

LOCATIONS:
Finland or Sweden

#LI-MH3 #LI-HYBRID

Benefits offered are described: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
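The "Scripting and Automation" duties above are the kind of work a small standard-library Python utility covers. As an illustrative sketch (the function name and tag format are assumptions, not part of the posting), this picks the newest semantic-version release tag from a container image tag listing, ignoring mutable labels like `latest` or `nightly`:

```python
import re

def newest_release_tag(tags):
    """Return the highest semantic-version tag from a list of image tags.

    Tags that do not match plain MAJOR.MINOR.PATCH (with an optional 'v'
    prefix) are skipped, so labels like 'latest' or 'nightly' are ignored.
    Returns None when no tag matches.
    """
    semver = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")
    best, best_key = None, None
    for tag in tags:
        m = semver.match(tag)
        if not m:
            continue
        # Compare numerically, so v1.2.10 correctly beats v1.2.9.
        key = tuple(int(part) for part in m.groups())
        if best_key is None or key > best_key:
            best, best_key = tag, key
    return best

print(newest_release_tag(["latest", "v1.2.10", "v1.2.9", "nightly"]))  # v1.2.10
```

Numeric tuple comparison avoids the classic pitfall of lexicographic string ordering, where "v1.2.9" would wrongly sort above "v1.2.10".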
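The lifecycle-management duties mention performance benchmarking; a minimal harness for that can also be plain Python. This sketch (names and percentile choices are illustrative assumptions) times repeated calls to a workload and reports p50/p95 latency; in practice the callable would issue a request to the microservice under test rather than run a local stand-in:

```python
import time
import statistics

def measure_latency(fn, iterations=50, warmup=5):
    """Time repeated calls to fn and report p50/p95 latency in milliseconds."""
    for _ in range(warmup):
        fn()  # warm-up calls are excluded from the statistics
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }

# Stand-in workload for demonstration only.
report = measure_latency(lambda: sum(range(10_000)))
print(sorted(report))  # ['p50_ms', 'p95_ms']
```

Warm-up iterations matter for inference services in particular, since the first requests often pay one-time costs (model load, cache population) that would otherwise skew the tail percentiles.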
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.