LLM Training & Agent Engineer
Posted 5 days ago
Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

1. Responsibilities
- Train, fine-tune, and optimize Large Language Models (LLMs), including but not limited to pretraining, SFT, and RLHF pipelines
- Design and develop LLM-based agent systems (e.g., tool use, planning and reasoning, multi-agent collaboration)
- Optimize LLM inference performance, including latency, throughput, and memory (VRAM) usage
- Participate in GPU computing optimization, including operator/kernel optimization and parallelization strategies
- Collaborate with research and product teams to drive the deployment of LLMs in real-world applications

2. Requirements
- Bachelor's degree or above in Computer Science, Artificial Intelligence, or a related field
- 4+ years of relevant development experience
- Proficiency in at least one of Python or C++, with strong engineering skills
- Familiarity with LLM training workflows and hands-on experience in training or fine-tuning; experience deploying LLM-based products is a plus
- Experience in agent development (e.g., LangChain, in-house agents, tool-use systems)
- Familiarity with LLM inference optimization techniques, including but not limited to acceleration, quantization, and KV caching
- Understanding of GPU computing principles, with some experience in operator/kernel optimization

3. Preferred Qualifications (Plus)
- Experience with large-scale LLM training (e.g., distributed training, Megatron, DeepSpeed)
- Familiarity with CUDA or Triton, with experience in GPU kernel development or optimization
- Experience in high-performance computing (HPC) or inference framework optimization
- Hands-on experience deploying agent systems in production (e.g., complex task planning, multi-tool orchestration)

#LI-JW2

Benefits offered are described at: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess, or select applicants for this position. AMD's "Responsible AI Policy" is available here. This posting is for an existing vacancy.