Senior ML Engineer
Posted 10 hours ago
Job Description
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

You Are:
A seasoned Senior ML Engineer who drives distillation of ML models for high-performance, production-ready rendering systems. You are passionate about software engineering and have the leadership skills to drive complex issues to resolution. You communicate effectively and work well with teams across AMD.

What you'll be part of:
- Distillation and compression: KD variants, hint/FitNets, attention transfer, feature mimicking, low-rank/SVD, sparsity.
- Efficient architectures: MobileNet/EfficientNet, vision transformer optimization, lightweight diffusion/UNet variants, NeRF/Instant-NGP distillation.
- Inference optimization: TensorRT, CUDA, cuDNN, ONNX, quantization-aware training, weight clustering, operator fusion.
- Metrics: SSIM, LPIPS, PSNR, FID/KID, latency/throughput profiling, memory/activation footprint analysis.
- Data and training: large-scale dataset curation, synthetic data generation, curriculum learning, augmentation strategies.
- MLOps: experiment tracking, CI/CD for models, model registries, reproducibility, telemetry.
- Integrate ML inference into production rendering pipelines: define model I/O and preprocessing/postprocessing, and make trade-offs among latency, throughput, and quality.
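Not part of the posting itself, but for readers unfamiliar with the "KD variants" named above: a minimal sketch of a Hinton-style teacher–student distillation loss. It is written in plain NumPy to stay dependency-light; the function name, temperature, and blend weight are illustrative assumptions, not anything specified by the role.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Blend a soft-label term (student matches the teacher's softened
    distribution) with the usual hard-label cross-entropy term."""
    # Soft term: cross-entropy to the teacher's T-softened distribution,
    # scaled by T^2 so gradient magnitudes stay comparable across T.
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * (T * T)
    # Hard term: standard cross-entropy against ground-truth labels.
    log_p = np.log(softmax(student_logits) + 1e-12)
    hard = -log_p[np.arange(len(targets)), targets].mean()
    return alpha * soft + (1 - alpha) * hard
```

In practice the same loss would be computed on framework tensors (e.g. PyTorch's `F.kl_div` over log-softmax outputs) so it can be backpropagated through the student.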
- Collaborate across teams (ML researchers, engine/platform, tooling, QA) to translate ML and product requirements into graphics-friendly implementations and integration plans.
- Mentor other engineers, conduct code reviews, and help define best practices for rendering, performance, and SDK delivery.

Experience:
- 6–10+ years in ML engineering or applied research, with 3+ years focused on model distillation/compression at production scale.
- Strong proficiency in PyTorch (preferred) or JAX/TF; ability to implement custom training loops, distributed training, and mixed precision.
- Demonstrated experience shipping distilled or compressed models to production, with measurable latency/memory gains at maintained quality.
- Deep understanding of knowledge distillation techniques: teacher–student frameworks, soft labels, intermediate feature matching, contrastive distillation, task-specific loss shaping.
- Hands-on experience with quantization (static/dynamic, PTQ/QAT), pruning, and graph-level optimizations (operator fusion).
- GPU performance engineering: CUDA fundamentals, TensorRT/ONNX Runtime, kernel profiling (Nsight), memory/layout optimization.
- Solid grasp of computer graphics fundamentals: rendering pipeline, shaders, sampling, anti-aliasing, tone mapping, and perceptual metrics.
- Strong software engineering: Python/C++ proficiency, testing, code quality, version control, reproducible pipelines, containerization.
- Cross-functional leadership and communication; ability to drive roadmaps and align stakeholders across ML, graphics, and product.

Academics: Bachelor's or Master's degree in Computer Science, Mathematics, or equivalent.

#LI-CC5 #LI-REMOTE

Benefits offered are described at: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services.
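Again as illustration only, not part of the requirements: the "quantization (static/dynamic, PTQ/QAT)" item above centers on mapping float weights to low-precision integers. A minimal sketch of symmetric per-tensor int8 post-training quantization, with hypothetical helper names, looks like this:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 PTQ: pick scale = max|w| / 127 so the
    largest-magnitude weight maps to the edge of the int8 range."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale
```

Production toolchains (e.g. TensorRT or ONNX Runtime quantization) do the same thing per-channel with calibration data and fused kernels, but the scale/round/clip structure is the core of it; rounding error per weight is bounded by half the scale.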
AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.
About the job
Company: AMD
Posted on: Mar 25, 2026
Apply before: Apr 24, 2026
Job type: Full-time
Category: ML Engineer
Location: Warsaw, Poland