LLM Inference Engineer
Job Description
This job posting has expired and is no longer accepting applications.
About Us
Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes worldwide by bringing deep healthcare expertise to every human. No other technology has the potential to have this level of global impact on health.
Why Join Our Team
Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.
Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.
Strategic Investors: We have raised a total of $400+ million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.
World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.
For more information, visit www.HippocraticAI.com.
We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA unless explicitly noted otherwise in the job description.
About the Role
We're seeking an experienced LLM Inference Engineer to optimize our large language model (LLM) serving infrastructure. The ideal candidate has:
Extensive hands-on experience with state-of-the-art inference optimization techniques
A track record of deploying efficient, scalable LLM systems in production environments
Key Responsibilities
Design and implement multi-node serving architectures for distributed LLM inference
Optimize multi-LoRA serving systems
Apply advanced quantization techniques (FP4/FP6) to reduce model footprint while preserving quality
Implement speculative decoding and other latency optimization strategies
Develop disaggregated serving solutions with optimized caching strategies for prefill and decoding phases
Continuously benchmark and improve system performance across various deployment scenarios and GPU types
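Of the responsibilities above, speculative decoding is the easiest to illustrate in a few lines. The toy sketch below is a minimal greedy variant in plain Python: the "draft" and "target" models are stand-in functions over fixed token sequences (all names here are illustrative, not Hippocratic AI code). It shows the core propose-then-verify loop in which a cheap draft model proposes k tokens and the expensive target model accepts the longest matching prefix.

```python
# Toy sketch of speculative decoding (greedy variant).
# The "models" here are stand-in functions over token ids; in a real
# system they would be a small draft LLM and the large target LLM.

TARGET_SEQ = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]   # what the target model would emit
DRAFT_SEQ  = [3, 1, 4, 1, 7, 9, 2, 6, 8, 3]   # draft agrees on most positions

def draft_model(pos):
    """Cheap model: the token it proposes at position `pos`."""
    return DRAFT_SEQ[pos]

def target_model(pos):
    """Expensive model: the token it would emit at position `pos`."""
    return TARGET_SEQ[pos]

def speculative_decode(n_tokens, k=4):
    """Generate n_tokens, verifying up to k draft tokens per target pass."""
    out = []
    while len(out) < n_tokens:
        start = len(out)
        # 1) Draft proposes up to k tokens autoregressively (cheap).
        proposal = [draft_model(start + i)
                    for i in range(min(k, n_tokens - start))]
        # 2) Target scores all proposed positions (in a real engine, in
        #    ONE batched forward pass) and accepts the longest matching
        #    prefix; the first mismatch is replaced by the target's token.
        accepted = []
        for i, tok in enumerate(proposal):
            t = target_model(start + i)
            if tok == t:
                accepted.append(tok)
            else:
                accepted.append(t)
                break
        out.extend(accepted)
    return out[:n_tokens]

print(speculative_decode(10))  # identical to greedy target-only decoding
```

The payoff in a real engine is that the one verification pass over k positions amortizes the target model's cost, so throughput improves whenever the draft's acceptance rate is high, while the output distribution matches the target exactly.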
Required Qualifications
Experience optimizing LLM inference systems at scale
Proven expertise with distributed serving architectures for large language models
Hands-on experience implementing quantization techniques for transformer models
Strong understanding of modern inference optimization methods, including:
Speculative decoding techniques with draft models
EAGLE-style speculative decoding approaches
Proficiency in Python and C++
Experience with CUDA programming and GPU optimization
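As a point of reference for the quantization requirement above, here is a minimal per-tensor symmetric quantization round trip in plain Python. Production FP4/FP6 quantization uses hardware-specific floating-point formats and per-group scales (e.g. via TensorRT-LLM); this integer-grid version only illustrates the basic scale, round, and dequantize cycle and the error it introduces.

```python
# Minimal per-tensor symmetric quantization to a signed b-bit grid.
# Real low-bit LLM quantization (FP4/FP6, per-group scales, outlier
# handling) is far more involved; this shows only the core round trip.

def quantize(weights, bits=4):
    """Map floats onto the signed b-bit grid [-(2^(b-1)-1), 2^(b-1)-1]."""
    qmax = 2 ** (bits - 1) - 1                  # e.g. 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax  # per-tensor scale
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.9, -0.42, 0.07, -1.4, 0.33]
q, scale = quantize(weights, bits=4)
recon = dequantize(q, scale)
err = max(abs(w - r) for w, r in zip(weights, recon))
print("codes:", q, "scale:", round(scale, 4), "max error:", round(err, 4))
```

The maximum round-trip error is bounded by half a quantization step (scale / 2), which is why preserving quality at 4 bits in practice depends on finer-grained scales than this per-tensor sketch uses.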
Preferred Qualifications
Contributions to open-source inference frameworks such as vLLM, SGLang, or TensorRT-LLM
Experience with custom CUDA kernels
Track record of deploying inference systems in production environments
Deep understanding of systems-level performance optimization
Show us what you've built: Tell us about an LLM inference or training project that makes you proud! Whether you've optimized inference pipelines to achieve breakthrough performance, designed innovative training techniques, or built systems that scale to billions of parameters - we want to hear your story.
Open source contributor? Even better! If you've contributed to projects like vLLM, SGLang, LMDeploy, or similar LLM optimization frameworks, we'd love to see your PRs. Your contributions to these communities demonstrate exactly the kind of collaborative innovation we value.
Join a team where your expertise won't just be appreciated—it will be celebrated and amplified. Help us shape the future of AI deployment at scale!
***Be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process. If anything appears suspicious, stop engaging immediately and report the incident.