Hippocratic AI
Company
AI Engineer - Evaluations
Job Description
About Us
Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes in the world by bringing deep healthcare expertise to every human. No other technology has the potential to have this level of global impact on health.
Why Join Our Team
- Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale. 
- Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA. 
- Strategic Investors: We have raised a total of $278 million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems. 
- World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes. 
For more information, visit www.HippocraticAI.com.
We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA, unless explicitly noted otherwise in the job description.
About the Role
As an AI Engineer – Evaluations at Hippocratic AI, you’ll define and build the systems that measure, validate, and improve the intelligence, safety, and empathy of our voice-based generative healthcare agents.
Evaluation sits at the heart of our model improvement loop — it informs architecture choices, training priorities, and launch decisions for every patient-facing agent. You’ll design LLM-based auto-evaluators, agent harnesses, and feedback pipelines that ensure each model interaction is clinically safe, contextually aware, and grounded in healthcare best practices.
You’ll collaborate closely with research, product, and clinical teams, working across the stack — from backend data pipelines and evaluation frameworks to tooling that surfaces insights for model iteration. Your work will directly shape how our agents behave, accelerating both their reliability and their real-world impact.
What You'll Do:
- Design and build evaluation frameworks and harnesses that measure the performance, safety, and trustworthiness of Hippocratic AI’s generative voice agents. 
- Prototype and deploy LLM-based evaluators to assess reasoning quality, empathy, factual correctness, and adherence to clinical safety standards. 
- Build feedback pipelines that connect evaluation signals directly to model improvement and retraining loops. 
- Partner with AI researchers and product teams to turn qualitative gaps into clear, defensible, and reproducible metrics. 
- Develop reusable systems and tooling that enable contributions from across the company, steadily raising the quality bar for model behavior and user experience. 
What You Bring
Must Have:
- 3+ years of software or ML engineering experience with a track record of shipping production systems end-to-end. 
- Proficiency in Python and experience building data pipelines, evaluation frameworks, or ML infrastructure. 
- Familiarity with LLM evaluation techniques — including prompt testing, multi-agent workflows, and tool-using systems. 
- Understanding of deep learning fundamentals and how offline datasets, evaluation data, and experiments drive model reliability. 
- Excellent communication skills with the ability to partner effectively across engineering, research, and clinical domains. 
- Passion for safety, quality, and real-world impact in AI-driven healthcare products. 
Nice-to-Have:
- Experience developing agent harnesses or simulation environments for model testing. 
- Background in AI safety, healthcare QA, or human feedback evaluation (RLHF). 
- Familiarity with reinforcement learning, retrieval-augmented evaluation, or long-context model testing. 
If you’re excited by the challenge of building trusted, production-grade evaluation systems that directly shape how AI behaves in the real world, we’d love to hear from you.
Join Hippocratic AI and help define the standard for clinically safe, high-quality AI evaluation in healthcare.
***Be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process. If anything
Hippocratic AI
2 jobs posted
About the job
Similar Jobs
Discover more opportunities that match your interests
- 21 days agoAI EngineerMotorola Solutions Melbourne, AustraliaView details
- 14 days agoAI EngineerMotorola Solutions RemoteView details
 8 days ago 8 days ago- AI 工程師 AI Engineer- BJAK Taipei, TaiwanView details
- 6 days agoAI EngineerHubSpot RemoteView details
 5 hours ago 5 hours ago- AI 工程師 AI Engineer- BJAK Taipei, TaiwanView details
- 11 hours agoAI EngineerHippocratic AI Palo AltoView details
 28 days ago 28 days ago- AI Application Engineer- Arize AI RemoteView details
- 20 days agoBusiness Engineer, Business AIMeta Menlo Park, CA, New York, NYView details
 20 days ago 20 days ago- Founding AI Engineer- BJAK ThailandView details
 20 days ago 20 days ago- Lead AI Engineer- BJAK ThailandView details
Looking for something different?
Browse all AI jobs