POSTED Mar 17

GPU Programming Expert (San Francisco)

at Mistral

Mistral AI is hiring an expert in serving and training large language models at high speed on GPUs. The role is based in San Francisco.

The role will involve:
- Writing low-level code to take full advantage of high-end GPUs (H100) and max out their capacity
- Rethinking various parts of the generative model architecture to make them more suitable for efficient inference
- Integrating low-level, efficient code into a high-level MLOps framework

The successful candidate will have:
- High technical competence in writing custom CUDA kernels and pushing GPUs to their limits
- High expertise in the distributed computation infrastructure of current-generation GPU clusters
- An overall understanding of the field of generative AI, and knowledge of or interest in fine-tuning and applying language models
About Mistral AI

Mistral AI is a European company training large generative models and providing them to industry. It releases its technology in a fully transparent way: a significant part of its IP is shared under permissive open-source licenses. Mistral AI intends to be a technical leader in the open-source generative AI community.

We're a small team, mostly composed of seasoned researchers and engineers in the field of AI. We like to work hard and to be at the edge of science. We are creative, low-ego, team-spirited, and have all been passionate about AI for years. We hire people who thrive in competitive environments because they find them more fun to work in. We hire passionate women and men from all over the world.

Please mention that you found this job on Moaijobs, this helps us get more companies to post here, thanks!
