Backend Software Engineer, AI Safety, TikTok - San Jose
Posted 49 days ago
Job Description
This job posting has expired and is no longer accepting applications.
About Us:
TikTok identifies Trust and Safety as a top priority, and our team obsesses over it daily. As generative AI and large models power more TikTok experiences, the AI Safety team ensures these systems are deployed responsibly, reliably, and in alignment with our community values.
We are a newly founded team focused on building scalable safety systems to detect, prevent, and mitigate risks associated with advanced AI technologies. Our mission is to create robust infrastructure, tools, and model-based interventions to support safe and trustworthy AI development at TikTok.
Responsibilities:
- Design, build, and maintain infrastructure that enables safe deployment and monitoring of generative AI systems
- Develop systems to detect, respond to, and mitigate misuse and safety incidents in AI-generated content
- Collaborate closely with ML engineers, T&S policy, operations, and research teams to align models with TikTok’s safety principles
- Lead efforts to assess and remediate real-time safety risks from emerging AI capabilities
- Prototype and productionize classifiers, filters, audits, and feedback loops for high-risk or novel model outputs
- Drive safety evaluations, instrumentation, and experimentation to monitor alignment, fairness, and reliability at scale
Minimum Qualifications:
- Bachelor’s degree or above in Computer Science or a related technical field
- 2+ years of backend or platform engineering experience
- Strong proficiency in at least one programming language: Python/Go/Java/C++
- Experience in building production systems with an emphasis on security, integrity, or safety
- Familiarity with machine learning pipelines or model deployment infrastructure
- Excellent communication skills and the ability to collaborate with cross-functional teams
- Passion for trustworthy AI and a proactive attitude toward solving open-ended safety challenges
Preferred Qualifications:
- Experience working on content moderation, anti-abuse, fraud detection, or related domains
- Familiarity with AI safety research, LLM alignment techniques, or risk mitigation frameworks
- Experience working with large-scale data systems and real-time monitoring
About the job
Posted on: Jan 15, 2026
Apply before: Feb 14, 2026
Job type: Full-time
Category: AI Safety
Location: San Jose, CA