Backend Software Engineer, AI Safety, TikTok - San Jose
Job Description
This job posting has expired and is no longer accepting applications.
About Us:
TikTok identifies Trust and Safety as a top priority, and our team obsesses over it daily. As generative AI and large models power more TikTok experiences, the AI Safety team ensures these systems are deployed responsibly, reliably, and in alignment with our community values.
We are a newly founded team focused on building scalable safety systems to detect, prevent, and mitigate risks associated with advanced AI technologies. Our mission is to create robust infrastructure, tools, and model-based interventions to support safe and trustworthy AI development at TikTok.
Responsibilities:
- Design, build, and maintain infrastructure that enables safe deployment and monitoring of generative AI systems
- Develop systems to detect, respond to, and mitigate misuse and safety incidents in AI-generated content
- Collaborate closely with ML engineers, T&S policy, operations, and research teams to align models with TikTok’s safety principles
- Lead efforts to assess and remediate real-time safety risks from emerging AI capabilities
- Prototype and productionize classifiers, filters, audits, and feedback loops for high-risk or novel model outputs
- Drive safety evaluations, instrumentation, and experimentation to monitor alignment, fairness, and reliability at scale
Minimum Qualifications:
- Bachelor’s degree or above in Computer Science or a related technical field
- 2+ years of backend or platform engineering experience
- Strong proficiency in at least one programming language: Python, Go, Java, or C++
- Experience in building production systems with an emphasis on security, integrity, or safety
- Familiarity with machine learning pipelines or model deployment infrastructure
- Excellent communication skills and ability to collaborate with cross-functional teams
- Passion for trustworthy AI and a proactive attitude toward solving open-ended safety challenges
Preferred Qualifications:
- Experience working on content moderation, anti-abuse, fraud detection, or related domains
- Familiarity with AI safety research, LLM alignment techniques, or risk mitigation frameworks
- Experience working with large-scale data systems and real-time monitoring
About the job
Posted on
Nov 8, 2025
Apply before
Dec 8, 2025
Job type: Full-time
Category: AI Safety
Location
San Jose, CA