Job Description
About Us:
TikTok identifies Trust and Safety as a top priority, and our team obsesses over it daily. As generative AI and large models power more TikTok experiences, the AI Safety team ensures these systems are deployed responsibly, reliably, and in alignment with our community values.
We are a newly founded team focused on building scalable safety systems to detect, prevent, and mitigate risks associated with advanced AI technologies. Our mission is to create robust infrastructure, tools, and model-based interventions to support safe and trustworthy AI development at TikTok.
Responsibilities:
- Design, build, and maintain infrastructure that enables safe deployment and monitoring of generative AI systems
- Develop systems to detect, respond to, and mitigate misuse and safety incidents in AI-generated content
- Collaborate closely with ML engineers, T&S policy, operations, and research teams to align models with TikTok’s safety principles
- Lead efforts to assess and remediate real-time safety risks from emerging AI capabilities
- Prototype and productionize classifiers, filters, audits, and feedback loops for high-risk or novel model outputs
- Drive safety evaluations, instrumentation, and experimentation to monitor alignment, fairness, and reliability at scale
Minimum Qualifications:
- Bachelor’s degree or above in Computer Science or a related technical field
- 2+ years of backend or platform engineering experience
- Strong proficiency in at least one programming language (Python, Go, Java, or C++)
- Experience in building production systems with an emphasis on security, integrity, or safety
- Familiarity with machine learning pipelines or model deployment infrastructure
- Excellent communication skills and the ability to collaborate with cross-functional teams
- Passion for trustworthy AI and a proactive attitude toward solving open-ended safety challenges
Preferred Qualifications:
- Experience working on content moderation, anti-abuse, fraud detection, or related domains
- Familiarity with AI safety research, LLM alignment techniques, or risk mitigation frameworks
- Experience working with large-scale data systems and real-time monitoring
About the job
Posted on: Jan 15, 2026
Apply before: Feb 14, 2026
Job type: Full-time
Category: LLM
Location: San Jose, CA
Skills: Python, LLM, generative AI