Data Annotation - Safety tasks / Content Moderation
Job Description
This job posting has expired and is no longer accepting applications.
Who are we?
Cohere is focused on building and deploying large language model (LLM) AI into enterprises in a safe and responsible way that drives human productivity, creates magical new ways to interact with technology, and delivers real business value. We’re a team of highly motivated and experienced engineers, innovators, and disruptors looking to change the face of technology.
Our goals are ambitious, but also concrete and practical. Cohere wants to fundamentally change how businesses operate, making everyone more productive and able to focus on doing better what they do best. Every day, our team breaks new ground, as we build transformational AI technology and products for enterprise and developers to harness the power of LLMs.
Cohere was founded by three global leaders in AI development, including our CEO, Aidan Gomez, who co-created the Transformer, which makes LLMs possible. Collectively, we're driven by the belief that our technology has the potential to revolutionize the way enterprises, their employees, and customers engage with technology through language.
Cohere’s broader research team is world-renowned, having contributed to the development of sentence transformers for semantic search, dynamic adversarial data collection and red teaming, and retrieval augmented generation, often referred to as “RAG,” among other technological breakthroughs.
We have been deliberate in assembling a team of operational leaders with industry-leading experience, with backgrounds working at the most sophisticated, demanding, and respected enterprises in the world. Cohere’s operational leaders have built, scaled, and led multi-billion-dollar product lines and businesses at Google, Apple, Rakuten, YouTube, AWS, and Cisco.
The Cohere team is a collective from all walks of life, from people who left college to start businesses, to some of the most experienced people from globally renowned companies. We believe a diverse team is the key to a safer, more responsible technology, and that different experiences and backgrounds enable us to tackle problems from all angles and avoid blindspots.
There’s no better time to play a role in defining the future of AI, and its impact on the world.
Why this role?
We are on a mission to build machines that understand the world and make them safely accessible to all. Data quality is foundational to this process. Machines (or Large Language Models, to be exact) learn in similar ways to humans - by way of feedback. By labeling, ranking, auditing, and correcting text output, you will improve the Large Language Model’s performance for iterations to come, having a lasting impact on Cohere’s tech. Cohere is looking for dynamic and dedicated Data Annotators with backgrounds and skills in Safety or Content Moderation.
IMPORTANT CONTEXT ON THIS ROLE: In this position, you will be asked to engage with human-generated and model-generated tasks, which will sometimes mean intentional exposure to explicit content. Your annotations on these explicit tasks will be used to prevent the Large Language Model from generating toxic or unsafe outputs, whether unintentional or adversarially elicited. The types of explicit content you may be exposed to include, but are not limited to, content of a sexual, violent, or psychologically disturbing nature.
As a Data Quality Specialist on safety tasks, you will:
- Improve Model Safety: Label, proofread, and improve machine-written and human-written generations, ensuring data integrity and quality. This will include work with content of a sexual, violent, or psychologically disturbing nature.
- Reading and Text-Based Tasks: Efficiently complete reading and text-based assignments, with high attention to detail.
- Preference-Based Tasks: Evaluate and complete tasks, assessing which responses best conform to our style guide.
- Provide Feedback: Collaborate and communicate effectively, providing feedback to cross-functional team members.
- Detail-Oriented Execution: Maintain meticulous attention to detail while performing repetitive and precise tasks.
You may be a good fit if you have:
- 1+ years of experience in Content Moderation and/or Trust and Safety
- Emotional resilience: An understanding that this role requires annotating texts that contain unsafe, explicit, and/or toxic content, including content of a sexual, violent, or psychologically disturbing nature
- Excellent command of written English. Expert reading and writing skills, which you are ready to prove on our written assessment. Bonus points if you are fluent in another language!
- Strong attention to detail and commitment to accuracy— you’re the type to proofread all of your emails!
- High tolerance for repetitive and monotonous work + superb sense of urgency and time management
What is the candidate journey:
- Initial Screening— Once you have submitted your application, our Talent Team will review your resume and writing samples.
- Virtual Meet & Greet— If selected to move forward, you will have a short video call with a member of our Operations team!
- Practical Assessment— This assignment will test your writing skills through various language-based tasks, such as a writing sample, interacting with a chatbot, and more.
- Emotional Resilience Assessment— This assessment will evaluate your ability to handle stress and your skills in coping with difficult situations.
- Offer— Independent Contractor Agreement.
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants of all kinds and are committed to providing an equal opportunity process. Cohere provides accessibility accommodations during the recruitment process. Should you require any accommodation, please let us know and we will work with you to meet your needs.
Our Perks:
🤝 An open and inclusive culture and work environment
🧑‍💻 Work with cutting-edge AI technology
🪴 A vibrant & central location
🥨 A great selection of office snacks
🏆 Performance-based incentives
About the job
Posted on: Jan 25, 2024
Apply before: Feb 24, 2024
Job type: Full-time
Category: LLM
Location: Canada
Skills: RAG, LLM, management, AWS