Multimodal Data Labeler
// ROLE SUMMARY
You will classify and label text, image, and mixed-media datasets that go straight into training pipelines at major AI research labs. The work is detail-heavy: every annotation you submit is checked against a rubric before it counts.
// DESCRIPTION
Most tasks involve reading a piece of content, deciding which labels from a predefined taxonomy apply, and marking spans or bounding regions where relevant. Expect a steady mix of short-burst tasks (a few seconds each) and longer judgment calls that require reading full documents.
Ideal candidates have a background in linguistics, library science, content moderation, or a related field, but we have also had strong results from career changers who simply care about getting things right. The work is repetitive by nature, so you need to be someone who finds satisfaction in precision rather than novelty. You should be comfortable working independently and hitting deadlines without constant supervision.
Schedule is flexible within project deadlines. Most annotators work 15-25 hours per week, though some projects offer surge periods at higher rates. You will communicate with project leads through Slack and attend a brief weekly sync call.
// READY TO GET STARTED?
Apply in minutes
Create your profile, select your areas of expertise, and start working on frontier AI projects.
Apply Now