// ROLE SUMMARY

Evaluate AI-generated text against detailed rubrics, compare paired model responses across wide-ranging topics, and back every judgment with a clear written justification.

Human Feedback Annotator

RLHF · $35-40/hr · Remote · Posted February 14, 2026

// DESCRIPTION

We need evaluators who can read AI-generated text critically and make consistent quality judgments under detailed rubrics. On a given day you might compare two explanations of quantum mechanics, two pieces of marketing copy, and two responses to a sensitive personal question. The common thread is careful reading, rubric application, and clear written justifications. Speed matters, but not at the expense of thoughtfulness.

We look for people with sharp critical reading skills and the intellectual range to evaluate responses on topics they may not be experts in. You do not need to know everything -- but you do need to know how to spot when an AI is confidently wrong, subtly misleading, or superficially helpful without actually addressing the question. Prior experience with RLHF annotation pipelines (e.g., at Scale, Surge, or Invisible) is a strong plus.
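To give a concrete sense of the work product, a pairwise preference task typically bundles the prompt, both responses, a verdict, rubric scores, and a written justification. The sketch below is illustrative only; the field names and structure are assumptions, not any specific vendor's schema:

```python
from dataclasses import dataclass

@dataclass
class PreferenceAnnotation:
    """One pairwise comparison: two model responses to the same prompt.

    Field names are hypothetical, for illustration only.
    """
    prompt: str
    response_a: str
    response_b: str
    preferred: str        # "a", "b", or "tie"
    rubric_scores: dict   # per-criterion scores, e.g. {"accuracy": 4}
    justification: str    # short written rationale, required for every judgment

    def is_complete(self) -> bool:
        # A submittable annotation needs a valid verdict and a non-empty rationale.
        return self.preferred in {"a", "b", "tie"} and bool(self.justification.strip())

task = PreferenceAnnotation(
    prompt="Explain quantum entanglement to a high-school student.",
    response_a="...",
    response_b="...",
    preferred="b",
    rubric_scores={"accuracy": 4, "clarity": 5, "helpfulness": 4},
    justification="Response B avoids the common faster-than-light-signaling error.",
)
print(task.is_complete())  # True
```

The written justification is a first-class field, not an afterthought: project leads read it to check that the verdict follows from the rubric.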

Onboarding takes about one week and includes rubric training, practice tasks with feedback, and a calibration exam. After onboarding, you work asynchronously on your own schedule. A Slack workspace provides real-time access to project leads and fellow annotators for guideline questions.
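Calibration in annotation work is usually assessed with chance-corrected agreement between annotators. A common statistic is Cohen's kappa; the implementation below is the standard formula, offered as background rather than the exam's actual scoring method:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of tasks where both annotators match.
    p_o = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled at random
    # with their own observed label frequencies.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two annotators' verdicts on the same five comparison tasks.
a = ["a", "b", "a", "tie", "b"]
b = ["a", "b", "b", "tie", "b"]
print(round(cohens_kappa(a, b), 4))  # 0.6875
```

A kappa near 1.0 means near-perfect agreement beyond chance; values well below that usually signal a rubric that needs tightening rather than careless annotators.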

// SKILLS & REQUIREMENTS

- Background in linguistics, philosophy, law, or STEM
- Comfort evaluating content across diverse subject areas
- Experience with RLHF or preference labeling pipelines
- Ability to follow detailed annotation guidelines consistently
- Good judgment on safety and sensitivity issues

// READY TO GET STARTED?

Apply in minutes

Create your profile, select your areas of expertise, and start working on frontier AI projects.

Apply Now