Code Review Annotator
// ROLE SUMMARY
Evaluate AI-generated code for correctness, efficiency, readability, and best practices, then rank candidate solutions and justify your rankings.
// DESCRIPTION
You will evaluate code generated by AI models. Each task shows you a programming prompt and one or more candidate solutions. Your job is to assess correctness, efficiency, readability, and adherence to best practices, then rank the solutions and write a brief justification. Languages vary by project but commonly include Python, JavaScript/TypeScript, Java, C++, and SQL. Some tasks also ask you to identify bugs, suggest fixes, or rate the quality of inline comments and documentation.
// SKILLS & REQUIREMENTS
Proficiency in at least two programming languages is required. You should be able to spot off-by-one errors, recognize when an algorithm has worse-than-necessary time complexity, and tell the difference between code that works and code that is genuinely good. Professional software development experience is strongly preferred; if you regularly do code reviews as part of your day job, this work will feel familiar.
After a one-hour onboarding session covering the evaluation rubric and the annotation tool, you start with a calibration set of 10 tasks. Once you pass calibration, live tasks become available immediately. Turnaround expectations are set per batch, typically 48-72 hours.
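As a toy illustration of the kinds of issues reviewers are expected to catch, here is a short Python sketch (the function names and data are hypothetical, not taken from any actual task) showing an off-by-one bug alongside a correct version, and a quadratic-time check alongside a linear-time equivalent:

```python
# Hypothetical examples of issues a code reviewer should flag.

def sum_first_n_buggy(items, n):
    """Intended to sum the first n items, but the loop bound is off by one."""
    total = 0
    for i in range(n - 1):  # bug: stops one short; should be range(n)
        total += items[i]
    return total

def sum_first_n_fixed(items, n):
    """Correct and idiomatic: slice, then sum."""
    return sum(items[:n])

def has_duplicates_quadratic(items):
    """Works, but compares every pair: O(n^2) time."""
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

def has_duplicates_linear(items):
    """Same result in O(n) expected time using a set."""
    return len(set(items)) != len(items)
```

Both duplicate checks return the same answer; a good review notes that the quadratic version is correct but unnecessarily slow on large inputs, while the off-by-one version is simply wrong.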
// READY TO GET STARTED?
Apply in minutes
Create your profile, select your areas of expertise, and start working on frontier AI projects.