Note: This is a remote position open to candidates in the USA. OpenTrain AI is hiring detail-oriented, analytical contributors to help test and improve autonomous AI agent evaluations. The role involves reviewing evaluation tasks, identifying inconsistencies, and collaborating with teams to ensure agents are tested thoroughly.
Responsibilities
• Review and refine agent evaluation tasks and scenarios for logic, completeness, and realism
• Identify inconsistencies, ambiguities, and missing assumptions
• Define gold-standard expected behaviors for agents
• Annotate reasoning paths, cause-effect relationships, and plausible alternatives
• Collaborate with QA, writers, and developers to suggest refinements and expand edge case coverage
• Ensure autonomous agents are tested thoroughly and realistically
Skills
• Strong analytical thinking and excellent attention to detail
• Fluent written English with clear documentation skills
• Comfort reading structured formats such as JSON or YAML (no need to write code)
• Ability to reason about complex systems and spot what could break or be misinterpreted
• Prior exposure to QA/test-case thinking, logic puzzles, or evaluation frameworks
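For context, evaluation tasks in roles like this are often expressed in structured formats such as JSON. The sketch below is purely illustrative: every field name and value is invented, and the actual task schema used by OpenTrain AI may differ entirely.

```json
{
  "task_id": "example-001",
  "scenario": "Agent must book the cheapest refundable flight under $300.",
  "assumptions": ["prices are in USD", "a refundable-fare filter is available"],
  "expected_behavior": [
    "apply the refundable filter before sorting by price",
    "reject any fare at or above $300"
  ],
  "edge_cases": ["no refundable flights exist", "two fares tie on price"]
}
```

Reviewing a task like this means checking the scenario for ambiguity (is $300 a strict or inclusive limit?), confirming the assumptions are stated rather than implied, and making sure the edge cases cover what could break.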
Company Overview
• OpenTrain AI connects companies with vetted data labeling experts, supports any annotation tool, and manages escrow payments. Founded in 2022, it is headquartered in Seattle, Washington, USA, and has a workforce of 2–10 employees. Its website is https://www.opentrain.ai.
Apply Now