Company Description
An enterprise client is seeking experienced software engineers to help improve advanced AI systems through human feedback. This work supports leading AI organizations in training large language models to better understand software development practices, debugging, and code quality.
This is part of a cutting-edge initiative focused on enhancing how AI systems write, review, and optimize code in real-world scenarios. You'll play a key role in shaping how AI models evaluate performance, detect issues, and generate reliable outputs.
Job Description
This opportunity is ideal for engineers who enjoy analyzing systems, improving code quality, and working on complex technical challenges. You will contribute to AI training projects by evaluating outputs, refining logic, and identifying potential vulnerabilities.
What You'll Do:
Develop objective, verifiable evaluation criteria (rubrics) for system performance
Review system logs and execution paths to improve reliability and code quality
Refactor code and optimize system behavior toward ideal outcomes
Test systems for vulnerabilities, including data exposure and edge-case failures
Provide detailed, high-quality feedback on system performance and outputs