OpenAI
San Francisco, CA
About the Team

The Alignment Oversight team at OpenAI develops techniques for improving control, accountability, and alignment as AI systems become more capable and agentic. We combine longer-horizon research with hands-on deployment. We study long-term questions about how increasingly intelligent systems can be supervised, constrained, and corrected, while also building oversight systems that are used in practice today, both internally and externally (see our recent work on code review and action monitoring for codex). We also study how to learn from real-world deployments: using oversight data and human interventions to train future models to be more aligned, while preserving the effectiveness and independence of the oversight systems themselves.

About the Role

As a researcher on the Alignment team, you will design and run experiments that improve our ability to oversee increasingly capable models. You will work on hands-on model training, evaluation design, and research...