OpenAI
San Francisco, CA
About the Team

The Alignment Science team at OpenAI studies the science of intent alignment: how to train models to understand what users are actually asking for, act faithfully on that intent while respecting safety constraints, verify what they did, and report their limitations honestly. Our work sits alongside broader value alignment efforts, but this team focuses on scalable methods for ensuring instruction-following, honesty, and robustness as models become more capable. We work on both sides of alignment research: producing externally publishable results and integrating promising techniques into the models OpenAI deploys.

Recent team research on model confessions studies how models can be trained to honestly report shortcomings after their original answer, including failures involving hallucination, instruction following, scheming, and reward hacking. That work reflects a broader agenda: build scalable and general methods to ensure models follow human intent. The team...