OpenAI
San Francisco, CA
About the Team

The compute infrastructure team runs the GPU fleet and large-scale compute clusters that serve the models backing ChatGPT and the API, while also supporting training workloads for our next-generation models. We operate a large, modern GPU fleet and provide a unified platform for other OpenAI teams to seamlessly run production Applied AI and Research training workloads. We seek to learn from deployment and distribute the benefits of AI, while ensuring that this powerful tool is used responsibly and safely. Safety is more important to us than unfettered growth.

About the Role

As a Technical Program Manager for Compute Infrastructure, you will be part of an engineer-first TPM team that owns the end-to-end delivery of large-scale GPU clusters, partnering with engineers to bring clusters online across external providers and partners. You'll run a broad, parallel portfolio spanning hardware, networking, power, and cooling, driving execution, risk management, and...