OpenAI is hiring a Head of Preparedness, a position focused on managing the safety risks posed by AI models. OpenAI CEO Sam Altman cites the now well-documented effects of AI models on mental health as one example. Beyond that, models have become capable enough at cybersecurity to find critical vulnerabilities on their own.
This is a critical role at an important time: models are improving quickly, and while they are now capable of remarkable things, they are also starting to present real challenges.
One of the key challenges for the new leader will be ensuring that cybersecurity defenders can use the latest AI capabilities while attackers are kept locked out. The role also covers the safe handling of biological capabilities (that is, how AI models surface biological knowledge) and self-improving systems.
OpenAI has faced criticism recently, particularly from former employees, for neglecting model safety in favor of shipping products. Many safety researchers have left the company.
