Andrea Vallone, formerly a senior safety researcher at OpenAI, has moved to Anthropic. She will work on the alignment team, which focuses on risks posed by AI models. Vallone spent three years at OpenAI, where she founded the "Model Policy" research team and contributed to major projects including GPT-4, GPT-5, and the company's reasoning models.
Over the past year, Vallone led OpenAI's research on an increasingly urgent question: how should AI models respond when users show signs of emotional dependency or mental health struggles? Some users, including teenagers, have taken their own lives after conversations with chatbots. Several families have filed lawsuits, and the U.S. Senate has held hearings on the issue.
At Anthropic, Vallone will report to Jan Leike, who led safety research at OpenAI before leaving the company in May 2024. On his departure, Leike publicly criticized the company, saying safety had taken a backseat to shipping new products.