
OpenAI is ramping up security to stop rivals from copying its AI models, a move that comes as competition among major AI companies heats up.


The changes follow accusations against Chinese start-up Deepseek, which reportedly used OpenAI models to develop its own systems, including R1, through a process known as "distillation." Deepseek has not commented on the allegations.
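The "distillation" referred to here is knowledge distillation: training a smaller "student" model to imitate a larger "teacher" model by matching its output probability distributions rather than copying its weights. A minimal sketch of the core loss is shown below; all names and values are illustrative and not taken from any OpenAI or Deepseek code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student output distributions.

    Minimizing this loss pulls the student's soft predictions toward
    the teacher's, without any access to the teacher's weights -- only
    its outputs are needed, which is why API access can suffice.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# A student that already matches the teacher incurs zero loss;
# any mismatch yields a positive loss to train against.
teacher = np.array([[2.0, 0.5, -1.0]])
print(distillation_loss(teacher, teacher))                      # 0.0
print(distillation_loss(teacher, np.array([[0.0, 0.0, 0.0]])))  # > 0
```

In practice the student would be trained by gradient descent on this loss over many teacher-labeled queries; the sketch only shows the objective being minimized.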

According to the Financial Times, OpenAI has responded by restricting access to sensitive information, rolling out biometric access controls, and introducing new data segregation policies. One of the main steps: internal systems are kept offline unless they receive explicit approval for internet access, a move designed to protect model weights from unauthorized leaks.

Employees are now only allowed to access projects for which they have specific clearance, a practice known as "information tenting." For instance, during development of the then-new "o1" model, codenamed "Strawberry," only colleagues who had been granted access to the project could discuss it with the team working on it.


Physical security has been stepped up as well. Data centers now have stricter entry rules, and OpenAI has hired security experts with military backgrounds, including Dane Stuckey (formerly at Palantir) and retired US General Paul Nakasone.

US AI firms tighten security amid China concerns

OpenAI says these changes are part of a broader investment in security and not a response to a specific incident. The company is also responding to warnings from US officials about rising industrial espionage from foreign actors, particularly China.

Recently, OpenAI and Anthropic warned the US government about Deepseek's R1 model, citing risks related to state involvement and wide-ranging data access. OpenAI has updated its Preparedness Framework to more systematically track high-risk capabilities like autonomous replication and cyberattacks, using stricter criteria and automated testing.

In the global AI race, technology theft has become a growing concern for national and economic security. China reportedly imposed unofficial travel restrictions on AI professionals, advising them to avoid traveling to the United States and allied countries unless absolutely necessary. In connection with these broader concerns, employees at Deepseek are reportedly required to surrender their passports and are no longer allowed to travel freely outside China.

Summary
  • OpenAI is tightening the protection of its AI models after allegations surfaced that the Chinese start-up Deepseek used OpenAI models to build its own systems without authorization.
  • The company is introducing measures such as restricted access to information, biometric controls, stricter data separation, updated data center access rules, and hiring security experts with military backgrounds.
  • These steps are partly a response to warnings from US authorities about industrial espionage, especially from China, as OpenAI views the theft of AI technology as a risk to national and economic security.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.