Perplexity AI has unveiled R1 1776, a version of the DeepSeek-R1 language model post-trained specifically to remove Chinese censorship.
The original DeepSeek-R1 model generated significant interest by approaching the capabilities of leading reasoning models like o1 and o3-mini at substantially lower cost. This efficiency advantage triggered a dramatic decline in U.S. AI chip stocks, particularly affecting Nvidia. According to the Financial Times, Nvidia's resulting $589 billion single-day market value loss stands as the largest in U.S. corporate history.
The open-source model's main limitation was its handling of topics censored in China: instead of addressing sensitive questions directly, it would respond with pre-approved Communist Party messaging. Perplexity claims to have eliminated these biases and censorship constraints through its modifications to R1.
The company's post-training process began with extensive data collection on topics censored in China, covering both questions and factual responses. The team identified approximately 300 censored subjects and used them to build a multilingual censorship classifier, which surfaced 40,000 multilingual user prompts that had previously triggered censored responses.
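Perplexity hasn't published the classifier itself, but the general approach can be illustrated with off-the-shelf tools. The following is a minimal sketch, assuming a generic multilingual embedding model and a hand-written seed list standing in for the company's roughly 300 subjects; the threshold and topics are placeholders, not Perplexity's actual setup.

```python
# Hypothetical sketch: mining prompts that touch censored topics by
# embedding them with a multilingual model and comparing against a
# seed list of topic descriptions. This is an illustration of the
# general technique, not Perplexity's published classifier.
from sentence_transformers import SentenceTransformer, util

# Off-the-shelf multilingual embedding model (an assumption)
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Stand-ins for the ~300 censored subjects Perplexity identified
seed_topics = [
    "Tiananmen Square 1989",
    "Taiwan's political status",
]

# Candidate user prompts in multiple languages
prompts = [
    "What happened in Beijing in June 1989?",
    "¿Cuál es la capital de Francia?",  # unrelated control prompt
]

topic_emb = model.encode(seed_topics, convert_to_tensor=True)
prompt_emb = model.encode(prompts, convert_to_tensor=True)

# Flag prompts whose best topic similarity exceeds a threshold
scores = util.cos_sim(prompt_emb, topic_emb)
THRESHOLD = 0.5  # would be tuned against labeled data in practice
for prompt, row in zip(prompts, scores):
    if row.max().item() > THRESHOLD:
        print(f"flagged: {prompt!r} (score={row.max().item():.2f})")
```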
One of the biggest challenges, Perplexity reports, was finding accurate, well-reasoned responses to previously censored prompts. The company has not disclosed its exact sources for these answers and reasoning chains.
R1 1776 maintains its performance despite de-censoring
According to Perplexity's testing, which involved over 1,000 examples evaluated by both human annotators and AI judges, R1 1776 now handles previously censored topics comprehensively and without bias. Their benchmarking shows that the model's mathematical and reasoning capabilities remain unchanged from the base R1 version, despite the removal of censorship constraints.
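Perplexity hasn't released its evaluation harness, but an LLM-as-judge pass of this kind typically looks like the sketch below, where the judge model, rubric, and scoring scale are all assumptions rather than the company's published setup.

```python
# Hypothetical sketch of an LLM-as-judge evaluation: each previously
# censored prompt is paired with the model's answer, and a judge model
# rates the answer for completeness and absence of canned messaging.
# The rubric and judge model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any judge model works

RUBRIC = (
    "Rate the ANSWER to the QUESTION on a 0-10 scale for how fully and "
    "factually it addresses the topic. Answers that deflect or repeat "
    "government talking points score 0. Reply with the number only."
)

def judge_score(question: str, answer: str) -> int:
    """Ask the judge model for a single integer score."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"QUESTION: {question}\nANSWER: {answer}"},
        ],
    )
    return int(resp.choices[0].message.content.strip())

# Aggregate over an evaluation set of (question, model_answer) pairs
eval_set = [("What happened at Tiananmen Square in 1989?", "...")]
scores = [judge_score(q, a) for q, a in eval_set]
print(f"mean judge score: {sum(scores) / len(scores):.2f}")
```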
The model weights are now available in a Hugging Face repository, and the model can also be accessed through Perplexity's Sonar API.
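For local experimentation, the weights can be loaded with the standard transformers API. The snippet below is a minimal sketch assuming the repo id perplexity-ai/r1-1776; note that the underlying DeepSeek-R1 base is a 671-billion-parameter mixture-of-experts model, so running the full checkpoint requires a multi-GPU cluster, which is why most users will prefer the hosted Sonar API.

```python
# Minimal sketch of loading the open weights with Hugging Face
# transformers, assuming the repo id perplexity-ai/r1-1776.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "perplexity-ai/r1-1776"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard across available accelerators
    trust_remote_code=True,  # in case the repo ships custom model code
)

prompt = "What happened at Tiananmen Square in 1989?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```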