China’s DeepSeek Unveils New Open-Source AI Following R1’s Challenge to OpenAI

DeepSeek’s Latest AI Breakthrough: Prover V2
The Chinese artificial intelligence (AI) company DeepSeek has introduced a new open-weight large language model called Prover V2, uploaded to the Hugging Face platform on April 30, 2025. Designed for verifying mathematical proofs, Prover V2 is released under the permissive, open-source MIT license, which allows broad access and reuse in a variety of settings.
Specifications of Prover V2
With 671 billion parameters, Prover V2 is far larger than its predecessors, Prover V1 and Prover V1.5, which came out in August 2024. According to its accompanying research paper, Prover V1 was built to translate math competition problems into formal logic written in Lean 4, a programming language and interactive theorem prover widely used for machine-checked proofs.
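To give a sense of what such formalization looks like, here is a toy Lean 4 theorem. It is purely illustrative and not drawn from DeepSeek's training data; it simply shows the kind of statement Lean can verify mechanically:

```lean
-- A toy theorem in Lean 4: addition of natural numbers is commutative.
-- Lean checks the proof mechanically; an incorrect proof is rejected.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

A prover model's job is to produce terms like `Nat.add_comm a b` automatically, with Lean acting as the verifier that accepts or rejects them.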
The main goal of Prover V2 is to compress mathematical knowledge into a format that allows it both to generate proofs and to verify them, a capability that could meaningfully support academic research and mathematics education.
Understanding AI Models and Their Functionality
In AI, a “model” is the set of files, chiefly the trained weights, that lets an AI system run on a user's own hardware without relying on outside servers. High-performance language models like Prover V2, however, typically demand substantial computing power and memory, putting them out of reach of the average user.
The model is approximately 650 gigabytes in size and must be loaded into RAM or VRAM to run. To get it down to that size, Prover V2's parameters were quantized to 8-bit floating-point precision, halving the storage needed for each parameter compared with the more common 16-bit formats.
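A back-of-the-envelope calculation shows how the 8-bit precision relates to the reported size. This is rough arithmetic only; real checkpoint files add metadata and overhead, and the exact figures are assumptions:

```python
# Rough memory footprint of a 671-billion-parameter model at two precisions.
# Illustrative arithmetic; actual checkpoint sizes vary with file format.
params = 671e9

for bits, label in [(16, "fp16"), (8, "fp8")]:
    size_gb = params * bits / 8 / 1e9      # bytes -> decimal gigabytes
    size_gib = params * bits / 8 / 2**30   # bytes -> binary gibibytes
    print(f"{label}: ~{size_gb:.0f} GB ({size_gib:.0f} GiB)")

# fp16: ~1342 GB (1250 GiB)
# fp8:  ~671 GB (625 GiB) -- consistent with the roughly 650 GB reported
```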
Advancements in Previous Models
Prover V1 was built on the seven-billion-parameter DeepSeekMath model and fine-tuned on synthetic data, meaning data generated by AI models rather than written by humans. Prover V1.5 improved on its predecessor's training efficiency and accuracy. The specific advances in Prover V2, however, have not yet been documented in a detailed paper or report.
The Significance of Open Weights
The release of AI model weights as open source remains a debated practice. Supporters argue that it democratizes access to AI, letting users run models on their own terms without corporate restrictions. Critics counter that such openness invites misuse, since companies cannot monitor how their released models are applied or block harmful uses.
When DeepSeek released its R1 model in the same manner, it raised concerns about security and potential misuse, fueling debate over the implications of open-sourcing AI. Proponents of open-source AI, meanwhile, celebrated DeepSeek for picking up where companies such as Meta left off and demonstrating that open models are viable alternatives to proprietary systems.
Wider Accessibility of Language Models
Advances in AI have made it possible for users without expensive, high-powered computers to run language models locally. Two techniques, model distillation and quantization, are chiefly responsible for this shift.
Model distillation involves training a smaller “student” model to emulate the behavior of a larger “teacher” model, preserving much of its functionality while reducing complexity. Quantization refers to lowering the precision of a model’s weights and activations, which decreases size and speeds up processing while keeping the model largely effective.
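As a concrete sketch of the distillation idea, the snippet below shows the standard soft-label objective in PyTorch. The tiny linear “teacher” and “student” models, the `distillation_loss` helper, and the hyperparameters are all hypothetical stand-ins; this is the generic technique, not DeepSeek's actual training code:

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for a large "teacher" and a small "student" model.
# Purely illustrative; real distillation uses full language models.
teacher = torch.nn.Linear(128, 10)
student = torch.nn.Linear(128, 10)

def distillation_loss(x, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    with torch.no_grad():                      # the teacher stays frozen
        teacher_logits = teacher(x)
    student_logits = student(x)
    # A temperature > 1 softens the distributions, exposing the teacher's
    # relative preferences among classes, not just its top answer.
    t = F.softmax(teacher_logits / temperature, dim=-1)
    s = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * temperature**2

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 128)                       # a dummy batch of inputs
optimizer.zero_grad()
loss = distillation_loss(x)
loss.backward()                                # gradients flow only to the student
optimizer.step()
```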
Prover V2's move from 16-bit to 8-bit floating-point numbers is an example of quantization, and further size reductions are possible with additional optimizations. DeepSeek's earlier R1 model was likewise distilled into several versions, scaling down from 70 billion parameters to as little as 1.5 billion, with the smallest version small enough to run on some mobile devices.
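For intuition, here is a minimal quantization sketch using simple absmax integer quantization. Prover V2 actually uses an 8-bit floating-point format, whose details differ, but the space saving per weight is the same; the `quantize_int8` and `dequantize` helpers are illustrative names, not any library's API:

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Absmax quantization: map float weights onto 8-bit integers."""
    scale = w.abs().max() / 127.0              # one scale factor per tensor
    q = torch.round(w / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    """Approximate reconstruction of the original float weights."""
    return q.float() * scale

w = torch.randn(4, 4)                          # a toy weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print((w - w_hat).abs().max())                 # small rounding error remains
# Each weight now occupies 1 byte instead of 4 (fp32) or 2 (fp16).
```

The trade-off is a small rounding error per weight in exchange for a model that is half (or a quarter of) the size, which is usually acceptable for inference.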