LG Launches Korea’s First Reasoning AI Model to Compete with OpenAI and DeepSeek

LG Unveils Exaone Deep: Korea’s First Reasoning AI Model
Introduction to Exaone Deep
LG AI Research recently unveiled Exaone Deep, a significant milestone as South Korea’s first reasoning artificial intelligence model. The model was presented at Nvidia’s GTC conference in San Jose, California, where LG positioned it as a strong contender against established players such as OpenAI and DeepSeek.
Performance Overview of Exaone Deep
Exaone Deep 32B, LG’s flagship model, has 32 billion parameters, roughly 5% of the 671 billion parameters in DeepSeek R1. Despite this difference in scale, Exaone Deep has demonstrated strong capabilities, particularly in solving difficult mathematical and scientific problems.
Parameter Importance
Parameters are the numerical values an AI model learns during training; they determine how the model processes data and produces its outputs, and their count is a rough measure of how much a model can learn and represent.
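To make the term concrete, here is a small illustrative sketch, unrelated to Exaone Deep’s actual architecture, that counts the learnable parameters of a toy PyTorch network and compares the headline parameter counts cited above:

```python
# Illustrative only: this toy network has nothing to do with Exaone Deep's
# real architecture; it simply shows what "parameters" means in practice.
import torch.nn as nn

toy_model = nn.Sequential(
    nn.Linear(128, 256),  # 128*256 weights + 256 biases
    nn.ReLU(),
    nn.Linear(256, 10),   # 256*10 weights + 10 biases
)
num_params = sum(p.numel() for p in toy_model.parameters())
print(f"Toy model parameters: {num_params:,}")                # 35,594

# Headline figures from the article, for scale:
print(f"Exaone Deep 32B vs DeepSeek R1: {32e9 / 671e9:.1%}")  # ~4.8%, i.e. about 5%
```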
Efficiency and Cost-Effectiveness
According to LG, Exaone Deep performs well on high-difficulty benchmarks against much larger rivals. It runs on a single Nvidia H100 chip, whereas DeepSeek R1 requires 16 GPUs, underscoring the model’s cost-effectiveness as well as its performance.
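A rough back-of-the-envelope calculation suggests why a 32-billion-parameter model can be served on a single H100 (with 80 GB of memory) while a 671-billion-parameter model cannot, assuming 16-bit weights and ignoring activation and KV-cache memory:

```python
# Rough memory estimate for holding a model's weights in GPU memory.
# Assumes 16-bit (2-byte) weights; real deployments may quantize further.
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory needed just for the weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(32e9))   # ~64 GB   -> fits on one 80 GB H100
print(weight_memory_gb(671e9))  # ~1342 GB -> needs many GPUs
```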
Computational Achievements
Exaone Deep’s performance has been evaluated against various benchmarks:
- Korean College Scholastic Ability Test (CSAT) 2025: Exaone Deep scored 94.5 points, outperforming OpenAI’s o1-mini, which achieved 84.4 points, and DeepSeek R1 with 88.8 points.
- American Invitational Mathematics Examination (AIME) 2025: It delivered a score of 80, matching the performance of DeepSeek R1.
- Graduate-level Google-Proof Q&A (GPQA) benchmark: The model scored 66.1 on problem-solving in subjects such as physics, chemistry, and biology.
- LiveCodeBench: In coding capability evaluations, Exaone Deep scored 59.5, positioning it ahead of other similarly sized models.
Reasoning Capabilities
An LG executive said, “Humans think, reason, and produce results. For AI agents and robots to evolve into models that genuinely meet human needs, inference capability is crucial.” This underscores LG’s focus on strengthening reasoning in its AI models.
Recognition and Future Developments
Following its launch, Exaone Deep 32B was featured on Epoch AI’s list of Notable AI Models, a significant recognition for Korean AI. LG also presented two additional models (their size ratios are checked in the sketch below):
- Exaone Deep 7.8B: A lighter model that retains 95% of the 32B model’s performance at 24% of its size.
- Exaone Deep 2.4B (on-device): Achieves 86% of the larger model’s performance while being only 7.5% of its size.
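For reference, a quick check of the quoted size ratios against the stated parameter counts:

```python
# Verify the size ratios quoted above (parameter counts in billions).
sizes_b = {"Exaone Deep 32B": 32.0, "Exaone Deep 7.8B": 7.8, "Exaone Deep 2.4B": 2.4}
base = sizes_b["Exaone Deep 32B"]
for name, params in sizes_b.items():
    print(f"{name}: {params / base:.1%} of the 32B model")  # 7.8B -> ~24%, 2.4B -> 7.5%
```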
Additionally, Exaone Deep shows strong general language understanding, achieving the top score of 83 on the Massive Multitask Language Understanding (MMLU) benchmark.
Open-Source Initiative
LG AI Research has released all Exaone Deep models as open source, allowing developers, researchers, and users to freely access and build upon them.
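As a minimal sketch of what that access might look like, the snippet below loads a released checkpoint with the Hugging Face transformers library. The repository id is an assumption based on LG AI Research’s naming; the exact id, license terms, and any required custom code should be confirmed on the official release page.

```python
# Minimal sketch: loading an open-source Exaone Deep checkpoint with Hugging Face
# transformers. The repo id below is an assumption -- verify it against
# LG AI Research's official release before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "LGAI-EXAONE/EXAONE-Deep-7.8B"  # assumed/unverified repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # place weights on the available GPU(s)
    trust_remote_code=True,  # some Exaone releases ship custom model code
)

prompt = "Solve step by step: what is the sum of the first 100 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```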
Corporate Vision
In his New Year address, LG Corp.’s CEO Koo Kwang-mo emphasized the company’s commitment to innovation: “The current LG has been built by accumulating many moments of challenging new areas and creating unprecedented value.” He envisions a future where cutting-edge technologies, like AI, seamlessly integrate into daily lives, freeing up time for people to engage in more fulfilling activities.
The introduction of Exaone Deep solidifies LG’s position in the competitive AI landscape and reflects its ambition to innovate continuously within this crucial field.