Korea's first 500B-scale model: a 519B-total / 33B-active MoE with 192+1 experts. Uses Multi-Latent Attention, and Think-Fusion for switching between reasoning and direct-answer modes. Trained on ~10T tokens. KMMLU: 80.2, AIME25: 89.8. Developed by an 8-member, SKT-led consortium with Korean government funding under the "National Representative AI" project. Apache-2.0 license.
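Illustrative only: a minimal sketch of the routed-plus-shared expert pattern that the "192+1" figure suggests, assuming it means 192 routed experts plus 1 always-on shared expert. The dimensions, top-k value, and module names below are assumptions for demonstration, not the released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Top-k routed experts plus one always-on shared expert (toy dimensions)."""
    def __init__(self, d_model=64, d_ff=256, n_routed=192, top_k=8):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_routed, bias=False)
        self.routed = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_routed)
        )
        # The "+1": a shared expert that processes every token regardless of routing.
        self.shared = nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))

    def forward(self, x):  # x: (tokens, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        weights, idx = gates.topk(self.top_k, dim=-1)          # keep top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize kept gate weights
        mixed = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique().tolist():
                mask = idx[:, k] == e
                mixed[mask] += weights[mask, k:k + 1] * self.routed[e](x[mask])
        return self.shared(x) + mixed                          # shared expert always contributes

layer = SparseMoELayer()
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```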

Model Details

Architecture: MoE
Parameters: 519B (total)
Active params: 33B
Context window: 131,000 tokens
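For quick reference, a hypothetical sketch expressing the specs above as a config object; the field names are illustrative, and the active-parameter fraction is simple arithmetic from the listed numbers.

```python
from dataclasses import dataclass

@dataclass
class ModelSpec:
    architecture: str = "MoE"
    total_params_b: float = 519.0   # billions of parameters
    active_params_b: float = 33.0   # billions active per forward pass
    context_window: int = 131_000   # tokens
    n_routed_experts: int = 192
    n_shared_experts: int = 1

    @property
    def active_fraction(self) -> float:
        return self.active_params_b / self.total_params_b

spec = ModelSpec()
print(f"{spec.active_fraction:.1%} of parameters active per token")  # ~6.4%
```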

Paper

arXiv: 2601.09200

Tags: moe, open-weight, frontier, reasoning, multilingual