31B-parameter proprietary model with a 64K-token context window, built by scaling up Solar Pro (22B, itself depth-up-scaled from Phi-3-medium 14B). Two operating modes: Chat (fast, fluent responses) and Reasoning (multi-step chain-of-thought for math, code, and logic). Designed to fit on a single GPU.
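A minimal sketch of how the two operating modes might be selected in practice, assuming an OpenAI-compatible chat-completions payload. The model identifier `solar-pro-2` and the `reasoning_effort` field are illustrative assumptions, not a documented API; the sketch only builds the request body and sends nothing.

```python
import json

def build_request(prompt: str, reasoning: bool = False) -> dict:
    """Build a chat-completions payload; toggle a hypothetical Reasoning mode."""
    payload = {
        "model": "solar-pro-2",  # assumed model identifier, for illustration only
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,
    }
    if reasoning:
        # Hypothetical switch for the multi-step chain-of-thought mode.
        payload["reasoning_effort"] = "high"
    return payload

# Chat mode for a fluency task, Reasoning mode for a math task.
chat_req = build_request("Summarize this contract clause.")
math_req = build_request("Prove that sqrt(2) is irrational.", reasoning=True)
print(json.dumps(math_req, indent=2))
```

The point of the toggle is that the same endpoint serves both modes; only the request payload changes.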

AA Intelligence Index: 14 (non-reasoning, #30 of 117). Matches or exceeds GPT-4o, DeepSeek R1, and Qwen 3 on several benchmarks despite being less than half their size. Leads all models on Korean benchmarks (Ko-MMLU, Hae-Rae, Ko-IFEval).

Model Details

Architecture: Dense
Parameters: 31B
Context window: 64,000 tokens
Tags: enterprise · reasoning · efficiency
