A 235-billion-parameter open-source medical model that conducts full clinical dialogue with proactive information acquisition, long-horizon reasoning, and adaptive hallucination suppression. It achieves state-of-the-art results on HealthBench, HealthBench-Hallu, and ScanBench, outperforming GPT-5.2 in clinical inquiry, advisory, and safety, and is licensed under Apache 2.0. An M3 Plus variant, announced January 22, 2026, reduces the hallucination rate to 2.6% and cuts API costs by 70%.

Outputs (2)

Baichuan-M3: Modeling Clinical Inquiry for Reliable Medical Decision-Making

paper

Technical paper describing the model's clinical inquiry modeling, proactive information acquisition, and hallucination suppression techniques.

arXiv: 2602.06570

Baichuan-M3-235B

model

235-billion-parameter medical reasoning model with MoE architecture, available in FP8 and GPTQ-INT4 quantizations.

Architecture MoE
Parameters 235B
Tags: open-weight, biology, reasoning, moe