OpenBMB
OpenBMB (Open Lab for Big Model Base) is a premier collaborative lab and a global leader in on-device and ultra-efficient LLMs. Unlike state-run national laboratories, OpenBMB operates on a hybrid academic-startup model, serving as the external deployment arm of Tsinghua's THUNLP group. Its commercial counterpart, ModelBest (Mianbi Intelligence), funnels resources into the ecosystem, including a major 2026 funding round led by China Telecom that raised hundreds of millions of yuan.
The lab's "Edge-First" philosophy focuses on intelligence density rather than brute-force scaling. While larger labs race toward 10-trillion-parameter models, OpenBMB's research (exemplified by the MiniCPM series) aims to package GPT-4-level multimodal performance into 8B-parameter models that run locally on smartphones and consumer-grade hardware like the NVIDIA RTX 5090. Through its partnership with China Telecom, OpenBMB also gains prioritized access to the Tianyi Cloud national computing network.
OpenBMB is famous for its ecosystem of efficiency-focused tools, including BMTrain, BMInf, and OpenPrompt. The lab also created ChatDev (multi-agent software development) and the ToolLLM/ToolBench framework. The lab's lineage traces back to the early CPM models co-sponsored by BAAI as part of the Wu Dao program.
The two principal investigators are Prof. Maosong Sun and Prof. Zhiyuan Liu, both from Tsinghua's Department of Computer Science. Their team has been instrumental in proving that massive-scale reasoning can be democratized through highly optimized, small-but-mighty architectures.
People
- Maosong Sun — PI (formerly Tsinghua THUNLP)
- Zhiyuan Liu — PI (formerly Tsinghua THUNLP)
- Xu Han — Core Researcher
- Shengding Hu — Core Researcher (MiniCPM)