FlagScale
End-to-end distributed training and inference framework for large models, built on open-source projects such as Megatron-LM and vLLM. Supports pooled training across 8+ different chip architectures, attaining 85%+ of upper-bound performance in hybrid training. Provides a unified runner for training, inference, and serving via Hydra-based configuration. Used as the training backbone for OpenSeek, RoboBrain 2.0, and other BAAI models. Actively developed through 2025.
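The "unified runner" pattern can be illustrated with a minimal sketch: a single entry point that dispatches to training, inference, or serving based on a hierarchical config, in the spirit of a Hydra-based setup. This is a hypothetical illustration, not FlagScale's actual API; the `RunnerConfig` fields and task names are assumptions.

```python
# Hypothetical sketch of a config-driven unified runner (not FlagScale's real
# interface): one entry point, task selected by configuration.
from dataclasses import dataclass, field


@dataclass
class RunnerConfig:
    task: str = "train"            # one of "train" | "inference" | "serve"
    model: str = "demo-model"      # model identifier (illustrative)
    overrides: dict = field(default_factory=dict)  # extra config overrides


def run(cfg: RunnerConfig) -> str:
    # Dispatch table keyed by task, mirroring a single CLI that covers
    # training, inference, and serving.
    handlers = {
        "train": lambda c: f"training {c.model}",
        "inference": lambda c: f"running inference with {c.model}",
        "serve": lambda c: f"serving {c.model}",
    }
    if cfg.task not in handlers:
        raise ValueError(f"unknown task: {cfg.task}")
    return handlers[cfg.task](cfg)


if __name__ == "__main__":
    print(run(RunnerConfig(task="serve")))  # → serving demo-model
```

In a real Hydra-based system the `RunnerConfig` would be composed from YAML files and command-line overrides rather than constructed directly, but the dispatch structure is the same.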