Bolt
A lightweight deep learning inference library for heterogeneous hardware deployment. Supports ARM CPUs (ARMv7/v8/v9), x86 CPUs (AVX2/AVX-512), and GPUs (Mali, Qualcomm, Intel, AMD). Reported to deliver 15%+ faster inference than other open-source libraries. Supports FP32, FP16, INT8, and 1-bit precision, with model conversion from Caffe, ONNX, TFLite, and TensorFlow. Widely deployed across Huawei product lines.