Tencent Hunyuan has open sourced HPC-Ops, a production-grade operator library for large language model inference. HPC-Ops focuses on low-level CUDA kernels for core operators such as Attention, Grouped GEMM, and Fused MoE, and exposes them through a compact C++ and Python API for integration into existing inference stacks.
HPC-Ops already runs in large-scale internal services. In those deployments it delivers roughly a 30 percent queries-per-minute (QPM) improvement for Tencent-HY models and roughly a 17 percent improvement for DeepSeek models on mainstream inference cards. These gains are reported at the service level, so they reflect the cumulative effect of faster kernels inside a real inference pipeline.
Scope and design of HPC-Ops
HPC-Ops is a production-grade, high-performance, easy-to-use operator library for LLM inference, developed by the Tencent Hunyuan AI Infra team. The project does not try to replace serving frameworks. Instead, it provides kernels and clean APIs that can be called from systems that already handle scheduling, KV cache management, batching, and transport.
The API is designed for seamless use within popular inference frameworks such as vLLM and SGLang. That means a framework team can swap in HPC-Ops kernels behind its own abstractions without changing the external behavior of its servers.
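A hypothetical sketch of what such an integration could look like from the framework side, assuming a Python binding named `hpc_ops` with a `paged_attention_decode` entry point (both names are assumptions for illustration, not the documented API); the framework keeps its own backend interface and falls back to plain PyTorch when the library is absent:

```python
# Hypothetical integration sketch: the serving framework owns the attention
# backend abstraction and swaps the kernel provider behind it. The "hpc_ops"
# module and function names below are assumptions, not the library's real API.
import torch

try:
    import hpc_ops  # assumed package name, for illustration only
    _HAVE_HPC_OPS = True
except ImportError:
    _HAVE_HPC_OPS = False


class AttentionBackend:
    """Framework-side abstraction: callers never see which kernel runs."""

    def decode(self, q, k_cache, v_cache, block_table, seq_lens):
        # q:        [batch, heads, head_dim]            (one new token per sequence)
        # k_cache:  [num_pages, page_size, heads, head_dim], v_cache likewise
        # block_table: [batch, max_pages] physical page ids per sequence
        if _HAVE_HPC_OPS:
            # Assumed entry point; the real signature may differ.
            return hpc_ops.paged_attention_decode(q, k_cache, v_cache, block_table, seq_lens)
        return self._reference_decode(q, k_cache, v_cache, block_table, seq_lens)

    def _reference_decode(self, q, k_cache, v_cache, block_table, seq_lens):
        # Slow but correct fallback: gather each sequence's pages and run
        # standard scaled dot-product attention with a single query token.
        page_size = k_cache.shape[1]
        outputs = []
        for i, n in enumerate(seq_lens.tolist()):
            pages = block_table[i][: (n + page_size - 1) // page_size]
            k = k_cache[pages].flatten(0, 1)[:n]  # [n, heads, head_dim]
            v = v_cache[pages].flatten(0, 1)[:n]
            out = torch.nn.functional.scaled_dot_product_attention(
                q[i : i + 1].transpose(0, 1),     # [heads, 1, head_dim]
                k.transpose(0, 1),                # [heads, n, head_dim]
                v.transpose(0, 1),
            )
            outputs.append(out.transpose(0, 1))   # back to [1, heads, head_dim]
        return torch.cat(outputs, dim=0)
```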
HPC-Ops is written in C++ and CUDA, with CuTe and CUTLASS as building blocks. The kernels are structured as relatively compact examples that also serve as a modern CUDA tutorial.
Kernel performance characteristics
The project publishes maximum observed speedup numbers for each operator relative to established baselines. These are microbenchmarks, and the research team stresses that performance varies across shapes and workloads, but they show the optimization ceiling.
For Attention in bf16, compared with FlashInfer, FlashAttention-2, FlashAttention-3, and TensorRT-LLM, HPC-Ops reports up to a 1.33x speedup in prefill and up to 2.22x in decode. For Attention in fp8, compared with FlashInfer, FlashAttention-3, and TensorRT-LLM, it reports up to 1.12x in prefill and up to 2.0x in decode.
For Fused MoE in fp8, compared with TensorRT-LLM and vLLM, the maximum observed speedup is up to 1.49x in prefill and 1.14x in decode. For GroupGEMM in fp8, compared with DeepGEMM, the reported gains are up to 1.1x in prefill and 1.88x in decode.
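For context on what that operator computes: a grouped GEMM runs many independent matrix multiplications whose row counts differ per group, which is exactly the shape MoE expert computation produces after routing. Below is a minimal PyTorch sketch of those semantics; it is not the HPC-Ops kernel, which fuses the per-group matmuls into a single fp8 launch.

```python
import torch

def grouped_gemm_reference(tokens, weights, group_sizes):
    """Reference semantics of a grouped GEMM.

    tokens:      [total_tokens, hidden] activations, already sorted by expert.
    weights:     [num_experts, hidden, out] one weight matrix per expert/group.
    group_sizes: [num_experts] number of tokens routed to each expert.
    A fused kernel performs all per-group matmuls in one launch; this loop
    only spells out what it computes.
    """
    outputs, start = [], 0
    for expert_id, size in enumerate(group_sizes.tolist()):
        group = tokens[start : start + size]        # [size, hidden]
        outputs.append(group @ weights[expert_id])  # [size, out]
        start += size
    return torch.cat(outputs, dim=0)                # [total_tokens, out]

# Tiny example: 3 experts receive 4, 1, and 3 tokens respectively.
tokens = torch.randn(8, 16)
weights = torch.randn(3, 16, 32)
out = grouped_gemm_reference(tokens, weights, torch.tensor([4, 1, 3]))
print(out.shape)  # torch.Size([8, 32])
```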
These numbers matter because decode is typically the latency bottleneck in autoregressive generation, where batch sizes shrink and memory traffic dominates. The fact that Attention and GroupGEMM show their largest relative gains in decode suggests that HPC-Ops targets the part of the pipeline that most users notice.
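A rough back-of-the-envelope illustration of why decode tends to be memory bound: for each new token, attention must read the entire KV cache while performing only about one floating-point operation per byte read. The shapes below are assumed for the example and are not HPC-Ops measurements.

```python
# Illustrative arithmetic-intensity estimate for one decode step of one sequence.
# All numbers are assumptions for the example, not HPC-Ops measurements.
heads, head_dim, seq_len = 32, 128, 8192
bytes_per_elem = 2  # bf16

# KV cache bytes read this step: K and V, full sequence, all heads.
kv_bytes = 2 * seq_len * heads * head_dim * bytes_per_elem

# FLOPs: q @ K^T and attn @ V, each roughly 2 * seq_len * head_dim per head.
flops = 2 * (2 * seq_len * heads * head_dim)

intensity = flops / kv_bytes  # FLOPs per byte of cache traffic
print(f"KV bytes read: {kv_bytes / 1e6:.1f} MB")
print(f"FLOPs        : {flops / 1e9:.2f} GFLOP")
print(f"Intensity    : {intensity:.1f} FLOP/byte, far below compute-bound territory")
```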
Supported kernels and precision
The current release groups its functionality into three operator families:
- Attention kernels cover both prefill and decode and include support for paged attention. Paged attention is the memory layout that frameworks like vLLM use to store key and value cache blocks in a paged structure, which improves memory reuse for long sequences (a minimal block-table sketch follows this list).
- Grouped GEMM is implemented as quantized GroupGEMM with fp8 weights. HPC-Ops supports block-wise and per-tensor scaling, so teams can trade off quantization granularity against parameter storage and calibration cost.
- Fused MoE combines mixture-of-experts routing and expert computation in a single quantized operator. It also uses fp8 expert weights and supports block-wise and per-tensor scaling strategies.
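A minimal sketch of the paged idea mentioned in the Attention bullet above: logical token positions map through a per-sequence block table to fixed-size physical pages, so a long sequence never needs a contiguous cache allocation. The class, page size, and allocator here are illustrative, not HPC-Ops or vLLM data structures.

```python
# Minimal paged KV cache sketch: fixed-size pages plus a per-sequence block
# table, the layout popularized by PagedAttention. Illustrative only.
PAGE_SIZE = 16  # tokens per page (toy value)


class PagedKVCache:
    def __init__(self, num_pages: int):
        self.free_pages = list(range(num_pages))
        self.block_tables: dict[int, list[int]] = {}  # seq_id -> physical page ids
        self.seq_lens: dict[int, int] = {}

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve a slot for one new token, allocating a page on demand.

        Returns (physical_page, offset_in_page): where the kernel would write
        this token's key/value vectors.
        """
        table = self.block_tables.setdefault(seq_id, [])
        pos = self.seq_lens.get(seq_id, 0)
        if pos % PAGE_SIZE == 0:          # current page is full, or first token
            table.append(self.free_pages.pop())
        self.seq_lens[seq_id] = pos + 1
        return table[pos // PAGE_SIZE], pos % PAGE_SIZE


cache = PagedKVCache(num_pages=8)
for _ in range(20):                        # 20 tokens -> 2 pages for sequence 0
    page, slot = cache.append_token(seq_id=0)
print(cache.block_tables[0], cache.seq_lens[0])
```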
Across these kernels, HPC-Ops provides native support for the bf16 and fp8 data types. That matches the current production trend of moving inference toward lower-precision formats that preserve accuracy while reducing memory bandwidth and improving tensor core utilization.
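To make the block-wise versus per-tensor trade-off above concrete, the sketch below simulates fp8 quantization of a weight matrix with one scale for the whole tensor and with one scale per 128x128 block; finer blocks track local dynamic range more closely at the cost of storing more scale values. This is an illustrative PyTorch simulation under assumed shapes, not the HPC-Ops quantization path.

```python
import torch

FP8_MAX = 448.0  # maximum representable magnitude of float8_e4m3


def quantize_per_tensor(w):
    """One scale for the whole matrix: cheapest to store, coarsest fit."""
    scale = w.abs().max() / FP8_MAX
    q = torch.clamp(w / scale, -FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return q, scale


def quantize_block_wise(w, block=128):
    """One scale per (block x block) tile: more scales, tighter local fit."""
    q = torch.empty_like(w, dtype=torch.float8_e4m3fn)
    scales = torch.empty(w.shape[0] // block, w.shape[1] // block)
    for i in range(0, w.shape[0], block):
        for j in range(0, w.shape[1], block):
            tile = w[i : i + block, j : j + block]
            s = tile.abs().max() / FP8_MAX
            scales[i // block, j // block] = s
            q[i : i + block, j : j + block] = torch.clamp(
                tile / s, -FP8_MAX, FP8_MAX
            ).to(torch.float8_e4m3fn)
    return q, scales


w = torch.randn(256, 256) * torch.logspace(-2, 0, 256)  # uneven dynamic range
for name, (q, s) in {"per-tensor": quantize_per_tensor(w),
                     "block-wise": quantize_block_wise(w)}.items():
    if s.dim() == 0:
        deq = q.to(torch.float32) * s
    else:
        deq = q.to(torch.float32) * s.repeat_interleave(128, 0).repeat_interleave(128, 1)
    print(f"{name}: {s.numel()} scale(s), mean abs error {(deq - w).abs().mean():.5f}")
```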
Key Takeaways
- Tencent Hunyuan open-sourced HPC-Ops as a production-grade operator library for LLM inference on NVIDIA SM90 GPUs, including H20, with C++ and CUDA kernels built on CuTe and CUTLASS.
- In production deployments, HPC-Ops reports about a 30 percent QPM gain for Tencent-HY models and about a 17 percent QPM gain for DeepSeek models on mainstream inference cards.
- Operator microbenchmarks show maximum speedups of up to 2.22x for bf16 Attention decode, up to 2.0x for fp8 Attention decode, up to 1.49x for fp8 Fused MoE prefill, and up to 1.88x for fp8 GroupGEMM decode compared with strong baselines such as FlashInfer, FlashAttention, TensorRT-LLM, and DeepGEMM.
- The library focuses on three operator families: Attention with paged attention support, quantized GroupGEMM with fp8 weights, and quantized Fused MoE with fp8 expert weights, with both block-wise and per-tensor scaling and native bf16 plus fp8 precision support.
- HPC-Ops is designed as an operator layer that integrates into existing inference frameworks such as vLLM and SGLang, and the roadmap targets sparse attention for long-context LLMs, extended quantization including 4-bit and 8-bit strategies, and kernels that better overlap computation with multi-GPU communication.
Check out the repo for the code and benchmarks.

