
vLLM vs TensorRT-LLM vs HF TGI vs LMDeploy: A Deep Technical Comparison for Production LLM Inference

Production LLM serving is now a systems problem, not a generate() loop. For real workloads, the choice of inference stack drives your tokens per second, tail latency, and ultimately cost per million tokens on a given GPU fleet.

This comparison focuses on four widely used stacks:

  • vLLM
  • NVIDIA TensorRT-LLM
  • Hugging Face Text Generation Inference (TGI v3)
  • LMDeploy

1. vLLM: PagedAttention as the open baseline

Core idea

vLLM is built around PagedAttention, an attention implementation that treats the KV cache like paged virtual memory rather than a single contiguous buffer per sequence.

Instead of allocating one large KV region per request, vLLM:

  • Divides the KV cache into fixed-size blocks
  • Maintains a block table that maps logical tokens to physical blocks
  • Shares blocks between sequences wherever prefixes overlap

This reduces external fragmentation and lets the scheduler pack many more concurrent sequences into the same VRAM.
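To make the block-table idea concrete, here is a minimal Python sketch assuming a 16-token block size; it is illustrative only and does not reflect vLLM's actual internal data structures.

```python
# Minimal sketch of the paged-KV block-table idea (illustrative only; block
# size and APIs are assumptions, not vLLM internals).
BLOCK_SIZE = 16  # tokens stored per physical KV block


class BlockAllocator:
    """Hands out free physical KV blocks from a fixed-size pool."""

    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))

    def allocate(self) -> int:
        return self.free_blocks.pop()


class BlockTable:
    """Maps a sequence's logical token positions to physical block ids."""

    def __init__(self):
        self.physical_blocks: list[int] = []

    def append_token(self, allocator: BlockAllocator, position: int) -> None:
        # A new physical block is needed only when a block boundary is crossed,
        # so waste is bounded by one partially filled block per sequence.
        if position % BLOCK_SIZE == 0:
            self.physical_blocks.append(allocator.allocate())

    def lookup(self, position: int) -> tuple[int, int]:
        # Returns (physical block id, offset within the block) for a token.
        return self.physical_blocks[position // BLOCK_SIZE], position % BLOCK_SIZE


allocator = BlockAllocator(num_blocks=1024)
table = BlockTable()
for pos in range(40):       # a 40-token sequence occupies ceil(40/16) = 3 blocks
    table.append_token(allocator, pos)
print(table.lookup(39))     # (physical block id, offset 7)
```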

Throughput and latency

vLLM improves throughput by 2–4× over systems like FasterTransformer and Orca at comparable latency, with larger gains for longer sequences.

Key properties for operators:

  • Continuous batching (also called inflight batching) merges incoming requests into running GPU batches instead of waiting for fixed batch windows.
  • On typical chat workloads, throughput scales near linearly with concurrency until KV memory or compute saturates.
  • P50 latency stays low at moderate concurrency, but P99 can degrade once queues grow long or KV memory is tight, especially for prefill-heavy queries.

vLLM exposes an OpenAI-compatible HTTP API and integrates well with Ray Serve and other orchestrators, which is why it is widely used as an open baseline.
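As a concrete example, here is a minimal client sketch against a locally running vLLM server; the model id, port, and launch command are placeholder assumptions.

```python
# Minimal sketch: query a local vLLM server through its OpenAI-compatible API.
# Assumes the server was started separately, e.g.:
#   vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused locally

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize PagedAttention in one sentence."}],
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].message.content)
```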

KV and multi-tenancy

  • PagedAttention delivers near-zero KV waste and flexible prefix sharing within and across requests.
  • Each vLLM process serves one model; multi-tenant and multi-model setups are usually built with an external router or API gateway that fans out to multiple vLLM instances.

2. TensorRT-LLM: hardware maximum on NVIDIA GPUs

Core idea

TensorRT-LLM is NVIDIA's optimized inference library for its GPUs. The library provides custom attention kernels, inflight batching, paged KV caching, quantization down to FP4 and INT4, and speculative decoding.

It is tightly coupled to NVIDIA hardware, including FP8 tensor cores on Hopper and Blackwell.

Measured performance

NVIDIA's H100 vs A100 analysis is the most concrete public reference:

  • On H100 with FP8, TensorRT-LLM reaches over 10,000 output tokens/s at peak throughput with 64 concurrent requests, at roughly 100 ms time to first token.
  • H100 FP8 achieves up to 4.6× higher max throughput and 4.4× faster first-token latency than A100 on the same models.

For latency-sensitive modes:

  • TensorRT-LLM on H100 can drive TTFT below 10 ms in batch-1 configurations, at the cost of lower overall throughput.

These numbers are model- and shape-specific, but they give a realistic sense of scale.

Prefill vs decode

TensorRT-LLM optimizes both phases:

  • Prefill benefits from high-throughput FP8 attention kernels and tensor parallelism
  • Decode benefits from CUDA graphs, speculative decoding, quantized weights and KV, and kernel fusion

The result is very high tokens/s across a wide range of input and output lengths, especially when the engine is tuned for that model and batch profile.
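As one way to drive the engine, here is a hedged sketch of TensorRT-LLM's high-level Python LLM API; the exact class names, arguments, and output structure vary between releases, so treat them as assumptions to verify against your installed version.

```python
# Sketch of TensorRT-LLM's high-level LLM API (assumed names; check your release).
# The engine build step compiles kernels for a specific model, precision, and
# batch/sequence profile, which is where most of the tuning happens.
from tensorrt_llm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder model id

params = SamplingParams(max_tokens=128, temperature=0.2)
outputs = llm.generate(["Explain inflight batching in one paragraph."], params)

# Output structure assumed to mirror vLLM-style RequestOutput objects.
print(outputs[0].outputs[0].text)
```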

KV and multi-tenancy

TensorRT-LLM provides:

  • A paged KV cache with a configurable layout
  • Support for long sequences, KV reuse, and offloading
  • Inflight batching and priority-aware scheduling primitives

NVIDIA pairs this with Ray-based or Triton-based orchestration patterns for multi-tenant clusters. Multi-model support is handled at the orchestrator level, not inside a single TensorRT-LLM engine instance.

3. Hugging Face TGI v3: long-prompt specialist and multi-backend gateway

Core idea

Text Generation Inference (TGI) is a Rust- and Python-based serving stack that provides:

  • HTTP and gRPC APIs
  • A continuous batching scheduler
  • Observability and autoscaling hooks
  • Pluggable backends, including vLLM-style engines, TensorRT-LLM, and other runtimes

Version 3 focuses on long-prompt processing through chunking and prefix caching.

Long-prompt benchmark vs vLLM

The TGI v3 docs give a clear benchmark:

  • On long prompts with more than 200,000 tokens, a conversation reply that takes 27.5 s in vLLM can be served in about 2 s in TGI v3.
  • This is reported as a 13× speedup on that workload.
  • TGI v3 can process about 3× more tokens in the same GPU memory by reducing its memory footprint and exploiting chunking and caching.

The mechanism is:

  • TGI keeps the original conversation context in a prefix cache, so subsequent turns only pay for incremental tokens
  • Cache lookup overhead is on the order of microseconds, negligible relative to prefill compute

This is a targeted optimization for workloads where prompts are extremely long and reused across turns, for example RAG pipelines and analytic summarization.
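To make the reuse pattern concrete, here is a sketch of two turns against a TGI v3 endpoint via `huggingface_hub.InferenceClient`; the endpoint URL and document are placeholders, and the prefix cache hit happens entirely server-side.

```python
# Two turns over the same long context against a TGI v3 endpoint.
# The shared prefix is cached server-side, so the second turn avoids
# re-prefilling the long document and only pays for the new tokens.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")        # placeholder TGI endpoint
long_context = open("quarterly_report.txt").read()       # placeholder long document

# Turn 1: full prefill of the long context.
first = client.text_generation(
    f"{long_context}\n\nQuestion: What are the key findings?\nAnswer:",
    max_new_tokens=256,
)

# Turn 2: identical prefix, so TGI v3 reuses the cached KV blocks.
second = client.text_generation(
    f"{long_context}\n\nQuestion: List the main risks mentioned.\nAnswer:",
    max_new_tokens=256,
)
print(first, second, sep="\n---\n")
```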

Architecture and latency behavior

Key components:

  • Chunking: very long prompts are split into manageable segments for KV and scheduling
  • Prefix caching: a data structure that shares long context across turns
  • Continuous batching: incoming requests join batches of already running sequences
  • PagedAttention and fused kernels in the GPU backends

For short chat-style workloads, throughput and latency are in the same ballpark as vLLM. For long, cacheable contexts, both P50 and P99 latency improve by an order of magnitude because the engine avoids repeated prefill.

Multi-backend and multi-model

TGI is designed as a router plus model server architecture. It can:

  • Route requests across many models and replicas
  • Target different backends, for example TensorRT-LLM on H100 plus CPU or smaller GPUs for low-priority traffic

This makes it suitable as a central serving tier in multi-tenant environments.

4. LMDeploy: TurboMind with blocked KV and aggressive quantization

Core idea

LMDeploy, from the InternLM ecosystem, is a toolkit for compressing and serving LLMs, centered on the TurboMind engine. It focuses on:

  • High-throughput request serving
  • Blocked KV cache
  • Persistent batching (continuous batching)
  • Quantization of weights and KV cache

Relative throughput vs vLLM

The project states:

  • “LMDeploy delivers up to 1.8× higher request throughput than vLLM”, supported by persistent batching, blocked KV cache, dynamic split and fuse, tensor parallelism, and optimized CUDA kernels.

KV, quantization and latency

LMDeploy includes:

  • A blocked KV cache, similar to paged KV, that helps pack many sequences into VRAM
  • Support for KV cache quantization, typically int8 or int4, to cut KV memory and bandwidth
  • Weight-only quantization paths such as 4-bit AWQ
  • A benchmarking harness that reports token throughput, request throughput, and first-token latency

This makes LMDeploy attractive when you want to run larger open models like InternLM or Qwen on mid-range GPUs with aggressive compression while still sustaining good tokens/s.
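For illustration, here is a sketch of LMDeploy's `pipeline` API with the TurboMind backend and int8 KV cache quantization; the model id and configuration values are example assumptions, and option names may differ across LMDeploy versions.

```python
# Sketch: serve an open model with LMDeploy's TurboMind backend and a
# quantized KV cache (example values; verify option names for your version).
from lmdeploy import pipeline, TurbomindEngineConfig

engine_config = TurbomindEngineConfig(
    quant_policy=8,             # int8 KV cache to cut KV memory and bandwidth
    cache_max_entry_count=0.8,  # fraction of free VRAM reserved for KV blocks
)

pipe = pipeline("internlm/internlm2_5-7b-chat", backend_config=engine_config)
responses = pipe(["Explain blocked KV cache in two sentences."])
print(responses[0].text)
```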

Multi-model deployments

LMDeploy provides a proxy server that can handle:

  • Multi-model deployments
  • Multi-machine, multi-GPU setups
  • Routing logic to select models based on request metadata

So architecturally it sits closer to TGI than to a single-engine server.

What to use when?

  • If you want maximum throughput and very low TTFT on NVIDIA GPUs
    • TensorRT-LLM is the first choice
    • It uses FP8 and lower precisions, custom kernels, and speculative decoding to push tokens/s and keep TTFT under 100 ms at high concurrency and under 10 ms at low concurrency
  • If you are dominated by long prompts with reuse, such as RAG over large contexts
    • TGI v3 is a strong default
    • Its prefix cache and chunking give up to 3× token capacity and 13× lower latency than vLLM in published long-prompt benchmarks, without extra configuration
  • If you want an open, simple engine with strong baseline performance and an OpenAI-style API
    • vLLM remains the standard baseline
    • PagedAttention and continuous batching make it 2–4× faster than older stacks at comparable latency, and it integrates cleanly with Ray and K8s
  • If you target open models such as InternLM or Qwen and value aggressive quantization with multi-model serving
    • LMDeploy is a good fit
    • Blocked KV cache, persistent batching, and int8 or int4 KV quantization give up to 1.8× higher request throughput than vLLM on supported models, with a router layer included

In practice, many teams mix these systems, for example using TensorRT-LLM for high-volume proprietary chat, TGI v3 for long-context analytics, and vLLM or LMDeploy for experimental and open-model workloads. The key is to align throughput, latency tails, and KV behavior with the actual token distributions in your traffic, then compute cost per million tokens from measured tokens/s on your own hardware.
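The cost arithmetic itself is simple; the GPU price and throughput below are illustrative placeholders, not benchmark results.

```python
# Cost per million output tokens from measured throughput (example numbers only).
def cost_per_million_tokens(gpu_cost_per_hour: float, tokens_per_second: float) -> float:
    tokens_per_hour = tokens_per_second * 3600
    return gpu_cost_per_hour / tokens_per_hour * 1_000_000

# e.g. a GPU billed at $4.00/hour sustaining 10,000 output tokens/s:
print(f"${cost_per_million_tokens(4.00, 10_000):.2f} per million output tokens")  # ~$0.11
```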


References

  1. vLLM / PagedAttention
  2. TensorRT-LLM performance and overview
  3. HF Text Generation Inference (TGI v3) long-prompt behavior
  4. LMDeploy / TurboMind


Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.
