# CPU
vLLM is a Python library that supports the following CPU variants. Select your CPU type to see vendor-specific instructions:
:::::{tab-set}
:sync-group: device

::::{tab-item} Intel/AMD x86
:selected:
:sync: x86

:::{include} cpu/x86.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::

::::

::::{tab-item} ARM AArch64
:sync: arm

:::{include} cpu/arm.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::

::::

::::{tab-item} Apple silicon
:sync: apple

:::{include} cpu/apple.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::

::::

::::{tab-item} IBM Z (S390X)
:sync: s390x

:::{include} cpu/s390x.inc.md
:start-after: "# Installation"
:end-before: "## Requirements"
:::

::::

:::::
## Requirements
- Python: 3.9 -- 3.12
:::::{tab-set}
:sync-group: device

::::{tab-item} Intel/AMD x86
:sync: x86

:::{include} cpu/x86.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::

::::

::::{tab-item} ARM AArch64
:sync: arm

:::{include} cpu/arm.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::

::::

::::{tab-item} Apple silicon
:sync: apple

:::{include} cpu/apple.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::

::::

::::{tab-item} IBM Z (S390X)
:sync: s390x

:::{include} cpu/s390x.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::

::::

:::::
## Set up using Python

### Create a new Python environment

:::{include} python_env_setup.inc.md
:::
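
For reference, a minimal environment created with the standard `venv` module might look like the sketch below. The included snippet above may recommend a different tool (for example `uv` or `conda`); any of them works as long as the Python version is within the supported range.

```console
python3 -m venv .venv            # create an isolated environment (Python 3.9 - 3.12)
source .venv/bin/activate        # activate it in the current shell
pip install --upgrade pip        # make sure pip is recent enough to resolve vLLM's dependencies
```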
### Pre-built wheels
Currently, there are no pre-built CPU wheels.
### Build wheel from source
:::::{tab-set}
:sync-group: device

::::{tab-item} Intel/AMD x86
:sync: x86

:::{include} cpu/x86.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::

::::

::::{tab-item} ARM AArch64
:sync: arm

:::{include} cpu/arm.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::

::::

::::{tab-item} Apple silicon
:sync: apple

:::{include} cpu/apple.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::

::::

::::{tab-item} IBM Z (S390X)
:sync: s390x

:::{include} cpu/s390x.inc.md
:start-after: "### Build wheel from source"
:end-before: "## Set up using Docker"
:::

::::

:::::
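
As a rough, non-authoritative sketch of the typical x86 flow (the tab above holds the up-to-date commands; file names such as `requirements/cpu.txt` and the exact install invocation may differ between vLLM versions):

```console
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -r requirements/cpu.txt --extra-index-url https://download.pytorch.org/whl/cpu  # CPU-only PyTorch and build dependencies
VLLM_TARGET_DEVICE=cpu pip install -e .                                                     # build and install the CPU wheel from source
```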
## Set up using Docker

### Pre-built images
:::::{tab-set}
:sync-group: device

::::{tab-item} Intel/AMD x86
:sync: x86

:::{include} cpu/x86.inc.md
:start-after: "### Pre-built images"
:end-before: "### Build image from source"
:::

::::

:::::
### Build image from source
```console
$ docker build -f docker/Dockerfile.cpu --tag vllm-cpu-env --target vllm-openai .

# Launching OpenAI server
$ docker run --rm \
             --privileged=true \
             --shm-size=4g \
             -p 8000:8000 \
             -e VLLM_CPU_KVCACHE_SPACE=<KV cache space> \
             -e VLLM_CPU_OMP_THREADS_BIND=<CPU cores for inference> \
             vllm-cpu-env \
             --model=meta-llama/Llama-3.2-1B-Instruct \
             --dtype=bfloat16 \
             other vLLM OpenAI server arguments
```
::::{tip}
For ARM or Apple silicon, use `docker/Dockerfile.arm`.
::::
::::{tip}
For IBM Z (s390x), use `docker/Dockerfile.s390x` and pass `--dtype float` to the `docker run` command.
::::
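
For example, a concrete launch with illustrative values (40 GiB of KV cache space and OpenMP threads bound to cores 0-29) might look like the following; size the KV cache and thread binding for your own machine as described under "Related runtime environment variables" below.

```console
$ docker run --rm \
             --privileged=true \
             --shm-size=4g \
             -p 8000:8000 \
             -e VLLM_CPU_KVCACHE_SPACE=40 \
             -e VLLM_CPU_OMP_THREADS_BIND=0-29 \
             vllm-cpu-env \
             --model=meta-llama/Llama-3.2-1B-Instruct \
             --dtype=bfloat16
```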
## Supported features
vLLM CPU backend supports the following vLLM features:
- Tensor Parallel
- Model Quantization (`INT8 W8A8, AWQ, GPTQ`)
- Chunked-prefill
- Prefix-caching
- FP8-E5M2 KV cache
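
Most of these are switched on through the standard engine arguments rather than CPU-specific flags. A hedged sketch, assuming the usual `vllm serve` options apply unchanged on the CPU backend (check `vllm serve --help` for your version):

```console
# Chunked-prefill, prefix-caching, and an FP8-E5M2 KV cache are requested via
# the regular engine arguments; quantization (INT8 W8A8 / AWQ / GPTQ) is picked
# up from the quantized checkpoint itself.
vllm serve meta-llama/Llama-3.2-1B-Instruct \
    --dtype bfloat16 \
    --enable-chunked-prefill \
    --enable-prefix-caching \
    --kv-cache-dtype fp8_e5m2
```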
## Related runtime environment variables
`VLLM_CPU_KVCACHE_SPACE`
: Specifies the KV cache size (e.g., `VLLM_CPU_KVCACHE_SPACE=40` means 40 GiB of space for the KV cache). A larger setting allows vLLM to run more requests in parallel. Set this parameter according to the hardware configuration and the memory management pattern of your workload.

`VLLM_CPU_OMP_THREADS_BIND`
: Specifies the CPU cores dedicated to the OpenMP threads. For example, `VLLM_CPU_OMP_THREADS_BIND=0-31` means 32 OpenMP threads bound to CPU cores 0-31. `VLLM_CPU_OMP_THREADS_BIND=0-31|32-63` means 2 tensor parallel processes: the 32 OpenMP threads of rank 0 are bound to CPU cores 0-31, and the OpenMP threads of rank 1 are bound to CPU cores 32-63.

`VLLM_CPU_MOE_PREPACK`
: Whether to use prepack for the MoE layer. This is passed to `ipex.llm.modules.GatedMLPMOE`. Default is `1` (True). On unsupported CPUs, you might need to set this to `0` (False).
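
As an illustration of how these combine, serving a MoE model on a CPU where the IPEX prepack path is not supported might look like the following sketch; the values and the model are illustrative only.

```console
export VLLM_CPU_KVCACHE_SPACE=40   # 40 GiB reserved for the KV cache (illustrative)
export VLLM_CPU_MOE_PREPACK=0      # disable MoE prepack on unsupported CPUs
vllm serve mistralai/Mixtral-8x7B-Instruct-v0.1 --dtype bfloat16
```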
## Performance tips
- We highly recommend using TCMalloc for high-performance memory allocation and better cache locality. For example, on Ubuntu 22.04 you can run:

  ```console
  sudo apt-get install libtcmalloc-minimal4 # install TCMalloc library
  find / -name *libtcmalloc* # find the dynamic link library path
  export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4:$LD_PRELOAD # prepend the library to LD_PRELOAD
  python examples/offline_inference/basic/basic.py # run vLLM
  ```
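
  If scanning the whole filesystem with `find /` is slow, the dynamic linker cache can usually locate the library as well (assuming it has been registered with `ldconfig`):

  ```console
  ldconfig -p | grep tcmalloc   # list tcmalloc libraries known to the dynamic linker
  ```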
- When using online serving, it is recommended to reserve 1-2 CPU cores for the serving framework to avoid CPU oversubscription. For example, on a platform with 32 physical CPU cores, reserve CPU cores 30 and 31 for the framework and use CPU cores 0-29 for OpenMP:

  ```console
  export VLLM_CPU_KVCACHE_SPACE=40
  export VLLM_CPU_OMP_THREADS_BIND=0-29
  vllm serve facebook/opt-125m
  ```
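
  Once the server is up, you can sanity-check it from another shell with a request to the OpenAI-compatible completions endpoint:

  ```console
  curl http://localhost:8000/v1/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "facebook/opt-125m", "prompt": "Hello, my name is", "max_tokens": 16}'
  ```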
- If using the vLLM CPU backend on a machine with hyper-threading, it is recommended to bind only one OpenMP thread on each physical CPU core using `VLLM_CPU_OMP_THREADS_BIND`. On a hyper-threading enabled platform with 16 logical CPU cores / 8 physical CPU cores:
  ```console
  $ lscpu -e # check the mapping between logical CPU cores and physical CPU cores

  # The "CPU" column means the logical CPU core IDs, and the "CORE" column means the physical core IDs. On this platform, two logical cores are sharing one physical core.
  CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ   MINMHZ      MHZ
  0    0      0    0 0:0:0:0          yes 2401.0000 800.0000  800.000
  1    0      0    1 1:1:1:0          yes 2401.0000 800.0000  800.000
  2    0      0    2 2:2:2:0          yes 2401.0000 800.0000  800.000
  3    0      0    3 3:3:3:0          yes 2401.0000 800.0000  800.000
  4    0      0    4 4:4:4:0          yes 2401.0000 800.0000  800.000
  5    0      0    5 5:5:5:0          yes 2401.0000 800.0000  800.000
  6    0      0    6 6:6:6:0          yes 2401.0000 800.0000  800.000
  7    0      0    7 7:7:7:0          yes 2401.0000 800.0000  800.000
  8    0      0    0 0:0:0:0          yes 2401.0000 800.0000  800.000
  9    0      0    1 1:1:1:0          yes 2401.0000 800.0000  800.000
  10   0      0    2 2:2:2:0          yes 2401.0000 800.0000  800.000
  11   0      0    3 3:3:3:0          yes 2401.0000 800.0000  800.000
  12   0      0    4 4:4:4:0          yes 2401.0000 800.0000  800.000
  13   0      0    5 5:5:5:0          yes 2401.0000 800.0000  800.000
  14   0      0    6 6:6:6:0          yes 2401.0000 800.0000  800.000
  15   0      0    7 7:7:7:0          yes 2401.0000 800.0000  800.000

  # On this platform, it is recommended to only bind OpenMP threads on logical CPU cores 0-7 or 8-15
  $ export VLLM_CPU_OMP_THREADS_BIND=0-7
  $ python examples/offline_inference/basic/basic.py
  ```
- If using the vLLM CPU backend on a multi-socket machine with NUMA, make sure to set the CPU cores with `VLLM_CPU_OMP_THREADS_BIND` to avoid cross-NUMA-node memory access.
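
  A quick way to see which CPU IDs belong to which NUMA node is `lscpu` (or `numactl --hardware`); binding one `|`-separated thread group per node then keeps each tensor parallel rank on local memory. For example, assuming two nodes covering cores 0-31 and 32-63:

  ```console
  lscpu | grep -i "numa node"                     # show the CPU ranges of each NUMA node
  export VLLM_CPU_OMP_THREADS_BIND="0-31|32-63"   # one thread group per NUMA node / TP rank
  ```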
## Other considerations
- The CPU backend significantly differs from the GPU backend, since the vLLM architecture was originally optimized for GPU use. A number of optimizations are needed to enhance its performance.

- Decouple the HTTP serving components from the inference components. In a GPU backend configuration, the HTTP serving and tokenization tasks operate on the CPU while inference runs on the GPU, which typically does not pose a problem. However, in a CPU-based setup, the HTTP serving and tokenization can cause significant context switching and reduced cache efficiency. Therefore, it is strongly recommended to segregate these two components for improved performance.

- On CPU-based setups with NUMA enabled, memory access performance may be largely impacted by the topology. For NUMA architectures, Tensor Parallel is an option for better performance.

- Tensor Parallel is supported for serving and offline inferencing. In general, each NUMA node is treated as one GPU card. Below is an example script to enable Tensor Parallel = 2 for serving:

  ```console
  VLLM_CPU_KVCACHE_SPACE=40 VLLM_CPU_OMP_THREADS_BIND="0-31|32-63" vllm serve meta-llama/Llama-2-7b-chat-hf -tp=2 --distributed-executor-backend mp
  ```

- For each thread id list in `VLLM_CPU_OMP_THREADS_BIND`, users should guarantee that the threads in the list belong to the same NUMA node.

- Meanwhile, users should also take care of the memory capacity of each NUMA node. The memory usage of each TP rank is the sum of the weight shard size and `VLLM_CPU_KVCACHE_SPACE`; if it exceeds the capacity of a single NUMA node, the TP worker will be killed due to out-of-memory (see the rough estimate below).
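
As a rough illustration of that budget: a 7B-parameter model in bfloat16 needs roughly 13-14 GiB of weights, so with `-tp=2` each rank holds about 7 GiB of weight shard; together with `VLLM_CPU_KVCACHE_SPACE=40` that is roughly 47 GiB per TP rank, which each NUMA node must be able to accommodate.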