# Installation

vLLM initially supports basic model inference and serving on the Intel GPU platform.

:::{attention}
There are no pre-built wheels or images for this device, so you must build vLLM from source.
:::

## Requirements

- Supported Hardware: Intel Data Center GPU, Intel ARC GPU
- OneAPI requirements: oneAPI 2025.0
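Before building, you can optionally confirm that the GPU driver and oneAPI toolkit are visible by listing the available SYCL devices. This is only a sanity check; the `setvars.sh` path below assumes a default oneAPI installation under `/opt/intel/oneapi` and may differ on your system.

```console
# Load the oneAPI environment (path may vary with your installation)
source /opt/intel/oneapi/setvars.sh
# List SYCL devices; your Intel GPU should appear in the output
sycl-ls
```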
## Set up using Python
### Pre-built wheels

Currently, there are no pre-built XPU wheels.

### Build wheel from source

- First, install the required driver and Intel oneAPI 2025.0 or later.
- Second, install the Python packages required to build the vLLM XPU backend:
```console
pip install --upgrade pip
pip install -v -r requirements/xpu.txt
```
- Then, build and install the vLLM XPU backend:
```console
VLLM_TARGET_DEVICE=xpu python setup.py install
```
- Finally, due to a known oneAPI-related dependency conflict between torch-xpu 2.6 and ipex-xpu 2.6, install IPEX as the last step. This will be fixed in ipex-xpu 2.7.
```console
pip install intel-extension-for-pytorch==2.6.10+xpu \
  --extra-index-url=https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```
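After these steps you can optionally check that PyTorch sees the XPU device. This is a minimal sanity check assuming the torch 2.6 XPU build installed above; it is not required for the installation itself.

```console
# Should print True if the XPU backend and driver are working
python -c "import torch; print(torch.xpu.is_available())"
```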
:::{note}
- FP16 is the default data type in the current XPU backend. The BF16 data type is supported on Intel Data Center GPU but is not yet supported on Intel Arc GPU.
:::
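As a quick offline check that explicitly requests FP16, something like the following can be used; the model name is only a small placeholder and not part of the official instructions.

```console
# Minimal sketch: construct an LLM on XPU with FP16 and run one prompt
python -c "from vllm import LLM; llm = LLM(model='facebook/opt-125m', device='xpu', dtype='float16'); print(llm.generate('Hello, my name is'))"
```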
## Set up using Docker

### Pre-built images

Currently, there are no pre-built XPU images.

### Build image from source
```console
$ docker build -f docker/Dockerfile.xpu -t vllm-xpu-env --shm-size=4g .
$ docker run -it \
             --rm \
             --network=host \
             --device /dev/dri \
             -v /dev/dri/by-path:/dev/dri/by-path \
             vllm-xpu-env
```
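Once inside the container, the server can be launched in the same way as in the bare-metal setup described above; the model below is only an illustrative choice.

```console
python -m vllm.entrypoints.openai.api_server \
    --model=facebook/opt-13b \
    --dtype=float16 \
    --device=xpu
```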
## Supported features

The XPU platform supports **tensor parallel** inference/serving and also supports **pipeline parallel** as a beta feature for online serving. Ray is required as the distributed runtime backend. A reference execution looks like the following:
```console
python -m vllm.entrypoints.openai.api_server \
     --model=facebook/opt-13b \
     --dtype=bfloat16 \
     --device=xpu \
     --max_model_len=1024 \
     --distributed-executor-backend=ray \
     --pipeline-parallel-size=2 \
     -tp=8
```
By default, a Ray instance will be launched automatically if no existing one is detected in the system, with `num-gpus` equal to `parallel_config.world_size`. We recommend properly starting a Ray cluster before execution, referring to the <gh-file:examples/online_serving/run_cluster.sh> helper script.
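A minimal sketch of starting such a cluster manually with the Ray CLI is shown below; the head-node address is a placeholder, and the helper script referenced above remains the recommended approach.

```console
# On the head node
ray start --head --port=6379
# On each worker node, pointing at the head node's address
ray start --address=<head-node-ip>:6379
```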
There are some new features coming with ipex-xpu 2.6, e.g. **chunked prefill**, **V1 engine support**, **LoRA**, **MoE**, etc.
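As one example, chunked prefill can typically be enabled through the standard engine flag; whether it is available on XPU depends on your vLLM and IPEX versions, so treat this as a sketch rather than a guaranteed configuration.

```console
# Availability of chunked prefill on XPU may depend on the vLLM/IPEX version
python -m vllm.entrypoints.openai.api_server \
    --model=facebook/opt-13b \
    --device=xpu \
    --enable-chunked-prefill
```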