Installation

vLLM supports AMD GPUs with ROCm 6.3.

:::{attention} There are no pre-built wheels for this device, so you must either use the pre-built Docker image or build vLLM from source. :::

Requirements

  • GPU: MI200s (gfx90a), MI300 (gfx942), Radeon RX 7900 series (gfx1100)
  • ROCm 6.3

Set up using Python

Pre-built wheels

Currently, there are no pre-built ROCm wheels.

Build wheel from source

  1. Install prerequisites (skip if you are already in an environment/docker with the following installed):
  • ROCm

  • PyTorch

    For installing PyTorch, you can start from a fresh docker image, e.g., rocm/pytorch:rocm6.3_ubuntu24.04_py3.12_pytorch_release_2.4.0 or rocm/pytorch-nightly. If you are using such a docker image, you can skip to Step 4 (Build vLLM).

    Alternatively, you can install PyTorch using PyTorch wheels. You can check the PyTorch installation guide in PyTorch Getting Started. Example:

    # Install PyTorch
    $ pip uninstall torch -y
    $ pip install --no-cache-dir --pre torch --index-url https://download.pytorch.org/whl/rocm6.3
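
    To confirm that the ROCm build of PyTorch is in place (a quick sanity check, assuming a GPU is visible in the environment), you can check that torch reports a HIP version:

    # Should print a HIP/ROCm version and "True"
    $ python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"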
    
  2. Install Triton flash attention for ROCm

    Install ROCm's Triton flash attention (the default triton-mlir branch) following the instructions from ROCm/triton.

    python3 -m pip install ninja cmake wheel pybind11
    pip uninstall -y triton
    git clone https://github.com/OpenAI/triton.git
    cd triton
    git checkout e5be006
    cd python
    pip3 install .
    cd ../..
    

    :::{note} If you see an HTTP error related to downloading packages while building Triton, please try again, as the error is intermittent. :::
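
    To confirm that the Triton build is importable (a minimal check, assuming the install above completed without errors):

    $ python3 -c "import triton; print(triton.__version__)"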

  3. Optionally, if you choose to use CK flash attention, you can install flash attention for ROCm

    Install ROCm's flash attention (v2.7.2) following the instructions from ROCm/flash-attention. Alternatively, wheels intended for vLLM use can be accessed under the releases.

    For example, for ROCm 6.3, suppose your gfx arch is gfx90a. To get your gfx architecture, run rocminfo | grep gfx.

    git clone https://github.com/ROCm/flash-attention.git
    cd flash-attention
    git checkout b7d29fb
    git submodule update --init
    GPU_ARCHS="gfx90a" python3 setup.py install
    cd ..
    

    :::{note} You might need to downgrade the "ninja" version to 1.10 as it is not used when compiling flash-attention-2 (e.g. pip install ninja==1.10.2.4) :::
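
    To confirm the CK flash attention build (a minimal check; it assumes ROCm's fork keeps the upstream flash_attn module name):

    $ python3 -c "import flash_attn; print(flash_attn.__version__)"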

  4. Build vLLM. For example, vLLM on ROCm 6.3 can be built with the following steps:

    $ pip install --upgrade pip
    
    # Build & install AMD SMI
    $ pip install /opt/rocm/share/amd_smi
    
    # Install dependencies
    $ pip install --upgrade numba scipy huggingface-hub[cli,hf_transfer] setuptools_scm
    $ pip install "numpy<2"
    $ pip install -r requirements-rocm.txt
    
    # Build vLLM for MI210/MI250/MI300.
    $ export PYTORCH_ROCM_ARCH="gfx90a;gfx942"
    $ python3 setup.py develop
    

    This may take 5-10 minutes. Currently, pip install . does not work for ROCm installation.

    :::{tip}

    • Triton flash attention is used by default. For benchmarking purposes, it is recommended to run a warm-up step before collecting perf numbers.
    • Triton flash attention does not currently support sliding window attention. If using half precision, please use CK flash-attention for sliding window support.
    • To use CK flash-attention or PyTorch naive attention, set export VLLM_USE_TRITON_FLASH_ATTN=0 to turn off Triton flash attention.
    • The ROCm version of PyTorch, ideally, should match the ROCm driver version. :::
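
    For example, to exercise the build with CK flash-attention instead of the default Triton path (a sketch; the model path is a placeholder and the OpenAI-compatible server is just one way to smoke-test the install):

    $ export VLLM_USE_TRITON_FLASH_ATTN=0
    $ vllm serve <path/to/model>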


Pre-built images

The AMD Infinity hub for vLLM offers a prebuilt, optimized docker image designed for validating inference performance on the AMD Instinct™ MI300X accelerator.

:::{tip} Please check LLM inference performance validation on AMD Instinct MI300X for instructions on how to use this prebuilt docker image. :::
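
If you just want to pull the prebuilt image rather than building one, it is published on Docker Hub (the repository name below is an assumption; confirm the current repository and tag on the Infinity Hub page):

docker pull rocm/vllm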

Build image from source

Building the Docker image from source is the recommended way to use vLLM with ROCm.

(Optional) Build an image with ROCm software stack

Build a docker image from gh-file:Dockerfile.rocm_base, which sets up the ROCm software stack needed by vLLM. This step is optional, as this rocm_base image is usually prebuilt and stored on Docker Hub under the tag rocm/vllm-dev:base to speed up the user experience. If you choose to build this rocm_base image yourself, the steps are as follows.

It is important that the user kicks off the docker build using buildkit. Either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or set up buildkit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:

{
    "features": {
        "buildkit": true
    }
}
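
After editing /etc/docker/daemon.json, restart the daemon so the setting takes effect (the command below assumes a systemd-based host; adjust for your init system):

sudo systemctl restart docker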

To build vllm on ROCm 6.3 for MI200 and MI300 series, you can use the default:

DOCKER_BUILDKIT=1 docker build -f Dockerfile.rocm_base -t rocm/vllm-dev:base .

Build an image with vLLM

First, build a docker image from gh-file:Dockerfile.rocm and launch a docker container from the image. It is important that the user kicks off the docker build using buildkit. Either set DOCKER_BUILDKIT=1 as an environment variable when calling the docker build command, or set up buildkit in the docker daemon configuration /etc/docker/daemon.json as follows and restart the daemon:

{
    "features": {
        "buildkit": true
    }
}

gh-file:Dockerfile.rocm uses ROCm 6.3 by default, but also supports ROCm 5.7, 6.0, 6.1, and 6.2 in older vLLM branches. It provides flexibility to customize the build of the docker image using the following arguments:

  • BASE_IMAGE: specifies the base image used when running docker build. The default value rocm/vllm-dev:base is an image published and maintained by AMD. It is being built using gh-file:Dockerfile.rocm_base
  • USE_CYTHON: An option to run cython compilation on a subset of python files upon docker build
  • BUILD_RPD: Include RocmProfileData profiling tool in the image
  • ARG_PYTORCH_ROCM_ARCH: Allows overriding the gfx architecture values from the base docker image

Their values can be passed in when running docker build with --build-arg options.
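
For example, to override the target gfx architectures while keeping the default base image (a sketch using the arguments listed above):

DOCKER_BUILDKIT=1 docker build \
    --build-arg BASE_IMAGE="rocm/vllm-dev:base" \
    --build-arg ARG_PYTORCH_ROCM_ARCH="gfx90a;gfx942" \
    -f Dockerfile.rocm -t vllm-rocm .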

To build vllm on ROCm 6.3 for MI200 and MI300 series, you can use the default:

DOCKER_BUILDKIT=1 docker build -f Dockerfile.rocm -t vllm-rocm .

To build vllm on ROCm 6.3 for the Radeon RX 7900 series (gfx1100), you should pick the alternative base image:

DOCKER_BUILDKIT=1 docker build --build-arg BASE_IMAGE="rocm/vllm-dev:navi_base" -f Dockerfile.rocm -t vllm-rocm .

To run the above docker image vllm-rocm, use the below command:

docker run -it \
   --network=host \
   --group-add=video \
   --ipc=host \
   --cap-add=SYS_PTRACE \
   --security-opt seccomp=unconfined \
   --device /dev/kfd \
   --device /dev/dri \
   -v <path/to/model>:/app/model \
   vllm-rocm \
   bash

Where <path/to/model> is the location where the model is stored, for example, the weights for Llama 2 or Llama 3 models.
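
Inside the container, you can then point vLLM at the mounted model directory, for example by starting the OpenAI-compatible server (a sketch; /app/model is the mount point from the command above):

vllm serve /app/model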

Supported features

See the project:#feature-x-hardware compatibility matrix for feature support information.