.. _installation:

Installation
============

vLLM is a Python library that also contains pre-compiled C++ and CUDA (11.8) binaries.

Requirements
------------

* OS: Linux
* Python: 3.8 -- 3.11
* GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, etc.); see the quick check below
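
If you are unsure of your GPU's compute capability, one quick way to check it (a sketch that assumes PyTorch is already installed) is:

.. code-block:: python

    import torch

    # Prints a (major, minor) tuple, e.g. (7, 0) for V100.
    print(torch.cuda.get_device_capability())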

Install with pip
----------------

You can install vLLM using pip:

.. code-block:: console

    $ # (Optional) Create a new conda environment.
    $ conda create -n myenv python=3.8 -y
    $ conda activate myenv

    $ # Install vLLM.
    $ pip install vllm
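
Once installed, a minimal generation script is a quick way to verify the setup (``facebook/opt-125m`` is used here only as a small example model):

.. code-block:: python

    from vllm import LLM, SamplingParams

    # Download and load a small model, then generate one completion.
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(["Hello, my name is"],
                           SamplingParams(temperature=0.8, max_tokens=32))
    print(outputs[0].outputs[0].text)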

.. _build_from_source:

Build from source
-----------------

You can also build and install vLLM from source:

.. code-block:: console

    $ git clone https://github.com/vllm-project/vllm.git
    $ cd vllm
    $ pip install -e .  # This may take 5-10 minutes.
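
After the build finishes, a plain import is a useful smoke test (the version string printed depends on the commit you built):

.. code-block:: python

    import vllm

    # Should print the version of the freshly built package.
    print(vllm.__version__)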

.. tip::

    If you have trouble building vLLM, we recommend using the NVIDIA PyTorch Docker image.

    .. code-block:: console

        $ # Pull the Docker image with CUDA 11.8.
        $ docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/pytorch:22.12-py3