Installation
============

vLLM is a Python library that includes C++ and CUDA code.
To run vLLM, your system must meet the following requirements:

* OS: Linux
* Python: 3.8 or higher
* CUDA: 11.0 -- 11.8
* GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX 20xx, A100)
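
To verify that your machine meets these requirements, you can query the toolkit, driver, and GPUs from the command line (a quick sanity check, assuming the CUDA toolkit, the NVIDIA driver, and PyTorch are already installed):

.. code-block:: console

    $ # Check the installed CUDA toolkit version.
    $ nvcc --version

    $ # Check the NVIDIA driver and the attached GPUs.
    $ nvidia-smi

    $ # Check the compute capability of GPU 0 via PyTorch.
    $ python -c "import torch; print(torch.cuda.get_device_capability())"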

.. note::

   As of now, vLLM does not support CUDA 12.
   If you are using Hopper or Ada Lovelace GPUs, please use CUDA 11.8.

.. tip::

   If you have trouble installing vLLM, we recommend using the NVIDIA PyTorch Docker image.

   .. code-block:: console

      $ # Pull the Docker image with CUDA 11.8.
      $ docker run --gpus all -it --rm --shm-size=8g nvcr.io/nvidia/pytorch:22.12-py3

   Inside the Docker container, please execute :code:`pip uninstall torch` before installing vLLM, so that the image's bundled PyTorch does not conflict with the version vLLM installs.
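
   Concretely, the flow inside the container might look like this (a sketch; the installation itself is covered in the next section):

   .. code-block:: console

      $ # Remove the image's bundled PyTorch first.
      $ pip uninstall -y torch

      $ # Then install vLLM.
      $ pip install vllm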

Install with pip
----------------

You can install vLLM using pip:

.. code-block:: console

    $ # (Optional) Create a new conda environment.
    $ conda create -n myenv python=3.8 -y
    $ conda activate myenv

    $ # Install vLLM.
    $ pip install vllm  # This may take 5-10 minutes.
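
After the installation finishes, a quick import check can confirm the package is usable (a minimal check, assuming the installed package exposes a :code:`__version__` attribute):

.. code-block:: console

    $ python -c "import vllm; print(vllm.__version__)"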

.. _build_from_source:

Build from source
-----------------

You can also build and install vLLM from source:

.. code-block:: console

    $ git clone https://github.com/WoosukKwon/vllm.git
    $ cd vllm
    $ pip install -e .  # This may take 5-10 minutes.
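
Because :code:`pip install -e .` installs vLLM in editable mode, changes to the Python sources take effect without reinstalling; changes to the C++/CUDA sources, however, require recompiling the extensions (a general property of editable installs with compiled extensions, not vLLM-specific behavior):

.. code-block:: console

    $ # Re-run the build after modifying C++/CUDA code.
    $ pip install -e .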