# vLLM
## Build from source
```bash
pip install -r requirements.txt
pip install -e . # This may take several minutes.
```
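To confirm that the editable install worked, you can try importing the package. This is a minimal sanity check; it only verifies that `vllm` is importable, not that the CUDA kernels run on your GPU:
```bash
python -c "import vllm; print(vllm.__file__)"
```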
## Test simple server
```bash
# Single-GPU inference.
python examples/simple_server.py # --model <your_model>
# Multi-GPU inference (e.g., 2 GPUs).
ray start --head
python examples/simple_server.py -tp 2 # --model <your_model>
```
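For the multi-GPU case, the server relies on the local Ray cluster started by `ray start --head`. To verify the cluster (and its GPU resources) before launching, or to shut it down afterwards, the standard Ray CLI commands can be used:
```bash
# Verify that the local Ray cluster is up and lists your GPUs.
ray status
# Shut the cluster down when you are done.
ray stop
```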
The detailed arguments for `simple_server.py` can be listed with:
```bash
python examples/simple_server.py --help
```
## FastAPI server
To start the server:
```bash
ray start --head
python -m vllm.entrypoints.fastapi_server # --model <your_model>
```
To test the server:
```bash
python test_cli_client.py
```
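If you prefer to query the server directly instead of using the client script, a raw HTTP request along the following lines may work. The port, route, and JSON fields below are assumptions for illustration; check `test_cli_client.py` for the exact values the server expects:
```bash
# Hypothetical request; verify the port, route, and fields against test_cli_client.py.
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Hello, my name is", "max_tokens": 64}'
```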
## Gradio web server
Install the following additional dependency:
```bash
pip install gradio
```
Start the server:
```bash
python -m vllm.http_frontend.fastapi_frontend
# In another terminal
python -m vllm.http_frontend.gradio_webserver
```
## Load LLaMA weights
Since the LLaMA weights are not fully public, they cannot be downloaded directly from Hugging Face. Therefore, follow the steps below to load the LLaMA weights.
1. Convert the LLaMA weights to Hugging Face format with [this script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py).
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path/llama-7b
```
2. For all the commands above, specify the model with `--model /output/path/llama-7b` to load the model. For example:
```bash
python examples/simple_server.py --model /output/path/llama-7b
python -m vllm.http_frontend.fastapi_frontend --model /output/path/llama-7b
```
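Before serving, you can sanity-check the converted weights directory by listing it. The exact file names depend on your `transformers` version, but you should typically see a model config, tokenizer files, and the model weight shards:
```bash
ls /output/path/llama-7b
```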