(run-on-llamaindex)=

# Serving with llama_index

vLLM is also available via [llama_index](https://github.com/run-llama/llama_index).

To install the llama_index vLLM integration, run:

```console
$ pip install llama-index-llms-vllm -q
```

To run inference on a single GPU or on multiple GPUs, use the `Vllm` class from llama_index.

```python
from llama_index.llms.vllm import Vllm

# Shard the model across 4 GPUs with tensor parallelism; additional
# engine arguments are forwarded to vLLM through vllm_kwargs.
llm = Vllm(
    model="microsoft/Orca-2-7b",
    tensor_parallel_size=4,
    max_new_tokens=100,
    vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
)
```
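
Once constructed, the `llm` object exposes llama_index's standard LLM interface. A minimal usage sketch (the prompt below is illustrative):

```python
# complete() runs a single-prompt generation and returns a
# CompletionResponse; its .text field holds the generated string.
response = llm.complete("What is the capital of France?")
print(response.text)
```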

Please refer to this [Tutorial](https://docs.llamaindex.ai/en/latest/examples/llm/vllm/) for more details.