
(serving-llamaindex)=

# LlamaIndex

vLLM is also available via LlamaIndex.

To install LlamaIndex, run:

```console
pip install llama-index-llms-vllm -q
```

To run inference on one or more GPUs, use the `Vllm` class from `llama_index`:

```python
from llama_index.llms.vllm import Vllm

llm = Vllm(
    model="microsoft/Orca-2-7b",
    tensor_parallel_size=4,
    max_new_tokens=100,
    vllm_kwargs={"swap_space": 1, "gpu_memory_utilization": 0.5},
)
```
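Once constructed, the `Vllm` instance works like any other LlamaIndex LLM: calling `complete()` runs a single generation. A minimal sketch is below; the prompt and the smaller single-GPU settings are illustrative assumptions, not recommended values.

```python
from llama_index.llms.vllm import Vllm

# Illustrative single-GPU settings with a short generation budget.
llm = Vllm(
    model="microsoft/Orca-2-7b",
    tensor_parallel_size=1,
    max_new_tokens=64,
)

# complete() returns a CompletionResponse; its .text attribute
# holds the generated string.
response = llm.complete("What is the capital of France?")
print(response.text)
```

Note that constructing `Vllm` loads the model into GPU memory, so this requires a machine with enough VRAM for the chosen model.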

Please refer to this tutorial for more details.