(run-on-llamastack)=

# Serving with Llama Stack

vLLM is also available via [Llama Stack](https://github.com/meta-llama/llama-stack).

To install Llama Stack, run

```console
$ pip install llama-stack -q
```

## Inference using OpenAI Compatible API

Then start the Llama Stack server, pointing it to your vLLM server with the following configuration:

```yaml
inference:
  - provider_id: vllm0
    provider_type: remote::vllm
    config:
      url: http://127.0.0.1:8000
```
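
This configuration assumes a vLLM OpenAI-compatible server is already listening at `http://127.0.0.1:8000`. A minimal sketch of starting one (the model name here is only an example; use whichever model you intend to serve):

```console
# Start vLLM's OpenAI-compatible server on the port referenced above
$ vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```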

Please refer to this guide for more details on this remote vLLM provider.
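
As a quick sanity check that the URL in the configuration is reachable, you can query vLLM's OpenAI-compatible API directly before wiring it into Llama Stack (a hedged example; it simply lists the models the server exposes):

```console
# Should return the model(s) served by the vLLM instance
$ curl http://127.0.0.1:8000/v1/models
```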

## Inference via Embedded vLLM

An inline vLLM provider is also available. Here is a sample configuration using that method:

```yaml
inference:
  - provider_type: vllm
    config:
      model: Llama3.1-8B-Instruct
      tensor_parallel_size: 4
```
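
Here `tensor_parallel_size: 4` shards the model across four GPUs, mirroring vLLM's standalone `--tensor-parallel-size` option. For comparison, a hedged standalone equivalent (the Hugging Face model id is an assumption; the inline provider uses Llama Stack's own model naming):

```console
# Roughly what the embedded engine does: serve the model with 4-way tensor parallelism
$ vllm serve meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 4
```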