diff --git a/docs/source/deployment/frameworks/open-webui.md b/docs/source/deployment/frameworks/open-webui.md
index 08ad90ba..83e5303a 100644
--- a/docs/source/deployment/frameworks/open-webui.md
+++ b/docs/source/deployment/frameworks/open-webui.md
@@ -2,7 +2,7 @@
 # Open WebUI
 
-1. Install the (Docker)[https://docs.docker.com/engine/install/]
+1. Install [Docker](https://docs.docker.com/engine/install/)
 
 2. Start the vLLM server with the supported chat completion model, e.g.
 
@@ -10,7 +10,7 @@
 vllm serve qwen/Qwen1.5-0.5B-Chat
 ```
 
-1. Start the (Open WebUI)[https://github.com/open-webui/open-webui] docker container (replace the vllm serve host and vllm serve port):
+1. Start the [Open WebUI](https://github.com/open-webui/open-webui) Docker container (replace the vllm serve host and vllm serve port):
 
 ```console
 docker run -d -p 3000:8080 \