From 3cd91dc9555e6f10e55f23d37782c65b0366f7cf Mon Sep 17 00:00:00 2001
From: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Date: Thu, 17 Apr 2025 02:05:59 +0100
Subject: [PATCH] Help user create custom model for Transformers backend
 remote code models (#16719)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
---
 docs/source/models/supported_models.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/source/models/supported_models.md b/docs/source/models/supported_models.md
index ddb77f37..0b193ca0 100644
--- a/docs/source/models/supported_models.md
+++ b/docs/source/models/supported_models.md
@@ -55,6 +55,10 @@ If your model is neither supported natively by vLLM or Transformers, you can sti
 Simply set `trust_remote_code=True` and vLLM will run any model on the Model Hub that is compatible with Transformers. Provided that the model writer implements their model in a compatible way, this means that you can run new models before they are officially supported in Transformers or vLLM!
 
+:::{tip}
+If you have not yet created your custom model, you can follow this guide on [customising models in Transformers](https://huggingface.co/docs/transformers/en/custom_models).
+:::
+
 ```python
 from vllm import LLM
 llm = LLM(model=..., task="generate", trust_remote_code=True)  # Name or path of your model
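
For context on the guide this patch links to: a minimal sketch of the custom-model mechanics it describes, assuming the `register_for_auto_class` API from Transformers. The names `MyConfig`, `MyModel`, and the repo id are hypothetical, and a real model would need complete modelling code that the Transformers backend can run; this only illustrates how a remote-code model gets wired up for `trust_remote_code=True`.

```python
# Minimal sketch of a custom Transformers model, per the linked guide.
# `MyConfig` and `MyModel` are hypothetical names for illustration only.
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel


class MyConfig(PretrainedConfig):
    model_type = "my_model"

    def __init__(self, hidden_size: int = 64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)


class MyModel(PreTrainedModel):
    config_class = MyConfig

    def __init__(self, config: MyConfig):
        super().__init__(config)
        # A real model would implement its full architecture here.
        self.linear = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, hidden_states):
        return self.linear(hidden_states)


# Registering for auto classes makes save_pretrained()/push_to_hub() copy the
# modelling code into the repo and record an `auto_map` entry in config.json,
# which is what loading with `trust_remote_code=True` resolves against.
MyConfig.register_for_auto_class()
MyModel.register_for_auto_class("AutoModel")
MyModel(MyConfig()).push_to_hub("my-username/my-model")  # hypothetical repo id
```

Once such a repo exists on the Model Hub, it can be passed as the `model` argument in the `LLM(...)` call shown in the patched docs.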