diff --git a/docs/source/models/supported_models.md b/docs/source/models/supported_models.md
index ddb77f37..0b193ca0 100644
--- a/docs/source/models/supported_models.md
+++ b/docs/source/models/supported_models.md
@@ -55,6 +55,10 @@ If your model is neither supported natively by vLLM or Transformers, you can sti
 Simply set `trust_remote_code=True` and vLLM will run any model on the Model Hub that is compatible with Transformers.
 Provided that the model writer implements their model in a compatible way, this means that you can run new models before they are officially supported in Transformers or vLLM!
 
+:::{tip}
+If you have not yet created your custom model, you can follow this guide on [customising models in Transformers](https://huggingface.co/docs/transformers/en/custom_models).
+:::
+
 ```python
 from vllm import LLM
 llm = LLM(model=..., task="generate", trust_remote_code=True)  # Name or path of your model