vllm/docs/source
Latest commit: 789937af2e by Thomas Parnell, 2024-08-05 23:29:43 +00:00
[Doc] [SpecDecode] Update MLPSpeculator documentation (#7100)
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Name | Last commit | Date
_static | [Docs] Add RunLLM chat widget (#6857) | 2024-07-27 09:24:46 -07:00
_templates/sections | [Doc] Guide for adding multi-modal plugins (#6205) | 2024-07-10 14:55:34 +08:00
assets | [Doc] add visualization for multi-stage dockerfile (#4456) | 2024-04-30 17:41:59 +00:00
automatic_prefix_caching | [Doc] Add an automatic prefix caching section in vllm documentation (#5324) | 2024-06-11 10:24:59 -07:00
community | [Docs] Publish 5th meetup slides (#6799) | 2024-07-25 16:47:55 -07:00
dev | [Bugfix] Fix broadcasting logic for multi_modal_kwargs (#6836) | 2024-07-31 10:38:45 +08:00
getting_started | bump version to v0.5.4 (#7139) | 2024-08-05 14:39:48 -07:00
models | [Doc] [SpecDecode] Update MLPSpeculator documentation (#7100) | 2024-08-05 23:29:43 +00:00
performance_benchmark | [Doc] Add documentations for nightly benchmarks (#6412) | 2024-07-25 11:57:16 -07:00
quantization | [bitsandbytes]: support read bnb pre-quantized model (#5753) | 2024-07-23 23:45:09 +00:00
serving | [Models] Support Qwen model with PP (#6974) | 2024-08-01 12:40:43 -07:00
conf.py | Support for guided decoding for offline LLM (#6878) | 2024-08-04 03:12:09 +00:00
generate_examples.py | Add example scripts to documentation (#4225) | 2024-04-22 16:36:54 +00:00
index.rst | [Doc] Add documentations for nightly benchmarks (#6412) | 2024-07-25 11:57:16 -07:00