fuchen.ljl
ee37328da0
Unable to find Punica extension issue during source code installation (#4494)
Co-authored-by: Simon Mo <simon.mo@hey.com>
2024-05-01 00:42:09 +00:00
youkaichao
f3d0bf7589
[Doc][Installation] delete python setup.py develop (#3989)
2024-04-11 03:33:02 +00:00
youkaichao
9c82a1bec3
[Doc] Update installation doc for build from source and explain the dependency on torch/cuda version (#3746)
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
2024-03-30 16:34:38 -07:00
youkaichao
42bc386129
[CI/Build] respect the common environment variable MAX_JOBS (#3600)
2024-03-24 17:04:00 -07:00
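The MAX_JOBS change above (#3600) lets the source build respect the common environment variable for limiting parallel compile jobs. A minimal sketch, assuming the build picks up MAX_JOBS from the environment; the value 4 is illustrative:

```shell
# Cap parallel compilation jobs to avoid exhausting RAM on smaller machines.
# Assumption: the vLLM source build reads the common MAX_JOBS variable (#3600).
export MAX_JOBS=4
echo "MAX_JOBS=$MAX_JOBS"
# A source build would then be run in this environment, e.g. a pip install
# from the repository checkout.
```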
Philipp Moritz
931746bc6d
Add documentation on how to do incremental builds (#2796)
2024-02-07 14:42:02 -08:00
Shivam Thakkar
1db83e31a2
[Docs] Update installation instructions to include CUDA 11.8 xFormers (#2246)
2023-12-22 23:20:02 -08:00
Woosuk Kwon
6565d9e33e
Update installation instruction for vLLM + CUDA 11.8 (#2086)
2023-12-13 09:25:59 -08:00
Woosuk Kwon
06e9ebebd5
Add instructions to install vLLM+cu118 (#1717)
2023-11-18 23:48:58 -08:00
Woosuk Kwon
7d7e3b78a3
Use --ipc=host in docker run for distributed inference (#1125)
2023-09-21 18:26:47 -07:00
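The --ipc=host change above (#1125) reflects that distributed inference workers share memory over the IPC namespace, so the container must not be isolated from it. A hedged sketch of the invocation; the image and model names are illustrative, not taken from this log:

```shell
# Run a vLLM container with the host IPC namespace so that multi-GPU
# workers can use shared memory (per #1125). Image/model are placeholders.
docker run --gpus all --ipc=host \
    vllm/vllm-openai \
    --model facebook/opt-125m \
    --tensor-parallel-size 2
```

An alternative to --ipc=host is enlarging the container's /dev/shm with --shm-size, which keeps IPC isolation at the cost of choosing a size up front.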
Woosuk Kwon
b9cecc2635
[Docs] Update installation page (#1005)
2023-09-10 14:23:31 -07:00
Woosuk Kwon
b7e62d3454
Fix repo & documentation URLs (#163)
2023-06-19 20:03:40 -07:00
Woosuk Kwon
dcda03b4cb
Write README and front page of doc (#147)
2023-06-18 03:19:38 -07:00
Zhuohan Li
bec7b2dc26
Add quickstart guide (#148)
2023-06-18 01:26:12 +08:00
Woosuk Kwon
0b98ba15c7
Change the name to vLLM (#150)
2023-06-17 03:07:40 -07:00
Woosuk Kwon
e38074b1e6
Support FP32 (#141)
2023-06-07 00:40:21 -07:00
Woosuk Kwon
376725ce74
[PyPI] Packaging for PyPI distribution (#140)
2023-06-05 20:03:14 -07:00
Woosuk Kwon
56b7f0efa4
Add a doc for installation (#128)
2023-05-27 01:13:06 -07:00
Woosuk Kwon
19d2899439
Add initial sphinx docs (#120)
2023-05-22 17:02:44 -07:00