[CI/Build] Drop Python 3.8 support (#10038)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Co-authored-by: DarkLight1337 <tlleungac@connect.ust.hk>
Russell Bryant 2024-11-06 09:31:01 -05:00 committed by GitHub
parent 399c798608
commit 098f94de42
8 changed files with 11 additions and 18 deletions

View File

@@ -6,7 +6,7 @@ version: 2
 build:
   os: ubuntu-22.04
   tools:
-    python: '3.9'
+    python: "3.12"
 sphinx:
   configuration: docs/source/conf.py

View File

@@ -31,7 +31,7 @@ install(CODE "set(CMAKE_INSTALL_LOCAL_ONLY TRUE)" ALL_COMPONENTS)
 # Supported python versions. These versions will be searched in order, the
 # first match will be selected. These should be kept in sync with setup.py.
 #
-set(PYTHON_SUPPORTED_VERSIONS "3.8" "3.9" "3.10" "3.11" "3.12")
+set(PYTHON_SUPPORTED_VERSIONS "3.9" "3.10" "3.11" "3.12")
 # Supported NVIDIA architectures.
 set(CUDA_SUPPORTED_ARCHS "7.0;7.5;8.0;8.6;8.9;9.0")
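Note: the CMake list above is meant to stay in sync with setup.py. As a minimal sketch only (hypothetical helper, not vLLM's actual setup.py code), a matching interpreter check in Python could look like:

    # Illustrative only: mirrors the "first match wins" search over
    # PYTHON_SUPPORTED_VERSIONS above; keep the tuple in sync with CMake.
    import sys

    SUPPORTED_VERSIONS = ("3.9", "3.10", "3.11", "3.12")

    def check_python_version() -> None:
        current = f"{sys.version_info.major}.{sys.version_info.minor}"
        if current not in SUPPORTED_VERSIONS:
            raise RuntimeError(
                f"Python {current} is not supported; "
                f"supported versions: {', '.join(SUPPORTED_VERSIONS)}")

    check_python_version()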

View File

@@ -13,8 +13,6 @@ Requirements
 * GPU: MI200s (gfx90a), MI300 (gfx942), Radeon RX 7900 series (gfx1100)
 * ROCm 6.2
-
-Note: PyTorch 2.5+/ROCm6.2 dropped the support for python 3.8.
 Installation options:
 #. :ref:`Build from source with docker <build_from_source_docker_rocm>`

View File

@@ -66,7 +66,7 @@ If you want to access the wheels for previous commits, you can specify the commit
 $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
 $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
-Note that the wheels are built with Python 3.8 ABI (see `PEP 425 <https://peps.python.org/pep-0425/>`_ for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (``1.0.0.dev``) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata.
+Note that the wheels are built with Python 3.9 ABI (see `PEP 425 <https://peps.python.org/pep-0425/>`_ for more details about ABI), so **they are compatible with Python 3.9 and later**. The version string in the wheel file name (``1.0.0.dev``) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata.
 Another way to access the latest code is to use the docker images:
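The PEP 425 tags mentioned in the note can also be decoded straight from the wheel filename; a small sketch, assuming the third-party packaging library is installed (an assumption, not part of this change):

    # Decode the PEP 425 tags of the wheel filename used above.
    # Requires: pip install packaging
    from packaging.utils import parse_wheel_filename

    _, version, _, tags = parse_wheel_filename(
        "vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl")
    print(version)  # 1.0.0.dev0 -- the placeholder version from the note
    for tag in tags:
        # abi3 marks a stable-ABI wheel: built once, usable on the tagged
        # CPython version and every later 3.x release.
        print(tag.interpreter, tag.abi, tag.platform)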

View File

@@ -11,7 +11,7 @@ Requirements
 ------------
 * OS: Linux
-* Python: 3.8 -- 3.11
+* Python: 3.9 -- 3.11
 * Accelerator: NeuronCore_v2 (in trn1/inf2 instances)
 * Pytorch 2.0.1/2.1.1
 * AWS Neuron SDK 2.16/2.17 (Verified on python 3.8)

View File

@@ -12,7 +12,7 @@ This guide will help you quickly get started with vLLM to:
 Prerequisites
 --------------
 - OS: Linux
-- Python: 3.8 - 3.12
+- Python: 3.9 -- 3.12
 - GPU: compute capability 7.0 or higher (e.g., V100, T4, RTX20xx, A100, L4, H100, etc.)
 Installation

View File

@@ -55,12 +55,6 @@ def is_ninja_available() -> bool:
     return which("ninja") is not None
-
-
-def remove_prefix(text, prefix):
-    if text.startswith(prefix):
-        return text[len(prefix):]
-    return text
 
 
 class CMakeExtension(Extension):
 
     def __init__(self, name: str, cmake_lists_dir: str = '.', **kwa) -> None:
@@ -197,8 +191,10 @@ class cmake_build_ext(build_ext):
             os.makedirs(self.build_temp)
 
         targets = []
-        target_name = lambda s: remove_prefix(remove_prefix(s, "vllm."),
-                                              "vllm_flash_attn.")
+
+        def target_name(s: str) -> str:
+            return s.removeprefix("vllm.").removeprefix("vllm_flash_attn.")
+
         # Build all the extensions
         for ext in self.extensions:
             self.configure(ext)
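This setup.py change works because str.removeprefix(), added in Python 3.9 by PEP 616, is now always available; a quick standalone illustration of why it can replace the deleted helper:

    # str.removeprefix strips at most one exact prefix (unlike str.lstrip,
    # which strips a character set), matching the deleted remove_prefix().
    assert "vllm._C".removeprefix("vllm.") == "_C"
    assert "other._C".removeprefix("vllm.") == "other._C"  # no-op if absent
    # Chaining both calls reproduces the old nested remove_prefix(...) form:
    assert ("vllm.vllm_flash_attn._C"
            .removeprefix("vllm.")
            .removeprefix("vllm_flash_attn.")) == "_C"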

View File

@@ -89,12 +89,11 @@ def _get_unique_name(name: str) -> str:
     return newname
 
 
-_groups: Dict[str, Callable[[], "GroupCoordinator"]] = {}
+_groups: Dict[str, Callable[[], Optional["GroupCoordinator"]]] = {}
 
 
 def _register_group(group: "GroupCoordinator") -> None:
-    # looks like Python 3.8 does not understand `ReferenceType`
-    _groups[group.unique_name] = weakref.ref(group)  # type: ignore
+    _groups[group.unique_name] = weakref.ref(group)
 
 
 if supports_custom_op():
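Per the removed comment, the `# type: ignore` was only there because type checking of `ReferenceType` misbehaved under Python 3.8; with 3.9 as the floor, a weakref can be typed directly as a callable returning an optional referent. A self-contained sketch of the pattern (GroupCoordinator here is a stand-in, not vLLM's real class):

    import weakref
    from typing import Callable, Dict, Optional

    class GroupCoordinator:  # stand-in, not vLLM's actual class
        unique_name = "tp:0"

    # Same shape as the updated _groups dict above: calling the stored
    # weakref yields the coordinator, or None once it has been collected.
    _groups: Dict[str, Callable[[], Optional[GroupCoordinator]]] = {}

    coord = GroupCoordinator()
    _groups[coord.unique_name] = weakref.ref(coord)
    print(_groups["tp:0"]())  # the live coordinator object
    del coord
    print(_groups["tp:0"]())  # None -- hence Optional in the value type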