From bc2d4473bf943feb8abac1688dbc7c5701d7282d Mon Sep 17 00:00:00 2001 From: Harry Mellor <19981378+hmellor@users.noreply.github.com> Date: Mon, 10 Mar 2025 18:43:08 +0100 Subject: [PATCH] [Docs] Make installation URLs nicer (#14556) Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> --- README.md | 2 +- .../index.md => installation.md} | 12 ++-- .../index.md => ai_accelerator.md} | 64 +++++++++---------- .../installation/{cpu/index.md => cpu.md} | 20 +++--- .../installation/cpu/arm.inc.md | 2 +- .../installation/cpu/x86.inc.md | 2 +- .../installation/{gpu/index.md => gpu.md} | 46 ++++++------- docs/source/index.md | 2 +- 8 files changed, 75 insertions(+), 75 deletions(-) rename docs/source/getting_started/{installation/index.md => installation.md} (62%) rename docs/source/getting_started/installation/{ai_accelerator/index.md => ai_accelerator.md} (77%) rename docs/source/getting_started/installation/{cpu/index.md => cpu.md} (96%) rename docs/source/getting_started/installation/{gpu/index.md => gpu.md} (84%) diff --git a/README.md b/README.md index 5367972f..405e3a25 100644 --- a/README.md +++ b/README.md @@ -91,7 +91,7 @@ pip install vllm ``` Visit our [documentation](https://docs.vllm.ai/en/latest/) to learn more. -- [Installation](https://docs.vllm.ai/en/latest/getting_started/installation/index.html) +- [Installation](https://docs.vllm.ai/en/latest/getting_started/installation.html) - [Quickstart](https://docs.vllm.ai/en/latest/getting_started/quickstart.html) - [List of Supported Models](https://docs.vllm.ai/en/latest/models/supported_models.html) diff --git a/docs/source/getting_started/installation/index.md b/docs/source/getting_started/installation.md similarity index 62% rename from docs/source/getting_started/installation/index.md rename to docs/source/getting_started/installation.md index c64c3a72..cbaa1180 100644 --- a/docs/source/getting_started/installation/index.md +++ b/docs/source/getting_started/installation.md @@ -8,20 +8,20 @@ vLLM supports the following hardware platforms: :maxdepth: 1 :hidden: -gpu/index -cpu/index -ai_accelerator/index +installation/gpu +installation/cpu +installation/ai_accelerator ::: -- +- - NVIDIA CUDA - AMD ROCm - Intel XPU -- +- - Intel/AMD x86 - ARM AArch64 - Apple silicon -- +- - Google TPU - Intel Gaudi - AWS Neuron diff --git a/docs/source/getting_started/installation/ai_accelerator/index.md b/docs/source/getting_started/installation/ai_accelerator.md similarity index 77% rename from docs/source/getting_started/installation/ai_accelerator/index.md rename to docs/source/getting_started/installation/ai_accelerator.md index 01793572..61a853cc 100644 --- a/docs/source/getting_started/installation/ai_accelerator/index.md +++ b/docs/source/getting_started/installation/ai_accelerator.md @@ -9,7 +9,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you :selected: :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -19,7 +19,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -29,7 +29,7 @@ vLLM is a Python library that supports the following AI accelerators. 
Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -39,7 +39,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -56,7 +56,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "## Requirements" :end-before: "## Configure a new environment" ::: @@ -66,7 +66,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "## Requirements" :end-before: "## Configure a new environment" ::: @@ -76,7 +76,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "## Requirements" :end-before: "## Configure a new environment" ::: @@ -86,7 +86,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -103,7 +103,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "## Configure a new environment" :end-before: "## Set up using Python" ::: @@ -113,7 +113,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "## Configure a new environment" :end-before: "## Set up using Python" ::: @@ -123,7 +123,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "## Configure a new environment" :end-before: "## Set up using Python" ::: @@ -133,7 +133,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} ../python_env_setup.inc.md +:::{include} python_env_setup.inc.md ::: :::: @@ -150,7 +150,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -160,7 +160,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -170,7 +170,7 @@ vLLM is a Python library that supports the following AI accelerators. 
Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -180,7 +180,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -197,7 +197,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -207,7 +207,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -217,7 +217,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -227,7 +227,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -246,7 +246,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -256,7 +256,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -266,7 +266,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -276,7 +276,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -293,7 +293,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "### Build image from source" :end-before: "## Extra information" ::: @@ -303,7 +303,7 @@ vLLM is a Python library that supports the following AI accelerators. 
Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "### Build image from source" :end-before: "## Extra information" ::: @@ -313,7 +313,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "### Build image from source" :end-before: "## Extra information" ::: @@ -323,7 +323,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "### Build image from source" :end-before: "## Extra information" ::: @@ -340,7 +340,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Google TPU :sync: tpu -:::{include} tpu.inc.md +:::{include} ai_accelerator/tpu.inc.md :start-after: "## Extra information" ::: @@ -349,7 +349,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} Intel Gaudi :sync: hpu-gaudi -:::{include} hpu-gaudi.inc.md +:::{include} ai_accelerator/hpu-gaudi.inc.md :start-after: "## Extra information" ::: @@ -358,7 +358,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} AWS Neuron :sync: neuron -:::{include} neuron.inc.md +:::{include} ai_accelerator/neuron.inc.md :start-after: "## Extra information" ::: @@ -367,7 +367,7 @@ vLLM is a Python library that supports the following AI accelerators. Select you ::::{tab-item} OpenVINO :sync: openvino -:::{include} openvino.inc.md +:::{include} ai_accelerator/openvino.inc.md :start-after: "## Extra information" ::: diff --git a/docs/source/getting_started/installation/cpu/index.md b/docs/source/getting_started/installation/cpu.md similarity index 96% rename from docs/source/getting_started/installation/cpu/index.md rename to docs/source/getting_started/installation/cpu.md index 9c597793..d7b8cc84 100644 --- a/docs/source/getting_started/installation/cpu/index.md +++ b/docs/source/getting_started/installation/cpu.md @@ -9,7 +9,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C :selected: :sync: x86 -:::{include} x86.inc.md +:::{include} cpu/x86.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -19,7 +19,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C ::::{tab-item} ARM AArch64 :sync: arm -:::{include} arm.inc.md +:::{include} cpu/arm.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -29,7 +29,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C ::::{tab-item} Apple silicon :sync: apple -:::{include} apple.inc.md +:::{include} cpu/apple.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -48,7 +48,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C ::::{tab-item} Intel/AMD x86 :sync: x86 -:::{include} x86.inc.md +:::{include} cpu/x86.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -58,7 +58,7 @@ vLLM is a Python library that supports the following CPU variants. 
Select your C ::::{tab-item} ARM AArch64 :sync: arm -:::{include} arm.inc.md +:::{include} cpu/arm.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -68,7 +68,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C ::::{tab-item} Apple silicon :sync: apple -:::{include} apple.inc.md +:::{include} cpu/apple.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -81,7 +81,7 @@ vLLM is a Python library that supports the following CPU variants. Select your C ### Create a new Python environment -:::{include} ../python_env_setup.inc.md +:::{include} python_env_setup.inc.md ::: ### Pre-built wheels @@ -96,7 +96,7 @@ Currently, there are no pre-built CPU wheels. ::::{tab-item} Intel/AMD x86 :sync: x86 -:::{include} x86.inc.md +:::{include} cpu/x86.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -106,7 +106,7 @@ Currently, there are no pre-built CPU wheels. ::::{tab-item} ARM AArch64 :sync: arm -:::{include} arm.inc.md +:::{include} cpu/arm.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -116,7 +116,7 @@ Currently, there are no pre-built CPU wheels. ::::{tab-item} Apple silicon :sync: apple -:::{include} apple.inc.md +:::{include} cpu/apple.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: diff --git a/docs/source/getting_started/installation/cpu/arm.inc.md b/docs/source/getting_started/installation/cpu/arm.inc.md index a661a0ca..e7d8d606 100644 --- a/docs/source/getting_started/installation/cpu/arm.inc.md +++ b/docs/source/getting_started/installation/cpu/arm.inc.md @@ -20,7 +20,7 @@ There are no pre-built wheels or images for this device, so you must build vLLM ### Build wheel from source -:::{include} build.inc.md +:::{include} cpu/build.inc.md ::: Testing has been conducted on AWS Graviton3 instances for compatibility. diff --git a/docs/source/getting_started/installation/cpu/x86.inc.md b/docs/source/getting_started/installation/cpu/x86.inc.md index 1dafc366..b2f3bafb 100644 --- a/docs/source/getting_started/installation/cpu/x86.inc.md +++ b/docs/source/getting_started/installation/cpu/x86.inc.md @@ -22,7 +22,7 @@ There are no pre-built wheels or images for this device, so you must build vLLM ### Build wheel from source -:::{include} build.inc.md +:::{include} cpu/build.inc.md ::: :::{note} diff --git a/docs/source/getting_started/installation/gpu/index.md b/docs/source/getting_started/installation/gpu.md similarity index 84% rename from docs/source/getting_started/installation/gpu/index.md rename to docs/source/getting_started/installation/gpu.md index f82c4bda..22db9923 100644 --- a/docs/source/getting_started/installation/gpu/index.md +++ b/docs/source/getting_started/installation/gpu.md @@ -9,7 +9,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G :selected: :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -19,7 +19,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G ::::{tab-item} AMD ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -29,7 +29,7 @@ vLLM is a Python library that supports the following GPU variants. 
Select your G ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "# Installation" :end-before: "## Requirements" ::: @@ -49,7 +49,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -59,7 +59,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G ::::{tab-item} AMD ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -69,7 +69,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "## Requirements" :end-before: "## Set up using Python" ::: @@ -82,7 +82,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G ### Create a new Python environment -:::{include} ../python_env_setup.inc.md +:::{include} python_env_setup.inc.md ::: :::::{tab-set} @@ -91,7 +91,7 @@ vLLM is a Python library that supports the following GPU variants. Select your G ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "## Create a new Python environment" :end-before: "### Pre-built wheels" ::: @@ -122,7 +122,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -132,7 +132,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} AMD ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -142,7 +142,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "### Pre-built wheels" :end-before: "### Build wheel from source" ::: @@ -161,7 +161,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -171,7 +171,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} AMD ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -181,7 +181,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "### Build wheel from source" :end-before: "## Set up using Docker" ::: @@ -200,7 +200,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -210,7 +210,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} AMD 
ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -220,7 +220,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "### Pre-built images" :end-before: "### Build image from source" ::: @@ -237,7 +237,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "### Build image from source" :end-before: "## Supported features" ::: @@ -247,7 +247,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} AMD ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "### Build image from source" :end-before: "## Supported features" ::: @@ -257,7 +257,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "### Build image from source" :end-before: "## Supported features" ::: @@ -274,7 +274,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} NVIDIA CUDA :sync: cuda -:::{include} cuda.inc.md +:::{include} gpu/cuda.inc.md :start-after: "## Supported features" ::: @@ -283,7 +283,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} AMD ROCm :sync: rocm -:::{include} rocm.inc.md +:::{include} gpu/rocm.inc.md :start-after: "## Supported features" ::: @@ -292,7 +292,7 @@ There is no extra information on creating a new Python environment for this devi ::::{tab-item} Intel XPU :sync: xpu -:::{include} xpu.inc.md +:::{include} gpu/xpu.inc.md :start-after: "## Supported features" ::: diff --git a/docs/source/index.md b/docs/source/index.md index 09ada433..52c4622d 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -62,7 +62,7 @@ For more information, check out the following: :caption: Getting Started :maxdepth: 1 -getting_started/installation/index +getting_started/installation getting_started/quickstart getting_started/examples/examples_index getting_started/troubleshooting
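A minimal sketch of the pattern this patch applies, for reviewer reference: with `gpu/index.md` renamed to `gpu.md`, each `{include}` target inside it gains a `gpu/` prefix, and the shared `python_env_setup.inc.md` include drops its `../` because that file is now a sibling of the page. The excerpt below is assembled from the hunks above and is illustrative only — it is not an additional file in this patch, and the exact surrounding blank lines and tab-set markup are assumed.

```markdown
<!-- docs/source/getting_started/installation/gpu.md after this change (illustrative excerpt) -->
::::{tab-item} NVIDIA CUDA
:sync: cuda

:::{include} gpu/cuda.inc.md
:start-after: "## Requirements"
:end-before: "## Set up using Python"
:::

::::

<!-- python_env_setup.inc.md now sits next to gpu.md, so no "../" prefix is needed -->
:::{include} python_env_setup.inc.md
:::
```

The same relative-path adjustment accounts for every `cpu/*.inc.md` and `ai_accelerator/*.inc.md` hunk above. It also explains the `cpu/build.inc.md` changes inside `arm.inc.md` and `x86.inc.md`: those snippet files stay in the `cpu/` directory but are now pulled in from `cpu.md` one level up, which suggests these include paths resolve relative to the page being built rather than the snippet file.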