
# vLLM benchmark suite

## Introduction
This directory contains two sets of benchmarks for vLLM:

- Performance benchmark: benchmarks vLLM's performance under various workloads, so that developers can see whether their PR improves or degrades vLLM's performance.
- Nightly benchmark: compares vLLM's performance against alternatives (tgi, trt-llm and lmdeploy), so that the public knows when to choose vLLM.

See the vLLM performance dashboard for the latest performance benchmark results and the vLLM GitHub README for the latest nightly benchmark results.
## Performance benchmark quick overview
**Benchmarking Coverage**: latency, throughput and fixed-qps serving on A100 (support for FP8 benchmarks on H100 is coming!), with different models.

**Benchmarking Duration**: about 1 hour.

For benchmarking developers: please try your best to keep the benchmarking duration to about 1 hour so that the suite does not take too long to run.
## Nightly benchmark quick overview
**Benchmarking Coverage**: fixed-qps serving on A100 (support for FP8 benchmarks on H100 is coming!) on Llama-3 8B, 70B and Mixtral 8x7B.

**Benchmarking Engines**: vllm, TGI, trt-llm and lmdeploy.

**Benchmarking Duration**: about 3.5 hours.
## Trigger the benchmark
The performance benchmark will be triggered when:

- A PR is merged into vllm.
- A commit is pushed to a PR that has both the `perf-benchmarks` and `ready` labels.
The nightly benchmark will be triggered when:

- A commit is pushed to a PR that has both the `perf-benchmarks` and `nightly-benchmarks` labels.
## Performance benchmark details
See performance-benchmarks-descriptions.md for detailed descriptions, and use `tests/latency-tests.json`, `tests/throughput-tests.json` and `tests/serving-tests.json` to configure the test cases.
### Latency test
Here is an example of one test inside `latency-tests.json`:

```json
[
    {
        "test_name": "latency_llama8B_tp1",
        "parameters": {
            "model": "meta-llama/Meta-Llama-3-8B",
            "tensor_parallel_size": 1,
            "load_format": "dummy",
            "num_iters_warmup": 5,
            "num_iters": 15
        }
    }
]
```
In this example:

- The `test_name` attribute is a unique identifier for the test. In `latency-tests.json`, it must start with `latency_`.
- The `parameters` attribute controls the command line arguments to be used for `benchmark_latency.py`. Note that you should use an underscore `_` instead of a dash `-` when specifying the command line arguments, and `run-performance-benchmarks.sh` will convert the underscores to dashes when feeding the arguments to `benchmark_latency.py`. For example, the corresponding command line arguments for `benchmark_latency.py` will be `--model meta-llama/Meta-Llama-3-8B --tensor-parallel-size 1 --load-format dummy --num-iters-warmup 5 --num-iters 15` (see the sketch after this list).
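For illustration, here is a minimal Python sketch of that underscore-to-dash conversion. The actual conversion happens inside `run-performance-benchmarks.sh`; the helper below is a hypothetical stand-in:

```python
def params_to_cli_args(parameters: dict) -> list[str]:
    """Hypothetical helper: turn a test's "parameters" dict into
    command line arguments, replacing underscores with dashes."""
    args = []
    for key, value in parameters.items():
        args.append("--" + key.replace("_", "-"))
        # An empty-string value marks a boolean flag that takes no
        # argument (e.g. "disable_log_stats" in serving-tests.json).
        if value != "":
            args.append(str(value))
    return args

# params_to_cli_args({"model": "meta-llama/Meta-Llama-3-8B", "num_iters": 15})
# -> ["--model", "meta-llama/Meta-Llama-3-8B", "--num-iters", "15"]
```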
Note that the performance numbers are highly sensitive to the values of these parameters. Please make sure the parameters are set correctly.
WARNING: The benchmarking script will save json results by itself, so please do not configure the `--output-json` parameter in the json file.
### Throughput test
The tests are specified in `throughput-tests.json`. The syntax is similar to `latency-tests.json`, except that the parameters are fed to `benchmark_throughput.py`.
The number reported by this test is also stable across runs -- even a slight change in this number can indicate a real change in performance.
### Serving test
We test the throughput by using `benchmark_serving.py` with request rate = inf to cover the online serving overhead. The corresponding parameters are in `serving-tests.json`, and here is an example:
```json
[
    {
        "test_name": "serving_llama8B_tp1_sharegpt",
        "qps_list": [1, 4, 16, "inf"],
        "server_parameters": {
            "model": "meta-llama/Meta-Llama-3-8B",
            "tensor_parallel_size": 1,
            "swap_space": 16,
            "disable_log_stats": "",
            "disable_log_requests": "",
            "load_format": "dummy"
        },
        "client_parameters": {
            "model": "meta-llama/Meta-Llama-3-8B",
            "backend": "vllm",
            "dataset_name": "sharegpt",
            "dataset_path": "./ShareGPT_V3_unfiltered_cleaned_split.json",
            "num_prompts": 200
        }
    }
]
```
Inside this example:

- The `test_name` attribute is also a unique identifier for the test. It must start with `serving_`.
- The `server_parameters` attribute includes the command line arguments for the vLLM server.
- The `client_parameters` attribute includes the command line arguments for `benchmark_serving.py`.
- The `qps_list` attribute controls the list of qps values to test. It will be used to configure the `--request-rate` parameter in `benchmark_serving.py` (see the sketch after this list).
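For illustration, here is a minimal Python sketch of how `qps_list` could drive repeated runs. The actual orchestration lives in `run-performance-benchmarks.sh`; this loop is a simplified, hypothetical version:

```python
import subprocess

def run_serving_test(test: dict) -> None:
    """Hypothetical sketch: invoke benchmark_serving.py once per qps value."""
    client = test["client_parameters"]
    for qps in test["qps_list"]:
        subprocess.run(
            [
                "python3", "benchmark_serving.py",
                "--backend", client["backend"],
                "--model", client["model"],
                "--dataset-name", client["dataset_name"],
                "--dataset-path", client["dataset_path"],
                "--num-prompts", str(client["num_prompts"]),
                # "inf" is passed through unchanged; an unbounded request
                # rate sends all requests at once.
                "--request-rate", str(qps),
            ],
            check=True,
        )
```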
The numbers reported by this test are less stable than those of the latency and throughput benchmarks (due to randomized sharegpt dataset sampling inside `benchmark_serving.py`), but a large change in these numbers (e.g. a 5% change) still indicates a real difference in performance.
WARNING: The benchmarking script will save json results by itself, so please do not configure `--save-result` or other results-saving-related parameters in `serving-tests.json`.
### Visualizing the results
The `convert-results-json-to-markdown.py` script helps you put the benchmarking results into a markdown table, by formatting `descriptions.md` with the real benchmarking results.
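Conceptually, the conversion works like the following sketch (a simplified, hypothetical stand-in for the actual script, assuming each result file holds one flat json record):

```python
import json
from pathlib import Path

def results_to_markdown(result_dir: str) -> str:
    """Hypothetical sketch: merge per-test json results into one markdown table."""
    rows = [json.loads(p.read_text()) for p in Path(result_dir).glob("*.json")]
    headers = sorted({key for row in rows for key in row})
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(row.get(h, "")) for h in headers) + " |")
    return "\n".join(lines)
```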
You can find the results presented as a table inside the `buildkite/performance-benchmark` job page. If you do not see the table, please wait until the benchmark finishes running.

The json version of the table (together with the json version of the benchmark) will also be attached to the markdown file.

The raw benchmarking results (in the format of json files) are in the `Artifacts` tab of the benchmarking job.
## Nightly test details
See nightly-descriptions.md for a detailed description of the test workload, models and docker containers used to benchmark the other LLM engines.
### Workflow
- The `nightly-pipeline.yaml` file specifies the docker containers for the different LLM serving engines.
- Inside each container, we run `run-nightly-suite.sh`, which probes the serving engine of the current container.
- `run-nightly-suite.sh` redirects the request to `tests/run-[llm serving engine name]-nightly.sh`, which parses the workload described in `nightly-tests.json` and performs the benchmark.
- Finally, we run `scripts/plot-nightly-results.py` to collect and plot the final benchmarking results, and upload the results to buildkite.
### Nightly tests
In `nightly-tests.json`, we include the command line arguments for the benchmarking commands, together with the benchmarking test cases. The format is very similar to that of the performance benchmarks.
### Docker containers
The docker containers for benchmarking are specified in `nightly-pipeline.yaml`.

WARNING: the docker versions are HARD-CODED and SHOULD BE ALIGNED WITH `nightly-descriptions.md`. The docker versions need to be hard-coded as there are several version-specific bug fixes inside `tests/run-[llm serving engine name]-nightly.sh`.
WARNING: updating `trt-llm` to the latest version is not trivial, as it requires updating several protobuf files in tensorrt-demo.