[V1] Logprobs and prompt logprobs support (#9880)
This PR adds support for sample logprobs & prompt logprobs to vLLM v1.
New behavior:
- During model execution, the model runner computes sample logprobs (if the user-provided logprobs setting is not None) and prompt logprobs (if the user-provided prompt_logprobs setting is not None). For both sample and prompt logprobs, the engine core returns three vectors: token ids, token logprob values, and token ranks. Ranks reflect tokens' 1-indexed positions in the vocabulary vector after sorting the vocabulary by log probability in descending order.
- In scheduler.update_from_output(), sample and prompt logprobs are incorporated into the EngineCoreOutput data structure, which is transferred to the engine client. If multiprocessing is enabled, sample and prompt logprobs are (de)serialized along with the EngineCoreOutput data structure.
- During output processing, the LogprobsProcessor transforms the triplet of token ids, token logprob values, and token ranks into the OpenAI-compatible List[Dict[token id, Logprob]] format (for sample and prompt logprobs, respectively); a sketch of this transformation appears after the sign-off lines below.
- Each Logprob instance (whether sample or prompt) consists of a token's log probability, rank, and detokenized string representation. Note that logprob detokenization is handled by the LogprobsProcessor, not the detokenizer.
Signed-off-by: Andrew Feldman <afeldman@neuralmagic.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Co-authored-by: rshaw@neuralmagic.com <rshaw@neuralmagic.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
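For illustration only, here is a minimal sketch of the triplet-to-dict conversion described above. The `Logprob` dataclass and `to_openai_logprobs` helper are hypothetical stand-ins, not the actual vLLM LogprobsProcessor API, and detokenization is approximated with a plain id-to-string lookup table (the real LogprobsProcessor performs detokenization itself).

```python
from dataclasses import dataclass


@dataclass
class Logprob:
    """Hypothetical stand-in for a per-token logprob record."""
    logprob: float       # log-probability of the token
    rank: int            # 1-indexed rank in the vocab, sorted by logprob (descending)
    decoded_token: str   # detokenized string representation


def to_openai_logprobs(
    token_ids: list[list[int]],
    logprobs: list[list[float]],
    ranks: list[list[int]],
    vocab: dict[int, str],
) -> list[dict[int, Logprob]]:
    """Convert per-position (token ids, logprob values, ranks) vectors into
    the OpenAI-compatible list-of-dicts format: one dict per position,
    keyed by token id."""
    return [
        {
            tok: Logprob(lp, rank, vocab.get(tok, ""))
            for tok, lp, rank in zip(pos_ids, pos_lps, pos_ranks)
        }
        for pos_ids, pos_lps, pos_ranks in zip(token_ids, logprobs, ranks)
    ]


# Example: one generated position with two candidate tokens.
print(to_openai_logprobs([[42, 7]], [[-0.1, -2.5]], [[1, 2]], {42: "Hello", 7: "Hi"}))
```

Per the description above, the same three vectors are produced for prompt positions as well, and the conversion is applied separately to sample and prompt logprobs.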
# SPDX-License-Identifier: Apache-2.0
import random
from typing import Optional

import pytest

from vllm import LLM, SamplingParams

MODEL = "facebook/opt-125m"
DTYPE = "half"


def _vllm_model(apc: bool, vllm_runner, monkeypatch):
    """Set up VllmRunner instance."""
    monkeypatch.setenv("VLLM_USE_V1", "1")
    return vllm_runner(
        MODEL,
        dtype=DTYPE,
        max_model_len=128,
        enforce_eager=True,
        enable_prefix_caching=apc,
        gpu_memory_utilization=0.5,
    )


@pytest.fixture(
    # Function scope decouples tests & allows
    # env var adjustment via monkeypatch
    scope="function",
    # Prefix caching
    params=[False, True])
def vllm_model(vllm_runner, request, monkeypatch):
    """VllmRunner test fixture parameterized by APC True/False."""
    with _vllm_model(request.param, vllm_runner, monkeypatch) as vllm_model:
        yield vllm_model


@pytest.fixture(scope="function")
def vllm_model_apc(vllm_runner, monkeypatch):
    """VllmRunner test fixture with APC."""
    with _vllm_model(True, vllm_runner, monkeypatch) as vllm_model:
        yield vllm_model


def _get_test_sampling_params(
    prompt_list: list[str],
    seed: Optional[int] = 42,
) -> tuple[list[SamplingParams], list[int]]:
    """Generate random sampling params for a batch."""

    def get_mostly_n_gt1() -> int:
        """Mostly n in [2, 20]; ~1/3 of the time n = 1."""
        x = random.randint(0, 28)
        if x < 10:
            return 1
        else:
            return x - 8

    n_list = [get_mostly_n_gt1() for _ in range(len(prompt_list))]
    # High temperature to maximize the chance of unique completions
    return [
        SamplingParams(temperature=0.95, top_p=0.95, n=n, seed=seed)
        for n in n_list
    ], n_list


def test_parallel_sampling(vllm_model, example_prompts) -> None:
    """Test passes if parallel sampling `n>1` yields `n` unique completions.

    Args:
      vllm_model: VllmRunner instance under test.
      example_prompts: test fixture providing prompts for testing.
    """
    sampling_params_list, n_list = _get_test_sampling_params(example_prompts)
    model: LLM = vllm_model.model
    outputs = model.generate(example_prompts, sampling_params_list)

    # Validate each request response
    for out, n in zip(outputs, n_list):
        completion_counts: dict[str, int] = {}
        # Assert correct number of completions
        assert len(out.outputs) == n, (
            f"{len(out.outputs)} completions; {n} expected.")
        for idx in range(n):
            comp = out.outputs[idx]
            # Assert correct completion indices
            assert comp.index == idx, (f"Index {comp.index}; expected {idx}.")
            text = comp.text
            completion_counts[text] = completion_counts.get(text, 0) + 1
        # Assert unique completions
        if len(completion_counts) != n:
            repeats = {
                txt: num
                for (txt, num) in completion_counts.items() if num > 1
            }
            raise AssertionError(
                f"{len(completion_counts)} unique completions; expected"
                f" {n}. Repeats: {repeats}")