# SPDX-License-Identifier: Apache-2.0

import pytest


@pytest.fixture
def sample_prompts():
    return [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]


@pytest.fixture
def sample_token_ids():
    return [
        [0],
        [0, 1],
        [0, 2, 1],
        [0, 3, 1, 2],
    ]


@pytest.fixture
def sample_regex():
    return (r"((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.){3}"
            r"(25[0-5]|(2[0-4]|1\d|[1-9]|)\d)")


# Note: Ensure this only uses attributes compatible with xgrammar
@pytest.fixture
def sample_json_schema():
    return {
        "type": "object",
        "properties": {
            "name": {
                "type": "string"
            },
            "age": {
                "type": "integer"
            },
            "skills": {
                "type": "array",
                "items": {
                    "type": "string",
                }
            },
            "work_history": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "company": {
                            "type": "string"
                        },
                        "duration": {
                            "type": "number"
                        },
                        "position": {
                            "type": "string"
                        }
                    },
                    "required": ["company", "position"]
                }
            }
        },
        "required": ["name", "age", "skills", "work_history"]
    }
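
# Hypothetical illustration (not part of the original file): an object that
# satisfies `sample_json_schema`. A consuming test could validate guided
# generation output with the third-party `jsonschema` package (an assumed
# extra dependency, not required by this conftest):
#
#   import jsonschema
#
#   def test_guided_json_output(sample_json_schema):
#       candidate = {
#           "name": "Alice",
#           "age": 30,
#           "skills": ["python", "sql"],
#           "work_history": [{"company": "Acme", "position": "Engineer"}],
#       }
#       jsonschema.validate(instance=candidate, schema=sample_json_schema)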


# A schema unsupported by xgrammar
@pytest.fixture
def unsupported_json_schema():
    return {
        "type": "object",
        "properties": {
            "score": {
                "type": "integer",
                "minimum": 0,
                "maximum": 100  # Numeric range
            },
            "grade": {
                "type": "string",
                "pattern": "^[A-D]$"  # Regex pattern
            },
            "email": {
                "type": "string",
                "pattern": "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
            },
            "tags": {
                "type": "array",
                "items": {
                    "type": "string",
                    "pattern":
                    "^[a-z]{1,10}$"  # Combining length and pattern restrictions
                }
            }
        },
        "required": ["score", "grade", "email", "tags"]
    }
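
# Hypothetical illustration (not part of the original file): the schema above
# is satisfiable, e.g. by
#
#   {"score": 87, "grade": "B", "email": "user@example.com", "tags": ["ml", "vllm"]}
#
# It is "unsupported" only because the pattern / minimum / maximum constraints
# are attributes that xgrammar does not handle (see the note above).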


@pytest.fixture
def sample_definition_json_schema():
    return {
        '$defs': {
            'Step': {
                'properties': {
                    'explanation': {
                        'title': 'Explanation',
                        'type': 'string'
                    },
                    'output': {
                        'title': 'Output',
                        'type': 'string'
                    }
                },
                'required': ['explanation', 'output'],
                'title': 'Step',
                'type': 'object'
            }
        },
        'properties': {
            'steps': {
                'items': {
                    '$ref': '#/$defs/Step'
                },
                'title': 'Steps',
                'type': 'array'
            },
            'final_answer': {
                'title': 'Final Answer',
                'type': 'string'
            }
        },
        'required': ['steps', 'final_answer'],
        'title': 'MathReasoning',
        'type': 'object'
    }
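
# Hypothetical illustration (not part of the original file): data matching
# `sample_definition_json_schema` is a chain-of-reasoning object built from
# `$defs/Step` entries, e.g.:
#
#   {
#       "steps": [{"explanation": "Add 2 and 2.", "output": "4"}],
#       "final_answer": "4"
#   }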


@pytest.fixture
def sample_guided_choice():
    return [
        "Python", "Java", "JavaScript", "C++", "C#", "PHP", "TypeScript",
        "Ruby", "Swift", "Kotlin"
    ]


@pytest.fixture
def sample_sql_ebnf():
    return """
root ::= select_statement
select_statement ::= "SELECT" column "from" table "where" condition
column ::= "col_1" | "col_2"
table ::= "table_1" | "table_2"
condition ::= column "=" number
number ::= "1" | "2"
"""


@pytest.fixture
def sample_sql_lark():
    return ("""
start: select_statement
select_statement: "SELECT" column "from" table "where" condition
column: "col_1" | "col_2"
table: "table_1" | "table_2"
condition: column "=" number
number: "1" | "2"
""")
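
# Hypothetical note (not part of the original file): the Lark grammar mirrors
# the EBNF fixture above (same rules, Lark syntax), so it describes the same
# toy SQL subset, e.g. "SELECT col_2 from table_2 where col_2 = 2".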