[V1][Bugfix][Spec Decode] Fix incorrect outputs in V1 speculative decoding due to batch indexing #14645
In `gpu_model_runner.py`, the requests in the batch are ordered according to `self.input_batch.req_ids`. However, the speculative decoding code infers the batch order from the keys of the dictionary `scheduler_output.num_scheduled_tokens`, which is not necessarily ordered the same way.

When multiple requests in the batch have different speculative lengths, the grouping of output probability slices into speculative decoding requests depends on this ordering. The existing end-to-end test is insufficient to capture this effect, since the batch size is not large enough and the requests are too similar. I have updated the E2E test with a large batch of mixed requests, some of which will have full ngram completion and some of which will not. Running this test on the main branch produces many failures in which one completion "leaks" tokens from other requests.
This one-line fix resolves the issue completely.
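To illustrate the failure mode described above, here is a minimal standalone sketch (not the actual vLLM code; function names and data are hypothetical) showing how slicing a flat output buffer in dict-key order rather than batch order mis-assigns tokens when the two orderings disagree:

```python
def split_by_dict_keys(num_scheduled_tokens, flat_outputs):
    """Buggy grouping: assumes dict key order matches the batch order."""
    out, start = {}, 0
    for req_id, n in num_scheduled_tokens.items():
        out[req_id] = flat_outputs[start:start + n]
        start += n
    return out


def split_by_batch_order(req_ids, num_scheduled_tokens, flat_outputs):
    """Fixed grouping: follow the batch order (req_ids) when slicing."""
    out, start = {}, 0
    for req_id in req_ids:
        n = num_scheduled_tokens[req_id]
        out[req_id] = flat_outputs[start:start + n]
        start += n
    return out


# flat_outputs is laid out in batch order: request "a" first (2 tokens),
# then request "b" (1 token).
req_ids = ["a", "b"]
flat = ["a0", "a1", "b0"]
# The dict's insertion order differs from the batch order.
nst = {"b": 1, "a": 2}

print(split_by_dict_keys(nst, flat))
# → {'b': ['a0'], 'a': ['a1', 'b0']}  tokens leak between requests
print(split_by_batch_order(req_ids, nst, flat))
# → {'a': ['a0', 'a1'], 'b': ['b0']}  correct grouping
```

The buggy variant hands request `b` a token that belongs to request `a`, which is exactly the cross-request leakage observed in the updated E2E test.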