[V1][Core] Add support for V1 Engine #295
base: main
Conversation
Signed-off-by: shen-shanshan <467638484@qq.com>
I notice you copied much code from vLLM's CUDA V1. Please check if you can just import that code instead of rewriting it.

Another question: do we really need a `v1` module? How about just moving the file to the right place and renaming it to something like `model_runner_v1.py`?
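(For illustration, the reuse might look like the sketch below; the exact vLLM module paths are assumptions and would need checking against the installed version.)

```python
# Hypothetical sketch of importing vLLM's V1 pieces instead of copying them.
# These module paths are assumptions based on vLLM's v1 package layout.
from vllm.v1.core.scheduler import Scheduler
from vllm.v1.worker.gpu_input_batch import InputBatch
```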
vllm_ascend/platform.py (Outdated)

```
        return int(physical_device_id)
    else:
        return device_id
# def _device_id_to_physical_device_id(device_id: int) -> int:
```
Do not comment out code. Please remove it if it's useless; otherwise, add a note here to explain why the code is commented out.
> How about just move the file to the right place and rename to something like `model_runner_v1.py`?

I think it's a good idea.
vllm_ascend/platform.py
Outdated
@@ -74,8 +77,9 @@ def get_device_capability(cls, device_id: int = 0): | |||
|
|||
@classmethod | |||
def get_device_name(cls, device_id: int = 0) -> str: | |||
physical_device_id = _device_id_to_physical_device_id(device_id) | |||
return torch.npu.get_device_name(physical_device_id) | |||
# physical_device_id = _device_id_to_physical_device_id(device_id) |
ditto
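For illustration, the cleaned-up method could simply be the following (a sketch based only on the hunk above):

```python
@classmethod
def get_device_name(cls, device_id: int = 0) -> str:
    # Commented-out _device_id_to_physical_device_id indirection removed
    # per review; use the device id as given.
    return torch.npu.get_device_name(device_id)
```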
vllm_ascend/platform.py (Outdated)

```
@@ -103,17 +107,35 @@ def mem_get_info(cls) -> Tuple[int, int]:

    @classmethod
    def check_and_update_config(cls, vllm_config: VllmConfig) -> None:
        compilation_config = vllm_config.compilation_config
        if compilation_config.level != CompilationLevel.NO_COMPILATION:
            logger.info("[NPU] Forcing NO_COMPILATION compilation level")
```
Please print the current `compilation_config.level` to the log as well, and change the log to warning level.
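A sketch of the suggested change, using the names from the hunk above (the exact message wording and the explicit level assignment are assumptions):

```python
@classmethod
def check_and_update_config(cls, vllm_config: VllmConfig) -> None:
    compilation_config = vllm_config.compilation_config
    if compilation_config.level != CompilationLevel.NO_COMPILATION:
        # Warn instead of info, and include the level being overridden.
        logger.warning(
            "[NPU] Compilation level %s is not supported, "
            "forcing NO_COMPILATION", compilation_config.level)
        compilation_config.level = CompilationLevel.NO_COMPILATION
```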
vllm_ascend/platform.py (Outdated)

```
            logger.warning("[V1][NPU] Disable prefix caching")
            cache_config.enable_prefix_caching = False

        assert not vllm_config.speculative_config, (
```
Speculative decoding works now for 0.7.3; we don't need this assertion in main, IMO.
vllm_ascend/v1/npu_attention.py (Outdated)

```
import torch

try:
```
No need for try/catch here; just `import torch_npu` is fine.
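That is, at the top of the file (a sketch):

```python
import torch
import torch_npu  # noqa: F401  # imported for its side effect of enabling torch.npu
```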
vllm_ascend/v1/npu_model_runner.py (Outdated)

```
logger = init_logger(__name__)


class NPUModelRunner(LoRAModelRunnerMixin):
```
Why inherit from `LoRAModelRunnerMixin`?
In vLLM V1, `LoRAModelRunnerMixin` is a base class for `GPUModelRunner`, but I see that `TPUModelRunner` doesn't extend this base class, so I think `NPUModelRunner` may not need to extend it either. I will modify this soon.
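A sketch of the resulting declaration (the constructor signature is an assumption, mirroring vLLM V1's `GPUModelRunner`):

```python
class NPUModelRunner:
    """V1 model runner for Ascend NPU, without the LoRA mixin."""

    def __init__(self, vllm_config: VllmConfig, device: torch.device):
        self.vllm_config = vllm_config
        self.device = device
```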
vllm_ascend/v1/npu_model_runner.py (Outdated)

```
            vocab_size=model_config.get_vocab_size(),
        )

        self.use_cuda_graph = (self.vllm_config.compilation_config.level
```
Please remove the CUDA-related code.
vllm_ascend/v1/npu_worker.py (Outdated)

```
    def init_device(self):
        if self.device_config.device.type == "npu":
            # # This env var set by Ray causes exceptions with graph building.
            # os.environ.pop("NCCL_ASYNC_ERROR_HANDLING", None)
```
Please remove the NCCL-related comments.
OK👌
What this PR does / why we need it?
Add support for V1 Engine.
Please note that this is just the initial version; some places may still need to be fixed or optimized in the future. Feel free to leave comments for us.
Does this PR introduce any user-facing change?
To use the V1 Engine on an NPU device, you need to set the environment variable shown below:
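(The variable itself is not preserved in this excerpt; presumably it is vLLM's standard `VLLM_USE_V1` switch. A sketch, set before `vllm` is imported:)

```python
import os

# Assumed: vLLM's standard V1-engine toggle; must be set before importing vllm.
os.environ["VLLM_USE_V1"] = "1"
```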
How was this patch tested?
I have tested online serving with `Qwen2.5-7B-Instruct` using this command, then queried the model with input prompts:
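(The exact commands are not preserved in this excerpt. The following is a hypothetical reconstruction of the query step, assuming the OpenAI-compatible server on the default port 8000 and the `requests` package installed:)

```python
import requests

# Hypothetical reconstruction of "query the model with input prompts".
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "prompt": "The future of AI is",
        "max_tokens": 64,
        "temperature": 0,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```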
The test logs are shown below: