Remove zero_point_domain from quant configs #2058
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2058
Note: Links to docs will display an error until the docs builds have been completed.
⏳ 8 Pending, 1 Unrelated Failure
As of commit 2542f5a with merge base 88cd9c7.
FLAKY - The following job failed but was likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Please provide some context in the PR summary on the motivation for the change, along with a summary of the changes in the PR.
@@ -370,7 +355,7 @@ def test_export_QDQLayout(self):
    "torch.ops.torchao.choose_qparams_affine.default(input_1, 'ASYMMETRIC', [1, 512], torch.int8, None, None, None, torch.float64, torch.int64)",
nit: I think the test should just be checking the ops unless strictly necessary; otherwise the test might be a bit fragile, since there is no guarantee on the names of intermediate values in the fx graph.
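For illustration, a minimal sketch of the suggested op-level check, assuming an `ExportedProgram` named `ep` (hypothetical name) from the export step: assert the op target is present in the graph instead of string-matching code that embeds generated value names like `input_1`.

```python
import torch
import torchao  # importing torchao registers the torch.ops.torchao namespace

# `ep` is assumed to be the ExportedProgram produced earlier in the test.
targets = {
    node.target
    for node in ep.graph_module.graph.nodes
    if node.op == "call_function"
}
assert torch.ops.torchao.choose_qparams_affine.default in targets
```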
zero_point_min = zero_point.min().item()
zero_point_max = zero_point.max().item()
if zero_point_min == 0 and zero_point_max == 0:
    has_weight_zeros = False
nit: it might be better to have a util that checks whether zero_point is all zeros, and use it everywhere to keep things consistent.
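A sketch of what such a util could look like (the helper name is hypothetical, not an existing torchao function):

```python
import torch

def _zero_point_is_all_zeros(zero_point: torch.Tensor) -> bool:
    # Hypothetical shared helper for the suggested check: True when every
    # zero_point entry is zero, i.e. the weights carry no zero point.
    return bool(torch.count_nonzero(zero_point) == 0)

# usage at the call site shown above
has_weight_zeros = not _zero_point_is_all_zeros(zero_point)
```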
@@ -156,48 +156,109 @@ def from_plain(
    zero_point: Optional[torch.Tensor],
    layout: Layout,
    bias: Optional[torch.Tensor] = None,
    *,
    validate_inputs: bool = True,
Why is this needed? I think ideally it should be enforced by choose_qparams, by construction.
You do not need to go through the quant config to construct a packed tensor; you can construct one directly with from_plain.
When going through the quant config, it sets validate_inputs=False in from_plain because the invariants are already enforced by construction. But if someone calls from_plain outside of that path, I wanted the inputs to be validated.
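A standalone sketch of the pattern being described (the specific checks are illustrative, not the actual torchao validation logic):

```python
import torch
from typing import Optional

def from_plain(
    int_data: torch.Tensor,
    scale: torch.Tensor,
    zero_point: Optional[torch.Tensor],
    *,
    validate_inputs: bool = True,
):
    # The quant-config path passes validate_inputs=False because
    # choose_qparams already guarantees these invariants by construction;
    # direct callers keep the default and get their inputs checked.
    if validate_inputs:
        assert scale.dtype.is_floating_point, "scale must be floating point"
        if zero_point is not None:
            assert zero_point.shape == scale.shape, (
                "zero_point and scale must have matching shapes"
            )
    # ... pack int_data / scale / zero_point into the layout ...
```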
OK I see
looks good, just left some nit comments
ZeroPointDomain will be removed from the quant primitives, so this PR removes it from the configs and instead relies on MappingType to convey similar information.
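For illustration, a sketch of a call to the updated primitive, based on the op string asserted in the test above (the Python-level signature may differ across torchao versions): the MappingType argument now conveys the symmetric/asymmetric intent that zero_point_domain used to carry.

```python
import torch
from torchao.quantization.quant_primitives import MappingType, choose_qparams_affine

x = torch.randn(1, 512)
# No zero_point_domain argument; the mapping type determines whether a
# zero point is needed at all.
scale, zero_point = choose_qparams_affine(
    x,
    MappingType.ASYMMETRIC,
    block_size=[1, 512],
    target_dtype=torch.int8,
)
```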