Adding test for CadenceWakeWordQuantizer #16356
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16356
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit 257c67e with merge base 6785e1f.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a …
Summary: As title. Differential Revision: D88898933
Pull request overview
This PR adds test coverage for four Cadence quantizers that were previously missing tests: CadenceWakeWordQuantizer, CadenceWith16BitConvActivationsQuantizer, CadenceWithLayerNormQuantizer, and CadenceWithSoftmaxQuantizer. The TODO comments for these quantizers are removed from the exclusion list.
Key changes:
- Adds five new test cases to QUANTIZER_ANNOTATION_TEST_CASES covering conv1d, conv2d, softmax, layer_norm, and add operations
- Implements five new graph builder helper methods for constructing test graphs for each operation type (a hedged sketch of one such helper follows this list)
- Removes four quantizers from the EXCLUDED_FROM_ANNOTATION_TESTING set
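The graph-builder helpers themselves aren't visible in this view. As a rough picture of what one might do, here is a minimal sketch assuming a plain torch.export flow; the names AddModule and build_add_graph are illustrative, not the PR's actual _build_add_graph or GraphBuilder utilities.

```python
import torch

# Illustrative only: the PR's real helper and builder API may differ.
class AddModule(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # x + y lowers to torch.ops.aten.add.Tensor, the op the test annotates.
        return x + y

def build_add_graph() -> torch.fx.GraphModule:
    # Export a tiny module so the quantizer has an aten.add node to annotate.
    example_inputs = (torch.randn(1, 16), torch.randn(1, 16))
    exported = torch.export.export(AddModule(), example_inputs)
    return exported.module()
```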
```python
(
    "add_A8W8",
    lambda self: self._build_add_graph(),
    CadenceWakeWordQuantizer(),
    torch.ops.aten.add.Tensor,
    qconfig_A8W8.output_activation,
    # For add: both inputs are activations
    [qconfig_A8W8.input_activation, qconfig_A8W8.input_activation],
),
```
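For context, here is a hedged sketch of how an entry like this could be consumed by the annotation test, assuming the standard PT2E convention that quantizers annotate nodes in place via node.meta["quantization_annotation"]; check_annotation is an illustrative name, not the PR's actual harness.

```python
def check_annotation(test_self, test_case) -> None:
    # Unpack the tuple fields in the order shown above.
    name, build_graph, quantizer, target_op, out_qspec, in_qspecs = test_case
    gm = build_graph(test_self)  # e.g. invokes self._build_add_graph()
    quantizer.annotate(gm)       # PT2E quantizers annotate the graph in place
    annotated = [n for n in gm.graph.nodes if n.target == target_op]
    assert annotated, f"{name}: no {target_op} node found"
    for node in annotated:
        ann = node.meta.get("quantization_annotation")
        assert ann is not None, f"{name}: {target_op} was not annotated"
```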
Copilot AI · Dec 21, 2025:
The CadenceWakeWordQuantizer includes both AddPattern and CatPattern (for add and cat operations), but this test only covers the add operation. Consider adding a test case for the cat operation to ensure complete coverage of CadenceWakeWordQuantizer's functionality.
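A sketch of what that cat coverage could look like, mirroring the add entry above; the _build_cat_graph helper and the aten.cat.default target are assumptions inferred from the existing tuple layout, not code from this PR.

```python
# Hypothetical companion entry for QUANTIZER_ANNOTATION_TEST_CASES; assumes
# the same file-level names (CadenceWakeWordQuantizer, qconfig_A8W8) as the
# add case and an analogous _build_cat_graph helper.
(
    "cat_A8W8",
    lambda self: self._build_cat_graph(),  # assumed helper, mirrors add
    CadenceWakeWordQuantizer(),
    torch.ops.aten.cat.default,
    qconfig_A8W8.output_activation,
    # For cat: every concatenated input is an activation.
    [qconfig_A8W8.input_activation, qconfig_A8W8.input_activation],
),
```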
Summary: Add annotation tests for CadenceWith16BitConvActivationsQuantizer covering both conv1d and conv2d operations. Differential Revision: D88895865
Differential Revision: D88896712
Differential Revision: D88898823
Summary: Pull Request resolved: pytorch#16356. As title.
Reviewed By: hsharma35
Differential Revision: D88898933
Force-pushed from 0db3ba5 to 257c67e