
Conversation

@winskuo-quic
Collaborator

Summary

  • AOT Debug Handle Enablement: also support the debug_handle map, although the debugger did not use it. Enabled in response to a community request.
  • Runtime Debug Handle Enablement: parse debug handles in the runtime and use the debug_handle and tensors keys when storing results into etdump.
  • Reuse the ExecuTorch Debugger feature and reduce redundancy between the QNN ExecuTorch Debugger and the ExecuTorch Debugger utils.

Additional Topics:

  • What is the official way of retrieving an edge module that does not carry backend info?

Test plan

  • E2E example script test
    • python backends/qualcomm/tests/test_qnn_delegate.py -k TestExampleUtilsScript.test_intermediate_debugger -s $DEVICE --model SM8650 --build_folder build-android/ --executorch_root . --image_dataset ../imagenet-mini/val/ --artifact ./e2e_test_debug
  • Simple model test
    • python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_simple_model --model SM8550 --device $DEVICE --build_folder build-android
    • python backends/qualcomm/tests/test_qnn_delegate.py -k TestQNNQuantizedUtils.test_qnn_backend_dump_intermediate_outputs_topk --model SM8550 --device $DEVICE --build_folder build-android

@pytorch-bot

pytorch-bot bot commented Dec 18, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16316

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 00c4f7e with merge base 3233761:

NEW FAILURE - The following job has failed:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Dec 18, 2025
@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@winskuo-quic winskuo-quic marked this pull request as draft December 18, 2025 09:51
@Gasoonjia
Contributor

Is the PR ready to be reviewed now?

For any passes executed during qnn_preprocess, users will need to handle the debug_handle ID themselves.
Description: During pass transformations, some passes might copy a node's meta when creating a new node,
which means multiple nodes might share the same debug_handle ID when they shouldn't.
Contributor

I don't quite understand here: if several nodes come from one ancestor node (e.g., doing decomposition on some op), they should have the same debug handle for tracing.

Collaborator Author

I think the idea is that if we decompose the node but never assign a new handle ID, we are only saving the information for the last decomposed node rather than all decomposed nodes. I have drawn an example below. Since edge and QNN have a 1-to-1 mapping in this case, I think it would be better to gather all possible information rather than only the last node's debug info. Since we reassign graph_handle, instead of only getting the output of node2, we can also get info for node1.
[diagram: decomposed nodes node1 and node2 and their reassigned debug handles]

Contributor

Here I'm a little confused: when we see the QNN graph, how can we know that qnn_node_1 and qnn_node_2 come from the same super node? Or another question might be: which graph will serve as the ground-truth graph when you do the intermediate comparison?

> gather all possible information rather than the last node's debug info.

We won't gather only the last node's debug info, but all info.

In ExecuTorch we normally follow this rule:
if we transform {old_node_1, old_node_2, ..., old_node_n} into {new_node_1, new_node_2, ..., new_node_m}, where n and m can be arbitrary numbers starting from 1, then every new_node should have the same debug handle, and that debug handle will be set(old_node_1.debug_handle, old_node_2.debug_handle, ..., old_node_n.debug_handle).

You can see that if n is 1, this transform is an operator decomposition; if m is 1, it is an operator fusion, etc.

In this way, whenever we see an arbitrary new_node, we will know its ancestors.

Not sure if that makes sense to you?
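
As a minimal sketch of this rule (names are illustrative; it assumes handles are stored under a "debug_handle" meta key, possibly already as sets from earlier transforms):

```python
def propagate_debug_handles(old_nodes, new_nodes, key="debug_handle"):
    # Union the handles of all ancestor nodes; decomposition (n = 1)
    # and fusion (m = 1) fall out as special cases.
    merged = set()
    for old in old_nodes:
        handle = old.meta.get(key)
        if handle is None:
            continue
        # A handle may already be a set produced by an earlier transform.
        merged |= handle if isinstance(handle, set) else {handle}
    # Every new node gets the same merged handle set, so any new node
    # can be traced back to its ancestors in the source graph.
    for new in new_nodes:
        new.meta[key] = merged
```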

@winskuo-quic winskuo-quic marked this pull request as ready for review December 19, 2025 01:36
@winskuo-quic winskuo-quic force-pushed the dev1/winskuo/debug_handle branch 2 times, most recently from 9a7ca59 to dc72614 Compare December 19, 2025 01:45
@winskuo-quic
Collaborator Author

> Is the PR ready to be reviewed now?

Hi @Gasoonjia,
I was rebasing previously, so I set it to draft. This PR should now be ready for review. Thanks.

@winskuo-quic
Collaborator Author

Hi @cccclai, @Gasoonjia, @kimishpatel,
I have added debug_handle support in this PR, which was mentioned in #5310 and #15735.
We will now use ExecuTorch's official Intermediate_Output_Capturer to capture the CPU's intermediate results.

Also, I would like some suggestions on the official API for retrieving an edge IR. The current way of retrieving an edge IR is through:

edge_module = lower_module.original_module.module()

However, I encountered the following issues when retrieving the edge IR using the above method.

  1. If there are partitions, I'll get a graph that fuses the supported nodes into delegate node(s). However, it would be helpful for debugging if we could get the edge IR graph that does not fuse the backend-supported nodes into delegate node(s).
  2. I noticed that with the edge IR graph above, the input order might have changed. This can be easily reproduced with a model that has more than one input (e.g., Roberta). I would like to know if there's any way to get a graph with the correct input order.

Thanks

@Gasoonjia
Contributor

hi @winskuo-quic

I think instead of using the edge graph IR as the ground truth for comparison, it would be great if we could use the exported program the ET stack gets in the first place (e.g., the export graph of the model variable here), since that should be the source graph the ET stack takes, and our job is to make sure our intermediate output matches the input graph as much as possible.

You can see how we calculate the intermediate output numerical discrepancy in calculate_numeric_gap.

Here's the pass for debug handle generation: https://github.com/pytorch/executorch/blob/0fb422f9c59e0e5526c0082352a583baf0510fb7/exir/passes/debug_handle_generator_pass.py. The debug handle of a node is the same as that of nodes sharing the same greatest ancestor node in the export flow.

@Gasoonjia
Contributor

Gasoonjia commented Dec 19, 2025

Here's an example of how our current API works on VIT model on xnnpack backend: https://gist.github.com/Gasoonjia/db6285ac39ad5759b95c7a92d37cd4f8

and below is the expected output. For some ops like layernorm there are still some issues I need to fix.

| idx | aot_ops | aot_intermediate_output | runtime_ops | runtime_intermediate_output | gap |
| --- | --- | --- | --- | --- | --- |
| 0 | [conv2d] | [[[tensor(0.0253, -0.0287, -0.0042, 0.0118, …)]] | [DELEGATE_CALL] | [[[tensor(0.0253, -0.0287, -0.0042, 0.0118, …)]] | 3.2825094945114346e-15 |
| 1 | [permute, cat, add, dropout] | [[[tensor(-0.0024), tensor(0.0054), tensor(0.0…), …]] | [DELEGATE_CALL] | [[[tensor(-0.0024), tensor(0.0054), tensor(0.0…), …]] | 3.281230918554512e-15 |
| 2 | [expand] | [[[tensor(-0.0012), tensor(0.0027), tensor(0.0…), …]] | [native_call_expand_copy.out] | [[[tensor(-0.0012), tensor(0.0027), tensor(0.0…), …]] | 0.0 |
| 3 | [layer_norm] | [[[tensor(-0.0001), tensor(0.0009), tensor(-0.…), …]] | [native_call_native_layer_norm.out] | [[[tensor(31.1172)], [tensor(4.3549)], [tensor(…)…]] | 19.7299543374596 |
| 4 | [transpose, linear, unflatten, …, transpose] | [[[tensor(0.0027), tensor(-0.0032), tensor(0.0…), …]] | [DELEGATE_CALL, DELEGATE_CALL, DELEGATE_…] | [[[tensor(0.0027), tensor(-0.0032), tensor(0.0…), …]] | 9.381436078525961e-05 |
| … | … | … | … | … | … |
| 61 | [layer_norm_23] | [[[tensor(-0.8604), tensor(-0.1713), tensor(-0.…),…]] | [native_call_native_layer_norm.out] | [[[tensor(2.2180)], [tensor(1.8462)], [tensor(…)…]] | 2.8061147356332854 |
| 62 | [linear_46, gelu_11, dropout_35, …, dropout_36] | [[[tensor(-0.6561), tensor(-0.0496), tensor(-0.…),…]] | [DELEGATE_CALL] | [[[tensor(-0.6561), tensor(-0.0496), tensor(-0.…),…]] | 1.0872256686587983e-11 |
| 63 | [layer_norm_24] | [[[tensor(-0.9040), tensor(-0.1004), tensor(-0.…),…]] | [native_call_native_layer_norm.out] | [[[tensor(1.9138)], [tensor(1.9031)], [tensor(…)…]] | 3.104443617092582 |
| 64 | [select_36] | [[tensor(-0.9040), tensor(-0.1004), tensor(-0.…),…]] | [native_call_select_copy.int_out] | [[tensor(-0.9040), tensor(-0.1004), tensor(-0.…),…]] | 1.1178469901123618e-12 |
| 65 | [linear_48] | [[tensor(-0.9624), tensor(0.7285), tensor(0.79…),…]] | [DELEGATE_CALL] | [[tensor(-0.9624), tensor(0.7285), tensor(0.79…),…]] | 1.7864835786911282e-12 |

I would love to chat with you about how we can make the pipeline work on the Qualcomm backend!
Hope it helps!

@winskuo-quic winskuo-quic force-pushed the dev1/winskuo/debug_handle branch from dc72614 to 00c4f7e Compare December 19, 2025 05:25
@winskuo-quic winskuo-quic marked this pull request as draft December 19, 2025 05:26
@winskuo-quic
Collaborator Author

Hi @Gasoonjia,
I have turned this PR back to draft for now.
I would love to learn more about ExecuTorch's Debugger Framework; let's move this conversation to email first.
I would also love to chat with you to discuss in more detail.

Contributor

@Gasoonjia Gasoonjia left a comment

Thanks for the work!

import operator

import torch
from executorch.backends.qualcomm.utils.constants import QCOM_DEBUG_HANDLE
Contributor

I would love to directly reuse DEBUG_HANDLE_KEY from ExecuTorch (https://github.com/pytorch/executorch/blob/main/exir/passes/debug_handle_generator_pass.py#L10) to make sure that we are working on the same item.
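
A minimal sketch of that suggestion (the import path below reflects where the linked pass pulls DEBUG_HANDLE_KEY from on current main; treat the exact path as an assumption):

```python
import torch

# Assumed import path for the shared key used by ExecuTorch's debug handle pass.
from executorch.exir.debug_handle_utils import DEBUG_HANDLE_KEY


def tag_nodes_with_shared_key(gm: torch.fx.GraphModule) -> None:
    # Write handles under the shared ExecuTorch key rather than a
    # backend-specific QCOM_DEBUG_HANDLE, so AOT and runtime tooling
    # agree on where the handle lives.
    for i, node in enumerate(gm.graph.nodes, start=1):
        node.meta[DEBUG_HANDLE_KEY] = i
```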

def call(self, graph_module: torch.fx.GraphModule):
    handle_counter = 1
    visited = set()
    for node in graph_module.graph.nodes:
Contributor

I'm not sure whether Qualcomm can handle conditional graphs. If so, I think the way you are adding debug handles might not be able to equip all branches with debug handles. You can follow what I'm doing here:
https://github.com/pytorch/executorch/blob/main/exir/passes/debug_handle_generator_pass.py#L14
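
A minimal sketch of the concern (illustrative helper, not the pass in this PR): control-flow branches live in child GraphModules, so iterating only over graph_module.graph.nodes never tags their nodes.

```python
import torch

from executorch.backends.qualcomm.utils.constants import QCOM_DEBUG_HANDLE


def assign_handles_recursively(gm: torch.fx.GraphModule, counter: int = 1) -> int:
    for node in gm.graph.nodes:
        node.meta[QCOM_DEBUG_HANDLE] = counter
        counter += 1
    # Recurse into child GraphModules so the branches of conditional
    # ops (e.g., cond/while submodules) also receive handles.
    for _, child in gm.named_children():
        if isinstance(child, torch.fx.GraphModule):
            counter = assign_handles_recursively(child, counter)
    return counter
```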

assert (
    source_node.name in visited
), "Graph is not traversed in topological order, unexpected behavior."
node.meta[QCOM_DEBUG_HANDLE] = source_node.meta[QCOM_DEBUG_HANDLE]
Contributor

Curious why we need to set the get_item node to the same debug handle as the source node? It will introduce duplicate debug handles in the graph, and I'm a little worried it could cause issues downstream.

tensor_name = f"{node.name}_{wrapper_idx}"

# Only append the special naming when tensor dump is enabled, since longer names result in a bigger .pte
if (handle_id := node.meta.get(QCOM_DEBUG_HANDLE)) and self.enable_tensor_dump:
Contributor

Wondering if we still need this file, since we will migrate to the devtools infra?

This class serves as an intermediate point and is inserted right after the call_function node.
It also saves some metadata such as scale, offset, etc.
Since we just want to check the intermediate output, we will directly return the value during the forward call.
class QNNIntermediateDebugger:
Contributor

I think we can change or update the class target; from the comments, it plays the same role as Inspector.calculate_numeric_gap().
