Qualcomm AI Engine Direct - Support Debug Handle and Integrate IntermediateOutputCapturer #16316
New file, 44 lines added (`@@ -0,0 +1,44 @@`):

```python
# Copyright (c) Qualcomm Innovation Center, Inc.
# All rights reserved
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import operator

import torch
from executorch.backends.qualcomm.utils.constants import QCOM_DEBUG_HANDLE
from executorch.exir.pass_base import ExportPass, PassResult


class ResolveDebugHandle(ExportPass):
    """
    Caution: This pass is executed as the last of the edge_passes.
    For any passes executed during qnn_preprocess, users will need to handle the debug_handle ID themselves.
    Description: During pass transformations, some passes may copy a node's meta when creating a new node,
    which means multiple nodes could end up sharing the same debug_handle ID when they should not.
```
Contributor
I don't quite understand this part: if several nodes come from one ancestor node (e.g., from decomposing an op), they should have the same debug handle for tracing.

Collaborator (Author)
I think the idea is that if we decompose the node but never assign new handle IDs, we only save the information for the last decomposed node rather than for all decomposed nodes. I have drawn an example below. Since edge and QNN have a 1-to-1 mapping in this case, I think it is better to gather all possible information rather than only the last node's debug info. Since we reassign the debug handle, instead of only getting the output of node2, we can also get the info for node1.

Contributor
Here I'm a little confused: when we look at the QNN graph, how can we know that qnn_node_1 and qnn_node_2 come from the same super node? Or, put differently, which graph acts as the ground truth when you do the intermediate comparison? We won't gather only the last node's debug info, but all of it. In ExecuTorch we normally follow this rule: if n is 1, the transform is an operator decomposition; if m is 1, the transform is an operator fusion, and so on. That way, whenever we see an arbitrary new_node, we know its ancestor. Not sure if that makes sense to you?
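The drawn example referenced above is not reproduced in this view; below is a rough stand-in (module and meta-key names are made up, not from the PR) showing how a meta-copying decomposition leaves two new nodes sharing one debug-handle ID:

```python
import torch
from torch.fx import symbolic_trace


class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x * 2.0 + 1.0)


gm = symbolic_trace(M())
mul, add, relu = (n for n in gm.graph.nodes if n.op == "call_function")

# Pretend `mul` and `add` came from decomposing one ancestor op whose meta
# (debug handle 7) was copied onto both new nodes; the real code stores this
# under QCOM_DEBUG_HANDLE, a plain string key is used here only for illustration.
mul.meta["qcom_debug_handle"] = 7
add.meta["qcom_debug_handle"] = 7
relu.meta["qcom_debug_handle"] = 8

# The intermediate debugger keys captured outputs by this ID, so only one of
# the two tensors tagged "7" would be retrievable; ResolveDebugHandle instead
# reassigns unique IDs in topological order.
print([(n.name, n.meta.get("qcom_debug_handle")) for n in gm.graph.nodes])
```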
```python
    This is critical, as the Intermediate Debugger uses the debug handle as its key.
    debug_handle IDs must be resolved so each op gets its own debug_handle ID and intermediate output.
    """

    def __init__(self):
        super(ResolveDebugHandle, self).__init__()

    def call(self, graph_module: torch.fx.GraphModule):
        handle_counter = 1
        visited = set()
        for node in graph_module.graph.nodes:
```
Contributor
Not sure if Qualcomm can handle conditional graphs. If so, I think the way you are adding debug handles might not equip all branches with a debug handle. You can follow what I'm doing here:
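The link above did not survive in this view. As a hedged sketch (the helper name and meta key are hypothetical, not from the linked code), one way to reach nodes inside control-flow branches is to walk every nested GraphModule rather than only the top-level graph:

```python
import torch


def assign_debug_handles(gm: torch.fx.GraphModule, key: str = "qcom_debug_handle") -> int:
    """Tag call_function nodes in gm and in every nested GraphModule
    (e.g. torch.cond branch submodules) with increasing handle IDs."""
    counter = 1
    for module in gm.modules():  # yields gm itself plus all submodules
        if not isinstance(module, torch.fx.GraphModule):
            continue
        for node in module.graph.nodes:
            if node.op == "call_function":
                node.meta[key] = counter
                counter += 1
    return counter - 1
```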
```python
            # Assume node is traversed in topological order; adding a check here to be safe.
            if node.target == operator.getitem:
                source_node = node.args[0]
                assert (
                    source_node.name in visited
                ), "Graph is not traversed in topological order, unexpected behavior."
                node.meta[QCOM_DEBUG_HANDLE] = source_node.meta[QCOM_DEBUG_HANDLE]
```
Contributor
Curious why we need to set the getitem node to the same debug handle as the source node. It introduces duplicate debug handles in the graph, and I'm a little worried it could cause issues downstream.
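For context on the branch above, a small sketch (not part of the diff): multi-output ops such as topk appear in the FX graph as one call_function node followed by operator.getitem projections, which is presumably why the getitem nodes inherit the source node's handle and are told apart by wrapper_idx at dump time.

```python
import torch
from torch.fx import symbolic_trace


class TopK(torch.nn.Module):
    def forward(self, x):
        out = torch.topk(x, k=2)
        # Indexing the multi-output result creates operator.getitem nodes.
        return out[0], out[1]


gm = symbolic_trace(TopK())
# Expect one topk call_function node plus two operator.getitem projections.
print([(n.op, str(n.target)) for n in gm.graph.nodes])
```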
```python
            elif node.op == "call_function":
                node.meta[QCOM_DEBUG_HANDLE] = handle_counter
                handle_counter += 1
            visited.add(node.name)

        graph_module.recompile()
        return PassResult(graph_module, True)
```
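A hypothetical usage sketch (the toy module and the direct invocation are assumptions, not from this PR; in practice the pass would run as the last edge pass in the Qualcomm pipeline, and ResolveDebugHandle from the new file above is assumed to be in scope):

```python
import torch
from torch.fx import symbolic_trace
from executorch.backends.qualcomm.utils.constants import QCOM_DEBUG_HANDLE


class Add(torch.nn.Module):
    def forward(self, x, y):
        return torch.relu(x + y)


gm = symbolic_trace(Add())
result = ResolveDebugHandle()(gm)  # invoking the pass runs its call() override
for node in result.graph_module.graph.nodes:
    print(node.name, node.meta.get(QCOM_DEBUG_HANDLE))
```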
Second file in the diff:

```diff
@@ -19,6 +19,7 @@
     QCOM_BLOCK_SCALE_OFFSET,
     QCOM_BLOCK_SCALES,
     QCOM_BLOCK_STORAGE_TYPE,
+    QCOM_DEBUG_HANDLE,
     QCOM_DTYPE,
     QCOM_ENCODING,
     QCOM_NUM_BLOCKS_PER_AXIS,
@@ -30,7 +31,6 @@
     QCOM_SCALE,
     QCOM_SCALE_OFFSET,
     QCOM_SCALES,
-    QCOM_TENSOR_NAME,
     QCOM_ZERO_POINT,
     QCOM_ZERO_POINTS,
 )
@@ -377,6 +377,11 @@ def get_tensor_name(
         wrapper_idx: int = 0,
     ):
         tensor_name = f"{node.name}_{wrapper_idx}"
+
+        # Only append special namings when tensor dump is enabled, since a longer name results in a bigger .pte
+        if (handle_id := node.meta.get(QCOM_DEBUG_HANDLE)) and self.enable_tensor_dump:
```
Contributor
Wondering if we still need this file, since we will migrate to the devtool infra?
```diff
+            tensor_name = f"{tensor_name}_debugID_{str(handle_id)}"
+
         # The `input_{id}` is utilized for sorting at runtime. Due to multiple passes in qnn_preprocess,
         # the input order between QNN and the original graph’s forward function may differ.
         # The `mutbuf_{id}` is utilized for mapping I/O of mutable buffer at runtime.
```
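A minimal sketch of the naming scheme the hunk above introduces (node name and handle value are made up); the debug-ID suffix is only appended when tensor dump is enabled, since longer tensor names grow the .pte:

```python
node_name = "aten_add_tensor"  # assumed example node name
wrapper_idx = 0
handle_id = 12                 # assumed handle assigned by ResolveDebugHandle
enable_tensor_dump = True

tensor_name = f"{node_name}_{wrapper_idx}"
if handle_id and enable_tensor_dump:
    tensor_name = f"{tensor_name}_debugID_{handle_id}"

print(tensor_name)  # aten_add_tensor_0_debugID_12
```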
@@ -397,12 +402,6 @@ def get_tensor_name( | |
| elif is_graph_output(node): | ||
| tensor_name = f"output_{tensor_name}" | ||
|
|
||
| # Save this for intermediate debugger | ||
| # Needs idx since node like topk has 2 outputs | ||
| if QCOM_TENSOR_NAME in node.meta: | ||
| node.meta[QCOM_TENSOR_NAME][wrapper_idx] = tensor_name | ||
| else: | ||
| node.meta[QCOM_TENSOR_NAME] = {wrapper_idx: tensor_name} | ||
| return tensor_name | ||
|
|
||
| def define_custom_tensor_wrapper( | ||
|
|
@@ -465,7 +464,6 @@ def define_tensor( | |
|
|
||
| if cached := nodes_to_wrappers[node_name].get(wrapper_idx, None): | ||
| return cached | ||
|
|
||
| tensor_name = self.get_tensor_name(tensor_source_node, wrapper_idx) | ||
| dims = torch.Size([1]) if len(tensor.size()) == 0 else tensor.size() | ||
| dynamic_dims, nominal_dims = self.get_dynamic_dimension(dims) | ||
|
|
||

I would love to directly reuse DEBUG_HANDLE_KEY from ExecuTorch (https://github.com/pytorch/executorch/blob/main/exir/passes/debug_handle_generator_pass.py#L10) to make sure that we are working on the same item.
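A sketch of that suggestion, not part of this PR; the import path is assumed from the linked file and may differ between ExecuTorch versions:

```python
from executorch.exir.debug_handle_utils import DEBUG_HANDLE_KEY  # typically "debug_handle"


def tag_node(node, handle_id: int) -> None:
    # Writing to the shared DEBUG_HANDLE_KEY instead of a backend-specific
    # QCOM_DEBUG_HANDLE would keep the Qualcomm pass keyed on the same meta
    # entry that the devtool intermediate-output tooling reads.
    node.meta[DEBUG_HANDLE_KEY] = handle_id
```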