Cortex-M: Add depthwise conv2d operator #16233
base: main
Conversation
🔗 Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16233.
Note: links to docs will display an error until the docs builds have completed. ✅ Mergeable as of commit c795646 with merge base b8916b7 (1 unrelated failure; one job is marked unstable, possibly due to flakiness on trunk).
Force-pushed db21fc0 to 3de2c83 (compare).
Add quantized depthwise convolution operator for the Cortex-M backend using CMSIS-NN's optimized `arm_depthwise_conv_wrapper_s8` function.

Key changes:
- New `op_quantized_depthwise_conv2d.cpp` with CMSIS-NN implementation
- Python operator registration in `operators.py` with a reference implementation
- Operator schema definition in `operators.yaml`
- Updated `ConvertToCortexMPass` to automatically detect and route depthwise convolutions (where groups == input_channels) to the specialized operator
- Comprehensive test coverage with 5 test cases covering different depthwise convolution scenarios (stride, padding, bias, depth multiplier)

The implementation validates the depthwise constraint (groups must equal input channels) and supports NHWC layout, int8 quantization, per-channel requantization, and configurable stride/padding/dilation parameters.
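The detect-and-route rule described above can be sketched as follows. This is an illustrative reduction of the pass's logic to plain channel/group arithmetic; the function names are hypothetical, and the real pass operates on FX graph nodes rather than bare integers.

```python
def is_depthwise_conv2d(in_channels: int, out_channels: int, groups: int) -> bool:
    # Depthwise: every input channel forms its own group (groups == in_channels),
    # and each input channel maps to a whole number of output channels.
    return groups == in_channels and out_channels % in_channels == 0


def select_cortex_m_op(in_channels: int, out_channels: int, groups: int) -> str:
    # Route depthwise convolutions to the specialized CMSIS-NN-backed operator;
    # everything else stays on the standard quantized conv2d path.
    if is_depthwise_conv2d(in_channels, out_channels, groups):
        return "quantized_depthwise_conv2d"
    return "quantized_conv2d"
```

A depthwise conv with a depth multiplier (e.g. 8 input channels, 16 output channels, groups=8) also takes the specialized path under this rule.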
…lidations

Key changes:
- Move depth_multiplier calculation from runtime to the AOT pass (eliminates runtime division by computing depth_multiplier = output_channels / input_channels in the graph transformation pass)
- Add critical defensive validations in `validate_depthwise_conv2d_arguments()`:
  * Validate IHWO weight layout (dimension 0 must be 1)
  * Validate dilation == 1 (CMSIS-NN constraint)
  * Validate depth_multiplier consistency with channel counts
- Fix CMSIS-NN API usage:
  * Use `arm_depthwise_conv_wrapper_s8_get_buffer_size()` with correct parameters
  * Improve buffer allocation error handling with detailed error messages
- Add `_compute_depthwise_conv2d_output_shape()` to read channels from the correct dimension (dim 3 for IHWO layout vs dim 0 for OHWI)
- Update the operator schema to use a depth_multiplier parameter instead of groups

This ensures proper validation of CMSIS-NN constraints and moves computation to compile time where possible.
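The compile-time depth_multiplier computation and the consistency check it enables might look like this. A minimal sketch with illustrative names; the actual checks live in `validate_depthwise_conv2d_arguments()` and operate on graph metadata.

```python
def compute_depth_multiplier(in_channels: int, out_channels: int) -> int:
    # Computed once in the AOT pass so the runtime never has to divide:
    # depth_multiplier = output_channels / input_channels.
    if in_channels <= 0 or out_channels % in_channels != 0:
        raise ValueError(
            f"out_channels ({out_channels}) must be a positive multiple of "
            f"in_channels ({in_channels}) for a depthwise conv"
        )
    return out_channels // in_channels


def validate_depth_multiplier(
    depth_multiplier: int, in_channels: int, out_channels: int
) -> None:
    # Defensive check: the multiplier baked into the graph must still agree
    # with the channel counts seen later.
    if depth_multiplier * in_channels != out_channels:
        raise ValueError("depth_multiplier inconsistent with channel counts")
```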
CMSIS-NN's `arm_depthwise_conv_wrapper_s8` only supports batch size 1. Add validation in both the AOT pass (fail during compilation) and the runtime (defensive check).

Add 6 test cases covering edge cases:
- Combined stride/padding/bias
- 1x1 kernels (common in mobile networks)
- Higher depth_multiplier (4)
- Asymmetric kernels (1x3)
- Asymmetric stride/padding
- Larger kernels (5x5)

Fix the depthwise_conv2d_stride test to use batch size 1.
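The AOT side of the batch-size constraint reduces to a shape check on the NHWC input; a hedged sketch (function name is illustrative, not the pass's actual API):

```python
def check_batch_size_is_one(input_shape: tuple) -> None:
    # CMSIS-NN's arm_depthwise_conv_wrapper_s8 only supports batch size 1.
    # The input is NHWC, so the batch dimension is index 0.
    batch = input_shape[0]
    if batch != 1:
        raise ValueError(
            f"quantized_depthwise_conv2d requires batch size 1, got {batch}"
        )
```

Failing here at compile time surfaces the constraint early; the runtime check remains as a defensive backstop.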
Force-pushed 3de2c83 to 577364c (compare).
mansnils left a comment
Thanks for this, @rascani! It looks good, just a couple of comments.
```python
# and groups == input_channels (groups > 1)
is_depthwise = weight_tensor.shape[1] == 1 and groups > 1
...
if is_depthwise:
```
Here we actually have the benefit of choosing between a regular and a DW conv. It is likely but not certain that the unoptimized CMSIS-NN DW conv, or the one without any SIMD, is less efficient than the corresponding CMSIS-NN conv. We won't know exactly until we measure. For now we could add something like this, with a TODO comment:
```python
optimal_dw_conv_constraints = (
    in_channels == out_channels and dilation == [1, 1]
) or in_channels == 1
```
Good idea. I tried adding these constraints, which resulted in falling back to conv2d for some of the test cases. Unfortunately, that also resulted in some output mismatches. I filed #16347 to revisit this and left a TODO to improve the smarts here.
Two fixes for the Cortex-M convolution pass:

1. Fix depthwise detection: check `groups == in_channels` instead of `weight_tensor.shape[1] == 1 and groups > 1`. The previous logic failed for Conv2d(1, 4, 3), where in_channels == groups == 1.
2. Route to standard conv when batch_size > 1: CMSIS-NN's `arm_depthwise_conv_wrapper_s8` falls back to an unoptimized implementation for batch_size != 1, so check upfront and use the standard conv instead of throwing ValueError.

Added a `_get_batch_size_from_conv` helper that safely extracts the batch size from the convolution node's output shape metadata. This is more reliable than accessing input node metadata, which may be unavailable during multi-pass transformations.

Fixes:
- conv2d_dilation: batch_size=3 with depthwise conv now routes to standard conv
- conv2d_x3: KeyError accessing input metadata in multi-conv chains
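A sketch of what such a helper could look like when reading the batch size from the node's own output metadata rather than from its inputs. FX nodes carry their output's FakeTensor in `node.meta["val"]`; the `SimpleNamespace` stand-ins below replace real FX nodes purely for illustration.

```python
from types import SimpleNamespace
from typing import Optional


def get_batch_size_from_conv(node) -> Optional[int]:
    # Prefer the conv node's own output shape metadata (meta["val"]);
    # input-node metadata may be missing partway through multi-pass
    # transformations (the source of the KeyError in multi-conv chains).
    val = node.meta.get("val")
    if val is None or not hasattr(val, "shape") or len(val.shape) == 0:
        return None
    return int(val.shape[0])


# Stand-ins for FX nodes, for illustration only.
conv_node = SimpleNamespace(meta={"val": SimpleNamespace(shape=(1, 16, 16, 8))})
node_without_meta = SimpleNamespace(meta={})
```

Returning `None` instead of raising lets the pass fall back to the standard conv path when the metadata is unavailable.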
psiddh left a comment
lgtm
Summary
Add quantized depthwise convolution operator for the Cortex-M backend using CMSIS-NN's optimized arm_depthwise_conv_wrapper_s8 function.
Fixes #16105
Test plan