
Conversation

@rascani rascani (Contributor) commented Dec 12, 2025

Summary

Add quantized depthwise convolution operator for the Cortex-M backend using CMSIS-NN's optimized arm_depthwise_conv_wrapper_s8 function.

Fixes #16105

Test plan

./backends/cortex_m/test/build_test_runner.sh
pytest --config-file=backends/arm/test/pytest.ini backends/cortex_m/test/ops/test_conv.py

@rascani rascani added the release notes: none label Dec 12, 2025
pytorch-bot bot commented Dec 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/16233

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit c795646 with merge base b8916b7:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed label Dec 12, 2025
@rascani rascani force-pushed the cmsis_depthwise_conv branch from db21fc0 to 3de2c83 on December 15, 2025 at 19:48
RJ Ascani added 7 commits December 15, 2025 11:49
Add quantized depthwise convolution operator for the Cortex-M backend
using CMSIS-NN's optimized arm_depthwise_conv_wrapper_s8 function.

Key changes:
- New op_quantized_depthwise_conv2d.cpp with CMSIS-NN implementation
- Python operator registration in operators.py with reference implementation
- Operator schema definition in operators.yaml
- Updated ConvertToCortexMPass to automatically detect and route depthwise
  convolutions (where groups == input_channels) to the specialized operator
- Comprehensive test coverage with 5 test cases covering different
  depthwise convolution scenarios (stride, padding, bias, depth multiplier)

The implementation validates the depthwise constraint (groups must equal
input channels) and supports NHWC layout, int8 quantization, per-channel
requantization, and configurable stride/padding/dilation parameters.
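
As a quick orientation for reviewers, a minimal Python sketch of the routing rule this commit message describes is below; the helper name and operator strings are assumptions based on the files listed above, not the exact ConvertToCortexMPass code.

def _route_conv(groups: int, input_channels: int) -> str:
    # Hedged sketch only: a conv is depthwise when each input channel gets its
    # own filter group, i.e. groups == input_channels.
    if groups == input_channels:
        return "cortex_m::quantized_depthwise_conv2d"  # new operator in this PR
    return "cortex_m::quantized_conv2d"  # existing standard conv path (name assumed)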
…lidations

Key changes:
- Move depth_multiplier calculation from runtime to AOT pass (eliminates
  runtime division by computing depth_multiplier = output_channels / input_channels
  in the graph transformation pass)
- Add critical defensive validations in validate_depthwise_conv2d_arguments():
  * Validate IHWO weight layout (dimension 0 must be 1)
  * Validate dilation == 1 (CMSIS-NN constraint)
  * Validate depth_multiplier consistency with channel counts
- Fix CMSIS-NN API usage:
  * Use arm_depthwise_conv_wrapper_s8_get_buffer_size() with correct parameters
  * Improve buffer allocation error handling with detailed error messages
- Add _compute_depthwise_conv2d_output_shape() to read channels from correct
  dimension (dim 3 for IHWO layout vs dim 0 for OHWI)
- Update operator schema to use depth_multiplier parameter instead of groups

This ensures proper validation of CMSIS-NN constraints and moves computation
to compile-time where possible.
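
To make these checks concrete, here is a hedged Python sketch of the validations and the ahead-of-time depth_multiplier bookkeeping described above; the real validate_depthwise_conv2d_arguments() lives in the C++ operator, so the names, signatures, and error messages here are illustrative only.

def validate_depthwise_conv2d_args(weight_shape, input_channels, output_channels,
                                   depth_multiplier, dilation):
    # IHWO weight layout: dimension 0 of a depthwise weight must be 1.
    if weight_shape[0] != 1:
        raise ValueError(f"expected IHWO depthwise weight, got shape {weight_shape}")
    # CMSIS-NN depthwise kernels require dilation == 1.
    if list(dilation) != [1, 1]:
        raise ValueError("CMSIS-NN depthwise conv requires dilation == 1")
    # depth_multiplier is computed once in the AOT pass, so the runtime never
    # divides: output_channels == input_channels * depth_multiplier must hold.
    if output_channels != input_channels * depth_multiplier:
        raise ValueError("depth_multiplier inconsistent with channel counts")

def depthwise_output_channels(weight_shape):
    # IHWO depthwise weights keep the channel count in dim 3, whereas OHWI
    # weights for regular convs keep it in dim 0, hence the dedicated helper.
    return weight_shape[3]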
CMSIS-NN arm_depthwise_conv_wrapper_s8 only supports batch size 1.
Add validation in both AOT pass (fail during compilation) and runtime
(defensive check).

Add 6 test cases covering edge cases:
- Combined stride/padding/bias
- 1x1 kernels (common in mobile networks)
- Higher depth_multiplier (4)
- Asymmetric kernels (1x3)
- Asymmetric stride/padding
- Larger kernels (5x5)

Fix depthwise_conv2d_stride test to use batch size 1.
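
Purely as an illustration of the coverage listed above (the actual cases live in backends/cortex_m/test/ops/test_conv.py and run through the Cortex-M test pipeline), a plain-PyTorch parametrization of these shapes might look like the following; every name and shape here is hypothetical.

import pytest
import torch

EDGE_CASES = [
    # (in_channels, depth_multiplier, kernel, stride, padding, bias)
    (8, 1, (3, 3), (2, 2), (1, 1), True),   # combined stride/padding/bias
    (8, 1, (1, 1), (1, 1), (0, 0), False),  # 1x1 kernel
    (4, 4, (3, 3), (1, 1), (1, 1), False),  # higher depth_multiplier
    (8, 1, (1, 3), (1, 1), (0, 1), False),  # asymmetric kernel
    (8, 1, (3, 3), (2, 1), (1, 0), False),  # asymmetric stride/padding
    (8, 1, (5, 5), (1, 1), (2, 2), False),  # larger kernel
]

@pytest.mark.parametrize("in_ch,mult,kernel,stride,pad,bias", EDGE_CASES)
def test_depthwise_conv2d_shapes(in_ch, mult, kernel, stride, pad, bias):
    # groups == in_channels makes this a depthwise convolution; batch size is 1
    # because CMSIS-NN's wrapper only optimizes the single-batch case.
    conv = torch.nn.Conv2d(in_ch, in_ch * mult, kernel, stride=stride,
                           padding=pad, groups=in_ch, bias=bias)
    x = torch.randn(1, in_ch, 16, 16)
    assert conv(x).shape[1] == in_ch * mult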
@rascani rascani force-pushed the cmsis_depthwise_conv branch from 3de2c83 to 577364c on December 15, 2025 at 19:49
@rascani rascani marked this pull request as ready for review December 15, 2025 22:05
@mansnils mansnils (Collaborator) left a comment


Thanks for this @rascani! It looks good, just a couple of comments.

# and groups == input_channels (groups > 1)
is_depthwise = weight_tensor.shape[1] == 1 and groups > 1

if is_depthwise:
Collaborator
Here we actually have the benefit of choosing between a regular and DW conv. It is likely but not certain that the un-optimized CMSIS-NN DW conv, or the one without any SIMD, is less efficient than the corresponding CMSIS-NN conv. We don't know exactly until we measure. We could then add something like this for now with a TODO comment:
optimal_dw_conv_constraints = (
    in_channels == out_channels and dilation == [1, 1]
) or in_channels == 1

Contributor Author
Good idea. I tried to add these constraints, which resulted in falling back to conv2d for some of the test cases. Unfortunately, that also resulted in some output mismatches. I filed #16347 to revisit this and left a TODO to improve the smarts here.

Two fixes for Cortex-M convolution pass:

1. Fix depthwise detection: Check `groups == in_channels` instead of
   `weight_tensor.shape[1] == 1 and groups > 1`. Previous logic failed
   for Conv2d(1, 4, 3) where in_channels=groups=1.

2. Route to standard conv when batch_size > 1: CMSIS-NN's
   `arm_depthwise_conv_wrapper_s8` falls back to unoptimized
   implementation for batch_size != 1, so check upfront and use
   standard conv instead of throwing ValueError.

Added `_get_batch_size_from_conv` helper that safely extracts batch
size from convolution node's output shape metadata. More reliable than
accessing input node metadata, which may be unavailable during
multi-pass transformations.

Fixes:
- conv2d_dilation: batch_size=3 with depthwise conv routes to standard conv
- conv2d_x3: KeyError accessing input metadata in multi-conv chains
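
A hedged sketch of how the two fixes fit together; _get_batch_size_from_conv and the metadata access shown here approximate the pass code, and the operator strings are assumed from this PR's file names rather than verified identifiers.

def _get_batch_size_from_conv(node):
    # Read the batch size from the conv node's own output-shape metadata; the
    # producing input node's meta may be unavailable mid multi-pass transform.
    return node.meta["val"].shape[0]

def _choose_conv_op(node, groups, in_channels):
    is_depthwise = groups == in_channels
    # arm_depthwise_conv_wrapper_s8 only optimizes batch size 1, so larger
    # batches are routed to the standard quantized conv instead of erroring.
    if is_depthwise and _get_batch_size_from_conv(node) == 1:
        return "cortex_m::quantized_depthwise_conv2d"
    return "cortex_m::quantized_conv2d"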

@rascani rascani left a comment


Thanks for the reviews @mansnils and @psiddh!


@psiddh psiddh (Contributor) left a comment


lgtm



Development

Successfully merging this pull request may close these issues.

DepthwiseConv : Add support to CMSiS-NN
