
Conversation

@wj-laskowski (Contributor) commented on Jan 16, 2026

Proposed changes

Added a fused bias + bnorm + clamp operation for the WMMA grouped conv fwd large-tensor path (FP16/BF16 data types, NHWGCxGKYXC layout).
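
To make the fused epilogue concrete, here is a minimal standalone sketch of the per-element math the name "bias bnorm clamp" implies (bias add, then batch-norm inference transform, then clamp). The functor name and the scalar formulation are illustrative assumptions, not CK's actual element-wise operator:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical per-element epilogue: y = clamp(bnorm(conv_out + bias)).
// CK fuses this into the conv kernel; this scalar form only shows the math.
struct BiasBnormClamp
{
    float mean, variance, gamma, beta; // per-channel batch-norm parameters
    float epsilon;                     // numerical-stability term
    float lo, hi;                      // clamp bounds

    float operator()(float conv_out, float bias) const
    {
        const float x    = conv_out + bias;
        const float norm = gamma * (x - mean) / std::sqrt(variance + epsilon) + beta;
        return std::min(std::max(norm, lo), hi);
    }
};
```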

Checklist

Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.

  • I have added tests relevant to the introduced functionality, and the unit tests are passing locally
  • I have added the test to the REGRESSION_TESTS list defined at the top of tests/CMakeLists.txt, if the test takes more than 30 seconds to run.
  • I have added inline documentation which helps the maintainers understand the motivation behind the change
  • I have removed the stale documentation which is no longer relevant after this pull request
  • (If this change is user-facing) I have added release notes that give end users a brief summary of the improvement in this pull request
  • I have run clang-format on all changed files
  • Any dependent changes have been merged

Discussion

If this is a relatively large or complex change, feel free to start a discussion by explaining why you chose the solution you did and what alternatives you considered.

The following operations are added for the FP16/BF16 data types and the NHWGCxGKYXC layout; a sketch of the large-tensor splitting idea follows the list.
- grouped_conv2d_fwd_bias_bnorm_clamp
- grouped_conv3d_fwd_bias_bnorm_clamp
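
To make the "large tensor" part concrete: these kernels target workloads whose tensors are too large for 32-bit offset arithmetic, so the work is split into multiple launches. The splitting strategy below (slicing the batch dimension) is an assumption for illustration only; the actual WMMA large-tensor kernels may split differently:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch: split the batch dimension N into slices so that each
// slice's element count stays below the 32-bit indexing limit. Assumes
// elems_per_image > 0 and that a single image already fits within the limit.
std::vector<int64_t> split_batch(int64_t N, int64_t elems_per_image)
{
    const int64_t limit = (int64_t{1} << 31) - 1; // max 32-bit signed offset
    const int64_t max_n = std::max<int64_t>(1, limit / elems_per_image);

    std::vector<int64_t> slices;
    for(int64_t n = 0; n < N; n += max_n)
        slices.push_back(std::min(max_n, N - n)); // images in this launch
    return slices;
}
```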
@wj-laskowski force-pushed the streamhpc/grouped-conv-fwd-wmma-large-tensor-bias_bnorm_clamp branch from 7b0341d to b5c541f on January 19, 2026 09:35
@wj-laskowski changed the title from "Streamhpc/grouped conv fwd wmma large tensor bias bnorm clamp" to "WMMA grouped conv fwd large tensor bias bnorm clamp" on January 19, 2026
@wj-laskowski marked this pull request as ready for review on January 19, 2026 12:58
@krithalith (Contributor) commented:

Looks good overall! I have some comments:

  1. They wanted only a single generic instance per instance list for (Large Tensor) bias bnorm clamp. You can make two new instance lists that each contain only one generic instance and use them only for bias bnorm clamp; we did the same for the non-large version. Please first find an actual generic instance, because it seems the current instance lists do not contain one. You can probably adapt one from the XDL Large Tensor instance lists. A generic instance should have all ScalarPerVector values equal to 1 (see the sketch after this list).
  2. Are the current bias bnorm clamp tests sufficient for testing the Large implementation? I.e., are the tensors large enough to actually cause splitting? If not, it might be useful to add a "Large Cases" test like for the other flavors.
  3. Did you check whether the Large Tensor implementation is actually run and can support all the test cases (especially after reducing the instance lists to generic-only)?

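Regarding point 1, the reason "all ScalarPerVector values equal to 1" makes an instance generic: vectorized loads and stores require the fastest-moving dimension to be divisible by the vector width, while a width of 1 imposes no divisibility constraint. A toy standalone analogue (not CK's actual instance machinery, just the shape-support logic):

```cpp
#include <cstddef>

// Toy analogue of an instance's vector-width template parameter. With
// ScalarPerVector > 1 the fastest dimension must divide evenly by it; with
// ScalarPerVector == 1 the instance is generic and accepts any length.
template <std::size_t ScalarPerVector>
constexpr bool is_supported(std::size_t fastest_dim_length)
{
    return fastest_dim_length % ScalarPerVector == 0;
}

static_assert(!is_supported<8>(100), "vectorized instance rejects this shape");
static_assert(is_supported<1>(100), "generic instance accepts any shape");
```
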
@krithalith self-requested a review on January 20, 2026 09:53