Enable group mode (varlen) kernel generation for PyTorch integration #3553

Closed

chinmaydk99 wants to merge 1 commit into ROCm:develop
Conversation
Contributor

May I know why the group-mode kernels were not included previously, and why they are required now?
Author
This is part of the ongoing parity effort to bridge feature gaps between the CK and AOTriton backends in PyTorch. Varlen attention is one of the features being enabled.
Contributor

Imported to ROCm/rocm-libraries
Proposed changes
This PR enables group mode (variable-length attention) kernel generation for PyTorch's CK SDPA backend.
Checklist
Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.

- clang-format on all changed files

Discussion
The change is minimal (a single-line deletion) but enables a significant feature: variable-length attention support for ROCm users via PyTorch's torch.nn.attention.varlen API.
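For context, the sketch below illustrates what "group mode" (variable-length) attention inputs typically look like on the PyTorch side: sequences of different lengths are packed into a single tensor and described by cumulative sequence-length offsets. This is an illustrative assumption about the input layout, not the code touched by this PR; the pack_varlen helper is hypothetical, and the actual torch.nn.attention.varlen entry point and CK kernel dispatch are not shown.

```python
# Illustrative sketch only -- not the integration code from this PR.
# Variable-length ("group mode") batches are commonly packed by concatenating
# sequences along one axis and recording cumulative offsets (cu_seqlens).
import torch
import torch.nn.functional as F

def pack_varlen(tensors):
    """Concatenate per-sequence tensors [(seq_len_i, heads, dim), ...] and
    return the packed tensor plus int32 cumulative-length offsets."""
    lens = torch.tensor([t.shape[0] for t in tensors])
    cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.int32),
                            torch.cumsum(lens, dim=0).to(torch.int32)])
    return torch.cat(tensors, dim=0), cu_seqlens

# Three sequences of different lengths, 4 heads, head_dim 64.
heads, head_dim = 4, 64
seqs = [torch.randn(L, heads, head_dim) for L in (5, 9, 3)]
q_packed, cu_seqlens = pack_varlen(seqs)
k_packed, _ = pack_varlen(seqs)
v_packed, _ = pack_varlen(seqs)

# Reference computation: run standard SDPA per sequence by slicing with
# cu_seqlens. A varlen kernel (such as the CK group-mode kernels enabled
# here) would consume the packed tensors and offsets directly, without
# this Python loop or any padding.
outs = []
for i in range(len(seqs)):
    s, e = cu_seqlens[i].item(), cu_seqlens[i + 1].item()
    q = q_packed[s:e].transpose(0, 1)  # (heads, seq_len_i, head_dim)
    k = k_packed[s:e].transpose(0, 1)
    v = v_packed[s:e].transpose(0, 1)
    outs.append(F.scaled_dot_product_attention(q, k, v).transpose(0, 1))
out_packed = torch.cat(outs, dim=0)  # same packed layout as the inputs
```

The point of the packed layout is that no sequence is padded to the longest length in the batch, which is why a dedicated group-mode kernel is needed on the backend side rather than the fixed-length batch kernels alone.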