
[model][NPU]: Add NPU fusion operator patch to Zimage model to improve performance #1256

Open

Feng0w0 wants to merge 1 commit into modelscope:main from Feng0w0:npu_fused

Conversation

@Feng0w0 (Contributor) commented Feb 4, 2026

Replace RMSNorm and RoPE in ZImage

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @Feng0w0, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the performance of the Zimage model on NPU hardware by introducing fused operators for RMSNorm and rotary embeddings (RoPE). The changes add a configurable mechanism that dynamically applies these NPU-optimized operations, accelerating both inference and training without altering the core model architecture.

Highlights

  • NPU Fused Operators: Introduced a new file npu_fused_operator.py containing NPU-optimized implementations for RMSNorm and Rotary Embedding (RoPE) to enhance performance on NPU devices (a hedged sketch of such operators follows this list).
  • Dynamic Patching Mechanism: Implemented a mechanism to dynamically patch the RMSNorm and Attention.apply_rotary_emb methods within the Zimage model to use the NPU-optimized fused operators when an NPU is available and the patch is enabled.
  • Integration into Training Pipeline: Integrated the NPU patch into the Zimage training pipeline, allowing users to enable this performance optimization via a new enable_npu_patch argument in the ZImagePipeline.from_pretrained method and a corresponding command-line flag in training scripts.
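
The new npu_fused_operator.py itself is not shown in this conversation. As a rough illustration of what such a patch typically looks like on Ascend hardware, here is a minimal sketch assuming torch_npu's fused npu_rms_norm and npu_rotary_mul kernels, and assuming the patched modules store their scale as self.weight and epsilon as self.eps; the actual implementation in this PR may differ.

```python
# Hedged sketch only; not the PR's actual npu_fused_operator.py.
import torch

try:
    import torch_npu  # Ascend NPU extension; optional dependency
except ImportError:
    torch_npu = None


def rms_norm_forward_npu(self, hidden_states: torch.Tensor) -> torch.Tensor:
    # npu_rms_norm fuses the mean-square reduction, rsqrt, and scale
    # multiply into a single kernel; it returns (output, rstd).
    # The `self.weight` / `self.eps` attribute names are assumptions.
    return torch_npu.npu_rms_norm(hidden_states, self.weight, epsilon=self.eps)[0]


def rotary_emb_Zimage_npu(self, x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # npu_rotary_mul fuses the rotate-half multiply-add of RoPE:
    # out = x * cos + rotate_half(x) * sin. The (cos, sin) calling
    # convention is an assumption; ZImage may pass complex freqs_cis.
    return torch_npu.npu_rotary_mul(x, cos, sin)
```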


Changelog
  • diffsynth/core/npu_patch/npu_fused_operator.py
    • Added a new file to house NPU-specific fused implementations for RMSNorm (for both diffsynth and transformers' Qwen3RMSNorm) and Rotary Embedding.
  • diffsynth/models/z_image_dit.py
    • Refactored the apply_rotary_emb function into a class method within the Attention module.
    • Updated the forward method to call the new class method self.apply_rotary_emb, preparing it for dynamic patching.
  • diffsynth/pipelines/z_image.py
    • Imported IS_NPU_AVAILABLE to conditionally apply NPU patches.
    • Added an enable_npu_patch argument to the from_pretrained method of ZImagePipeline (see the usage sketch after this changelog).
    • Introduced and called apply_npu_patch function to conditionally replace RMSNorm and Attention.apply_rotary_emb with NPU-optimized versions based on enable_npu_patch and NPU availability.
  • examples/z_image/model_training/special/npu_training/Z-Image-Turbo-NPU.sh
    • Added the --enable_npu_patch flag to the training script to activate NPU optimizations during model training.
  • examples/z_image/model_training/train.py
    • Modified the ZImageTrainer constructor to accept and pass the enable_npu_patch argument to the ZImagePipeline.from_pretrained method.
    • Added --enable_npu_patch as a new command-line argument for the training script, allowing users to control NPU patch activation.
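
Taken together, the changelog implies a user-facing switch along these lines (a hedged sketch: the checkpoint path is a placeholder and the surrounding from_pretrained arguments are simplified):

```python
from diffsynth.pipelines.z_image import ZImagePipeline

# enable_npu_patch is the new argument added by this PR; when True and
# an NPU is available, apply_npu_patch swaps in the fused RMSNorm and
# rotary-embedding forwards before the model is used.
pipe = ZImagePipeline.from_pretrained(
    "path/to/Z-Image-Turbo",  # placeholder checkpoint path
    enable_npu_patch=True,
)
```

In the training scripts, the same behavior is toggled with the --enable_npu_patch command-line flag.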
Activity
  • No human activity has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces NPU-specific fused operators for RMSNorm and RoPE to improve performance for the Zimage model. The changes include adding new NPU operator implementations, refactoring the attention module to allow monkey-patching, and adding logic to apply these patches conditionally. The implementation looks good, but I have a few suggestions to improve robustness and consistency.

Comment on lines +5 to +6

```python
except:
    pass
```

Severity: medium

The bare `except:` is too broad and can hide other unexpected errors. It is better to catch the specific `ImportError` that can occur if `torch_npu` is not installed.

Suggested change:

```diff
-except:
+except ImportError:
     pass
```

Comment on lines +677 to +688

```python
from ..models.general_modules import RMSNorm
from transformers.models.qwen3.modeling_qwen3 import Qwen3RMSNorm
from ..models.z_image_dit import Attention
from ..core.npu_patch.npu_fused_operator import (
    rms_norm_forward_npu,
    rms_norm_forward_transformers_npu,
    rotary_emb_Zimage_npu
)

RMSNorm.forward = rms_norm_forward_npu
Qwen3RMSNorm.forward = rms_norm_forward_transformers_npu
Attention.apply_rotary_emb = rotary_emb_Zimage_npu
```

Severity: medium

The import from transformers on line 678 can fail if the library isn't installed or the module path changes. To make this patching more robust, it is best to wrap the transformers-related import and monkey-patch in a try...except ImportError block. This prevents a missing optional dependency from crashing the application while still allowing the other patches to be applied.

Suggested change:

```python
from ..models.general_modules import RMSNorm
from ..models.z_image_dit import Attention
from ..core.npu_patch.npu_fused_operator import (
    rms_norm_forward_npu,
    rms_norm_forward_transformers_npu,
    rotary_emb_Zimage_npu
)

RMSNorm.forward = rms_norm_forward_npu
Attention.apply_rotary_emb = rotary_emb_Zimage_npu

try:
    from transformers.models.qwen3.modeling_qwen3 import Qwen3RMSNorm
    Qwen3RMSNorm.forward = rms_norm_forward_transformers_npu
except ImportError:
    pass  # Silently ignore if transformers is not installed
```

```python
    offload_models=None,
    device="cpu",
    task="sft",
    enable_npu_patch=True,
```

Severity: medium

The default value for enable_npu_patch is True here, but the corresponding command-line argument in z_image_parser defaults to False. To maintain consistency between programmatic use and CLI use, both should share the same default value. I suggest changing this to False.

Suggested change:

```diff
-enable_npu_patch=True,
+enable_npu_patch=False,
```
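
For context, if the flag in train.py is defined as a store_true argument (an assumption; the parser code is not shown here), its CLI default is necessarily False, which is why aligning the programmatic default avoids surprising behavior:

```python
import argparse

parser = argparse.ArgumentParser()
# A store_true flag is False unless explicitly passed on the command
# line, so the CLI default is False; from_pretrained should match it.
parser.add_argument(
    "--enable_npu_patch",
    action="store_true",
    help="Enable the NPU fused-operator patch for the Zimage model.",
)
args = parser.parse_args([])
assert args.enable_npu_patch is False
```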
