Underlying Implementation Principles
The attention operation in Diffusers is implemented in the AttnProcessor class (diffusers.models.attention_processor.py), which defines a single attention pass. Adding LoRA essentially means replacing AttnProcessor with LoRAAttnProcessor. LoRAAttnProcessor adds four linear layers, to_q_lora, to_k_lora, to_v_lora, and to_out_lora, assigns them to attn before the attention is computed, and then calls AttnProcessor in a nested fashion:
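The sketch below paraphrases this mechanism; it is not the verbatim library code, and exact signatures differ across diffusers versions. The class name SimplifiedLoRAAttnProcessor and the rank default are illustrative; LoRALinearLayer, Attention, and AttnProcessor are the real diffusers classes.

    import torch.nn as nn
    from diffusers.models.attention_processor import Attention, AttnProcessor
    from diffusers.models.lora import LoRALinearLayer

    class SimplifiedLoRAAttnProcessor(nn.Module):
        """Sketch of LoRAAttnProcessor: four LoRA linear layers wrapped around one attention call."""

        def __init__(self, hidden_size, cross_attention_dim=None, rank=4):
            super().__init__()
            # Low-rank adapters for the query/key/value/output projections.
            self.to_q_lora = LoRALinearLayer(hidden_size, hidden_size, rank)
            self.to_k_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank)
            self.to_v_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank)
            self.to_out_lora = LoRALinearLayer(hidden_size, hidden_size, rank)

        def __call__(self, attn: Attention, hidden_states, *args, **kwargs):
            # Hand the LoRA layers to the Attention module's LoRACompatibleLinear projections ...
            attn.to_q.lora_layer = self.to_q_lora
            attn.to_k.lora_layer = self.to_k_lora
            attn.to_v.lora_layer = self.to_v_lora
            attn.to_out[0].lora_layer = self.to_out_lora
            # ... then run the ordinary attention processor, which now picks up the LoRA terms.
            return AttnProcessor()(attn, hidden_states, *args, **kwargs)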
The attn being modified is an Attention instance whose to_q, to_k, to_v, and to_out projections are LoRACompatibleLinear layers (diffusers.models.lora.py). Once a lora_layer has been attached, the forward computation becomes output = W·x + scale · lora_layer(x), where scale controls the LoRA strength; a scale of 0 disables the LoRA contribution entirely.
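A minimal sketch of that forward pass is shown below; the class name LoRACompatibleLinearSketch is illustrative and only captures the idea, not the exact library code.

    import torch.nn as nn

    class LoRACompatibleLinearSketch(nn.Linear):
        """Sketch of LoRACompatibleLinear: a normal Linear plus an optional low-rank branch."""

        def __init__(self, *args, lora_layer=None, **kwargs):
            super().__init__(*args, **kwargs)
            self.lora_layer = lora_layer  # attached later by the LoRA attention processor

        def forward(self, hidden_states, scale: float = 1.0):
            out = super().forward(hidden_states)  # original frozen projection W·x + b
            if self.lora_layer is None:
                return out                        # no LoRA attached
            # scale controls LoRA strength; scale == 0 disables the LoRA contribution.
            return out + scale * self.lora_layer(hidden_states)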
Concrete Implementation
# Imports assume a diffusers release (~0.2x) that still ships these classes;
# newer versions deprecate LoRAAttnProcessor in favor of PEFT-based loading.
from diffusers.loaders import AttnProcsLayers
from diffusers.models.attention_processor import LoRAAttnProcessor, LoRAAttnProcessor2_0

lora_attn_procs = {}
for name, processor in self.unet_lora.attn_processors.items():
    # attn1 is self-attention (no cross-attention dim); attn2 attends to the text embedding.
    cross_attention_dim = (
        None
        if name.endswith("attn1.processor")
        else self.unet_lora.config.cross_attention_dim
    )
    # Pick the channel width of the UNet block this processor lives in.
    if name.startswith("mid_block"):
        hidden_size = self.unet_lora.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(self.unet_lora.config.block_out_channels))[block_id]
    elif name.startswith("down_blocks"):
        block_id = int(name[len("down_blocks.")])
        hidden_size = self.unet_lora.config.block_out_channels[block_id]

    # Match the LoRA processor to the existing processor type
    # (AttnProcessor2_0 uses PyTorch 2.0 scaled_dot_product_attention).
    processor_name = type(processor).__name__
    if processor_name == "AttnProcessor":
        lora_attn_procs[name] = LoRAAttnProcessor(
            hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
        )
    elif processor_name == "AttnProcessor2_0":
        lora_attn_procs[name] = LoRAAttnProcessor2_0(
            hidden_size=hidden_size, cross_attention_dim=cross_attention_dim
        )
    else:
        raise ValueError(f"Unknown processor type {processor_name}")

# Swap the LoRA processors into the UNet, then gather just the LoRA parameters
# into a single trainable module.
self.unet_lora.set_attn_processor(lora_attn_procs)
self.lora_layers = AttnProcsLayers(self.unet_lora.attn_processors).to(self.device)
# Drop the key-remapping state-dict hooks registered by AttnProcsLayers.
self.lora_layers._load_state_dict_pre_hooks.clear()
self.lora_layers._state_dict_hooks.clear()
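From this point on, only lora_layers needs to receive gradients. A typical training setup might look like the following; the optimizer choice and learning rate are illustrative, not taken from the original code.

    import torch

    # Freeze the UNet itself, then unfreeze only the LoRA adapters (shared with lora_layers).
    self.unet_lora.requires_grad_(False)
    self.lora_layers.requires_grad_(True)
    optimizer = torch.optim.AdamW(self.lora_layers.parameters(), lr=1e-4)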
Overall Structure of SD
In diffusers, SD is implemented as StableDiffusionPipeline, and its UNet is a UNet2DConditionModel that runs down_blocks -> mid_block -> up_blocks.
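For reference, this block layout can be read straight from the UNet config; the model id and the printed values below assume SD v1.5.

    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    unet = pipe.unet
    print(unet.config.down_block_types)
    # ['CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D']
    print(unet.config.up_block_types)
    # ['UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D']
    print(unet.config.block_out_channels)
    # [320, 640, 1280, 1280]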
Take down_block_types as an example: it consists of three CrossAttnDownBlock2D blocks followed by one DownBlock2D. Each CrossAttnDownBlock2D contains several ResnetBlock2D / Transformer2DModel pairs plus a final Downsample2D. As the code below shows, the LoRA scale coefficient can be injected into all three of these modules (ResnetBlock2D, Transformer2DModel, and Downsample2D) at the same time: