1. Overview
AnimateDiff designs three modules to fine-tune a general pretrained text-to-image Stable Diffusion model, extending image generation to animation generation at relatively low cost.
- Paper: AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
- Three modules:
  - Domain Adapter: adapts SD to the content distribution of video data
  - Motion Module: learns temporal (sequence) features on top of SD's text-to-image features
  - MotionLoRA: teaches SD specific video motion patterns (e.g., zoom-in, zoom-out)
2. Implementation Details
2.1 Domain-Adapter
This is a LoRA fine-tune of the U-Net.
- Module structure: LoRA low-rank matrices, added as trainable parameters to the self-attention and cross-attention layers of the U-Net (a minimal LoRA sketch follows the paper excerpt below)
  Note: in SD's U-Net, cross-attention handles text-image cross-modal feature fusion (present in every block), while self-attention captures global image features (it does not need to appear in every block and can be inserted at intervals)
- Fine-tuning data: static frames randomly sampled from video datasets, trained with the same objective as the base T2I model (Eq. (2) in the paper)
Excerpt from the paper:
We implement the domain adapter layers with LoRA (Hu et al., 2021) and insert them into the self-/cross-attention layers in the base T2I, as shown in Fig. 3. We then optimize only the parameters of the domain adapter on static frames randomly sampled from video datasets with the same objective in Eq. (2).
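A minimal sketch (not the official AnimateDiff code) of attaching a LoRA adapter to an attention projection in a PyTorch-style U-Net; the class name, rank, and scaling are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear projection (e.g., the query/key/value projection of an
    attention layer) with a trainable low-rank update: base(x) + up(down(x)) * scale."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                 # original SD weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)   # A: d_in -> r
        self.up = nn.Linear(rank, base.out_features, bias=False)    # B: r -> d_out
        nn.init.zeros_(self.up.weight)                               # zero init: adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.up(self.down(x)) * self.scale
```

During domain-adapter training only the `down`/`up` matrices receive gradients; at animation inference time the adapter's contribution can be scaled down or dropped so that the learned video-domain bias does not dominate the result.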
2.2 Motion Module
- Module structure:
  sinusoidal position embedding + temporal self-attention blocks, inserted into every block of the U-Net
- Dimension handling:
  An image tensor has shape [batch_size, channel, height, width], while video adds a temporal dimension (the number of frames): [batch_size, frames, channel, height, width].
  - SD: since SD itself processes images and has no temporal dimension (frames), the frames dimension is folded into batch_size so that SD handles the frames the same way it handles images.
  - Motion module: this newly added part only needs to learn temporal features, so it merges the spatial dimensions h, w into batch_size, i.e., features of shape [batch_size*height*width, frames, channel] are fed into the module, and h, w are restored from batch_size on output (see the reshape sketch below).
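A minimal reshape sketch with einops, assuming a feature map of shape [batch, frames, channels, height, width]; the concrete shapes and the "run the blocks" comments stand in for the actual U-Net layers:

```python
import torch
from einops import rearrange

b, f, c, h, w = 2, 16, 320, 32, 32
x = torch.randn(b, f, c, h, w)                 # video feature map

# Pretrained SD (spatial) layers: fold frames into the batch axis,
# so each frame is processed as an independent image.
x_spatial = rearrange(x, "b f c h w -> (b f) c h w")
# ... frozen SD ResNet / spatial-attention blocks run on x_spatial ...
x = rearrange(x_spatial, "(b f) c h w -> b f c h w", f=f)

# Motion module (temporal) layers: fold the spatial axes into the batch axis,
# so self-attention runs along the frame axis only.
x_temporal = rearrange(x, "b f c h w -> (b h w) f c")
# ... temporal self-attention blocks run on x_temporal ...
x = rearrange(x_temporal, "(b h w) f c -> b f c h w", h=h, w=w)
```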
- Initialization & residual (a small sketch follows the paper excerpt below):
  - To stabilize training, ControlNet-style zero initialization is applied to the output projection layers of the temporal Transformer.
  - The motion module uses a residual connection, so it starts out as an identity mapping.
Excerpt from the paper:
the temporal Transformer consists of several self-attention blocks along the temporal axis, with sinusoidal position encoding to encode the location of each frame in the animation. As mentioned above, the input of the motion module is the reshaped feature map whose spatial dimensions are merged into the batch axis. Note that sinusoidal position encoding added before the self-attention is essential; otherwise, the module is not aware of the frame order in the animation. To avoid any harmful effects that the additional module might introduce, we zero initialize (Zhang et al., 2023) the output projection layers of the temporal Transformer and add a residual connection so that the motion module is an identity mapping at the beginning of training.
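A minimal sketch of one temporal self-attention block in this spirit (sinusoidal frame positions, zero-initialized output projection, residual connection); the class and helper names are assumptions, not AnimateDiff's actual implementation:

```python
import math
import torch
import torch.nn as nn

def sinusoidal_encoding(num_frames: int, dim: int) -> torch.Tensor:
    """Standard sinusoidal position encoding over the frame index."""
    pos = torch.arange(num_frames, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    pe = torch.zeros(num_frames, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe                                   # [frames, dim]

class TemporalSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj_out.weight)    # zero init: the block contributes nothing at first
        nn.init.zeros_(self.proj_out.bias)

    def forward(self, x):                       # x: [batch*h*w, frames, dim]
        pe = sinusoidal_encoding(x.shape[1], x.shape[2]).to(x)
        h = self.norm(x) + pe                   # position encoding tells the block the frame order
        h, _ = self.attn(h, h, h)
        return x + self.proj_out(h)             # residual: identity mapping at initialization
```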
2.3 MotionLoRA
Add LoRA low-rank trainable matrices to the self-attention layers of the Motion Module, then fine-tune them on reference videos of a specific motion pattern (e.g., zoom-in, zoom-out).
This step needs only about 20-50 reference videos, 2,000 training iterations (roughly 1-2 hours), and about 30 MB of storage for the LoRA weights.
Excerpt from the paper:
we add LoRA layers to the self-attention layers of the motion module in the inflated model described in Sec. 4.2, then train these LoRA layers on the reference videos of new motion patterns. ... to get videos with zooming effects, we augment the videos by gradually reducing (zoom-in) or enlarging (zoom-out) the cropping area of video frames along the temporal axis. We demonstrate that our MotionLoRA can achieve promising results even with as few as 20 ∼ 50 reference videos, 2,000 training iterations (around 1 ∼ 2 hours) as well as about 30M storage space, enabling efficient model tuning and sharing among users.
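A minimal sketch of the zoom augmentation described in the excerpt, assuming the clip is a list of PIL images; the crop schedule and function name are illustrative:

```python
from PIL import Image

def zoom_augment(frames, zoom_in: bool = True, min_scale: float = 0.6):
    """Simulate a camera zoom: the crop window shrinks (zoom-in) or grows (zoom-out)
    along the temporal axis, and every crop is resized back to the original resolution."""
    n = len(frames)
    w, h = frames[0].size
    out = []
    for i, frame in enumerate(frames):
        t = i / max(n - 1, 1)                                   # progress 0 -> 1 over the clip
        scale = 1.0 - (1.0 - min_scale) * (t if zoom_in else 1.0 - t)
        cw, ch = int(w * scale), int(h * scale)
        left, top = (w - cw) // 2, (h - ch) // 2
        crop = frame.crop((left, top, left + cw, top + ch))
        out.append(crop.resize((w, h), Image.BILINEAR))
    return out
```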
3. Experiments and Inference
3.1 Overview
- The training loss is the standard diffusion MSE objective, computed on video samples (sketched after this list).
- The core cost is in the second part, the motion module, which is trained on top of SD 1.5 with the WebVid dataset; this is still very expensive.
- Consumer GPUs can realistically only handle module 3, i.e., MotionLoRA fine-tuning of the motion module.
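The objective is the usual noise-prediction MSE of Stable Diffusion, extended over all frames of a clip; a sketch in standard DDPM notation (symbols assumed, not copied from the paper):

$$
\mathcal{L} = \mathbb{E}_{z_0^{1:f},\, c,\, \epsilon \sim \mathcal{N}(0, I),\, t}\Big[\, \big\lVert \epsilon - \epsilon_\theta\big(z_t^{1:f},\, t,\, c\big) \big\rVert_2^2 \,\Big]
$$

where $z_t^{1:f}$ are the noised latents of the $f$ frames at timestep $t$ and $c$ is the text condition.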
3.2 Ablations
- Motion module
  Two candidate layer types for the motion module are compared: a temporal Transformer and a 1D temporal convolution. The experiments show that the Transformer builds temporal relations, i.e., captures global temporal dependencies, and is better suited to video generation, whereas the 1D temporal convolution produces nearly identical frames, i.e., essentially no motion.
- MotionLoRA
  This part is what delivers value to individual users: with a limited number of reference videos (around 50) and low training cost, it enables generation of a specific motion.
3.3 Summary of Results
- Controllability: can be combined with ControlNet, so conditions such as depth maps can precisely steer the generated result.
- Independence: no complex inversion procedure (e.g., DDIM inversion) is needed; generation starts directly from noise, which simplifies the pipeline.
- Quality and detail: the results show strong dynamic detail and visual quality, faithfully reproducing motion such as flowing hair and changing facial expressions.
4. Related Work
- Tune-a-Video
- Text2Video-Zero
Ref:
- https://github.com/guoyww/AnimateDiff.