Notes on Andrew Ng's AIGC Course "How Diffusion Models Work"


1. Introduction

Products such as Midjourney, Stable Diffusion, and DALL-E can generate images from nothing more than a prompt. This course explains the principles of the algorithms behind these applications.

Course link: https://learn.deeplearning.ai/diffusion-models/

2. Intuition

This section covers the basics of diffusion models: what a diffusion model is trying to achieve, and how a training set of game-character images can be used to build up the model's capability.

Suppose this sprite dataset is yours and you want more character images that are not in it. How can you get them? You can use a diffusion model to generate such characters.

A diffusion model should be a neural network that learns the general concept of a game character (what a game character is, the outline of its body, and so on), down to finer details such as hair color or a belt buckle.
One way to achieve this is to add different amounts of noise to the training data. This is called the noising process, a name inspired by physics: imagine a drop of ink falling into a glass of water, where over time it diffuses through the water until it disappears. As you gradually add more noise to an image, its outline goes from sharp to blurry until nothing is recognizable. This gradual addition of noise is very much like the ink slowly fading into the water.
The goal of training is a model that removes the noise you added, turning images corrupted with different amounts of noise back into game characters.
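As a quick illustration (this is not course code), the forward noising process can be sketched by mixing an image with Gaussian noise at decreasing signal levels, following the standard DDPM mixing formula; the signal levels used here are arbitrary placeholders for the schedule introduced later:

```python
import torch

x0 = torch.rand(3, 16, 16) * 2 - 1   # a toy 16x16 "sprite" scaled to [-1, 1]
noise = torch.randn_like(x0)         # Gaussian noise of the same shape

# a smaller signal level means more noise (these values stand in for the cumulative schedule ab_t used later)
for signal in [0.99, 0.7, 0.3, 0.05]:
    x_noisy = signal ** 0.5 * x0 + (1 - signal) ** 0.5 * noise
    print(f"signal level {signal:.2f} -> sample std {x_noisy.std():.2f}")
```

At a signal level near 1 the sprite is still recognizable; near 0 it is essentially pure noise.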

3. Sampling

Sampling is what you do with the neural network at inference time, after training is complete.

Suppose you have a sample to which noise has been added. Feed it into the trained neural network, which has already learned what game-character images look like. Have the network predict the noise, then subtract the predicted noise from the noisy sample; the result is closer to a game-character image.

In reality this is only a prediction of the noise, and it does not remove all of it; many iterations are needed before you get a high-quality sample that looks like the original images.
Below is the implementation of the sampling algorithm:
First, sample a random noise sample; this is the pure noise you start from. Then iterate in reverse, from the fully noised state at the last timestep back to the first. It is like running time backwards, or, in the ink analogy, starting from fully diffused ink and rewinding to the moment the drop first entered the water.

Next, sample some additional noise.

Next, feed the current noisy sample into the neural network to obtain predicted noise. This predicted noise is what the trained network wants to subtract from the current sample in order to get something that looks more like an actual game-character image.

Finally, the sampling step of Denoising Diffusion Probabilistic Models (DDPM) subtracts the predicted noise from the current sample and then adds a bit of extra noise back in.
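In formula form, the DDPM step from $x_t$ to $x_{t-1}$ that the denoise_add_noise helper below implements is:

$$
x_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right) + \sqrt{\beta_t}\, z, \qquad z \sim \mathcal{N}(0, I)
$$

where $\epsilon_\theta(x_t, t)$ is the noise predicted by the network, $\beta_t$ is the noise-schedule value at step $t$, $\alpha_t = 1 - \beta_t$, and $\bar{\alpha}_t$ is the cumulative product of the $\alpha$ values (b_t, a_t, and ab_t in the code). The first term is the "subtract the predicted noise" part; the $\sqrt{\beta_t}\,z$ term is the extra noise added back in.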

from typing import Dict, Tuple
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.utils import save_image, make_grid
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter
import numpy as np
import os
from IPython.display import HTML
from diffusion_utilities import *

class ContextUnet(nn.Module):
    def __init__(self, in_channels, n_feat=256, n_cfeat=10, height=28):  # cfeat - context features
        super(ContextUnet, self).__init__()

        # number of input channels, number of intermediate feature maps and number of classes
        self.in_channels = in_channels
        self.n_feat = n_feat
        self.n_cfeat = n_cfeat
        self.h = height  #assume h == w. must be divisible by 4, so 28,24,20,16...

        # Initialize the initial convolutional layer
        self.init_conv = ResidualConvBlock(in_channels, n_feat, is_res=True)

        # Initialize the down-sampling path of the U-Net with two levels
        self.down1 = UnetDown(n_feat, n_feat)        # down1 #[10, 256, 8, 8]
        self.down2 = UnetDown(n_feat, 2 * n_feat)    # down2 #[10, 256, 4,  4]
        
         # original: self.to_vec = nn.Sequential(nn.AvgPool2d(7), nn.GELU())
        self.to_vec = nn.Sequential(nn.AvgPool2d((4)), nn.GELU())

        # Embed the timestep and context labels with a one-layer fully connected neural network
        self.timeembed1 = EmbedFC(1, 2*n_feat)
        self.timeembed2 = EmbedFC(1, 1*n_feat)
        self.contextembed1 = EmbedFC(n_cfeat, 2*n_feat)
        self.contextembed2 = EmbedFC(n_cfeat, 1*n_feat)

        # Initialize the up-sampling path of the U-Net with three levels
        self.up0 = nn.Sequential(
            nn.ConvTranspose2d(2 * n_feat, 2 * n_feat, self.h//4, self.h//4), # up-sample  
            nn.GroupNorm(8, 2 * n_feat), # normalize                       
            nn.ReLU(),
        )
        self.up1 = UnetUp(4 * n_feat, n_feat)
        self.up2 = UnetUp(2 * n_feat, n_feat)

        # Initialize the final convolutional layers to map to the same number of channels as the input image
        self.out = nn.Sequential(
            nn.Conv2d(2 * n_feat, n_feat, 3, 1, 1), # reduce number of feature maps   #in_channels, out_channels, kernel_size, stride=1, padding=0
            nn.GroupNorm(8, n_feat), # normalize
            nn.ReLU(),
            nn.Conv2d(n_feat, self.in_channels, 3, 1, 1), # map to same number of channels as input
        )

    def forward(self, x, t, c=None):
        """
        x : (batch, in_channels, h, w) : input image
        t : (batch, 1)            : time step
        c : (batch, n_cfeat)      : context label
        """
        # x is the input image, c is the context label, t is the timestep, context_mask says which samples to block the context on

        # pass the input image through the initial convolutional layer
        x = self.init_conv(x)
        # pass the result through the down-sampling path
        down1 = self.down1(x)       #[10, 256, 8, 8]
        down2 = self.down2(down1)   #[10, 256, 4, 4]
        
        # convert the feature maps to a vector and apply an activation
        hiddenvec = self.to_vec(down2)
        
        # mask out context if context_mask == 1
        if c is None:
            c = torch.zeros(x.shape[0], self.n_cfeat).to(x)
            
        # embed context and timestep
        cemb1 = self.contextembed1(c).view(-1, self.n_feat * 2, 1, 1)     # (batch, 2*n_feat, 1,1)
        temb1 = self.timeembed1(t).view(-1, self.n_feat * 2, 1, 1)
        cemb2 = self.contextembed2(c).view(-1, self.n_feat, 1, 1)
        temb2 = self.timeembed2(t).view(-1, self.n_feat, 1, 1)
        #print(f"uunet forward: cemb1 {cemb1.shape}. temb1 {temb1.shape}, cemb2 {cemb2.shape}. temb2 {temb2.shape}")


        up1 = self.up0(hiddenvec)
        up2 = self.up1(cemb1*up1 + temb1, down2)  # add and multiply embeddings
        up3 = self.up2(cemb2*up2 + temb2, down1)
        out = self.out(torch.cat((up3, x), 1))
        return out

# hyperparameters

# diffusion hyperparameters
timesteps = 500
beta1 = 1e-4
beta2 = 0.02

# network hyperparameters
device = torch.device("cuda:0" if torch.cuda.is_available() else torch.device('cpu'))
n_feat = 64 # 64 hidden dimension feature
n_cfeat = 5 # context vector is of size 5
height = 16 # 16x16 image
save_dir = './weights/'

# construct DDPM noise schedule
b_t = (beta2 - beta1) * torch.linspace(0, 1, timesteps + 1, device=device) + beta1
a_t = 1 - b_t
ab_t = torch.cumsum(a_t.log(), dim=0).exp()    
ab_t[0] = 1

# construct model
nn_model = ContextUnet(in_channels=3, n_feat=n_feat, n_cfeat=n_cfeat, height=height).to(device)

Hyperparameters:

  • beta1: hyperparameter of the DDPM algorithm;
  • beta2: hyperparameter of the DDPM algorithm;
  • height: the height and width of the images;
  • noise schedule: determines the noise level applied to the image at each timestep (see the formulas right after this list);
  • S1, S2, S3: scaling-factor values
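The noise schedule constructed in the code above defines, for each timestep $t$ from $0$ to $T$:

$$
\beta_t = \beta_1 + (\beta_2 - \beta_1)\,\frac{t}{T}, \qquad \alpha_t = 1 - \beta_t, \qquad \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s
$$

In the code, b_t, a_t, and ab_t hold $\beta_t$, $\alpha_t$, and $\bar{\alpha}_t$; the cumulative product is computed as the exponential of a cumulative sum of logarithms, and ab_t[0] is forced to 1 so that the first step corresponds to a clean image.
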
# helper function; removes the predicted noise (but adds some noise back in to avoid collapse)
def denoise_add_noise(x, t, pred_noise, z=None):
    if z is None:
        z = torch.randn_like(x)
    noise = b_t.sqrt()[t] * z
    mean = (x - pred_noise * ((1 - a_t[t]) / (1 - ab_t[t]).sqrt())) / a_t[t].sqrt()
    return mean + noise

# load in model weights and set to eval mode
nn_model.load_state_dict(torch.load(f"{save_dir}/model_trained.pth", map_location=device))
nn_model.eval()
print("Loaded in Model")


# sample using standard algorithm
@torch.no_grad()
def sample_ddpm(n_sample, save_rate=20):
    # x_T ~ N(0, 1), sample initial noise
    samples = torch.randn(n_sample, 3, height, height).to(device)  

    # array to keep track of generated steps for plotting
    intermediate = [] 
    for i in range(timesteps, 0, -1):
        print(f'sampling timestep {i:3d}', end='\r')

        # reshape time tensor
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)

        # sample some random noise to inject back in. For i = 1, don't add back in noise
        # (if z were always set to 0 here, the samples would collapse toward the dataset mean)
        z = torch.randn_like(samples) if i > 1 else 0

        eps = nn_model(samples, t)    # predict noise e_(x_t,t)
        samples = denoise_add_noise(samples, i, eps, z)
        if i % save_rate ==0 or i==timesteps or i<8:
            intermediate.append(samples.detach().cpu().numpy())

    intermediate = np.stack(intermediate)
    return samples, intermediate
# visualize samples
plt.clf()
samples, intermediate_ddpm = sample_ddpm(32)
animation_ddpm = plot_sample(intermediate_ddpm,32,4,save_dir, "ani_run", None, save=False)
HTML(animation_ddpm.to_jshtml())


Note that the neural network expects its input to be a noise sample that follows a normal distribution. During the iterations, once the predicted noise has been subtracted, the sample no longer follows a normal distribution, which easily leads to model collapse. So after each iteration, extra noise scaled to the current timestep is added back in to keep the sample normally distributed. Empirically this keeps sampling stable and prevents the model from collapsing, that is, from producing outputs close to the average of the dataset.
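As a quick experiment (a small variant of the course's sample_ddpm above, not verbatim course code), you can reproduce the failure mode by never injecting the extra noise; the resulting samples tend to be blurry and close to an average sprite:

```python
# incorrect sampling: pass z = 0 at every step, so no extra noise is ever added back in
@torch.no_grad()
def sample_ddpm_no_extra_noise(n_sample):
    samples = torch.randn(n_sample, 3, height, height).to(device)
    for i in range(timesteps, 0, -1):
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)
        eps = nn_model(samples, t)                                         # predict the noise
        samples = denoise_add_noise(samples, i, eps, z=torch.zeros_like(samples))
    return samples
```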

4. Neural Network

The neural-network architecture used for diffusion models is the U-Net. It takes the image as input and outputs predicted noise of the same size as the input. Having the output match the size of the input is one of its main advantages.
The U-Net first embeds the information in its input: through multiple convolutional layers it downsamples the input into an embedding that compresses all of the information. It then uses the same number of upsampling blocks to map that back out for its task, which in this example is predicting the noise that was applied to the image.

Another advantage of the U-Net is that it can take in additional information. It compresses the image to understand what is going on, but it can also accept more inputs. So what additional information do we need?

One very important piece of information for these models is the time embedding, an embedding that tells the model the timestep and therefore the noise level we need. The timestep is embedded into a vector and then added into the upsampling blocks.

Another very important piece of information is the context embedding, whose fundamental role is to control what the model generates: for example a text description (you want it to generate Bob), or some factor, or some color. In code it looks like the snippet below.
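Here are the relevant lines from the ContextUnet.forward method defined above: the context and time embeddings are reshaped to (batch, channels, 1, 1), then the context embedding multiplies the upsampled features while the time embedding is added to them.

```python
# inside ContextUnet.forward (full definition above)
cemb1 = self.contextembed1(c).view(-1, self.n_feat * 2, 1, 1)  # context embedding, first up level
temb1 = self.timeembed1(t).view(-1, self.n_feat * 2, 1, 1)     # time embedding, first up level
cemb2 = self.contextembed2(c).view(-1, self.n_feat, 1, 1)      # context embedding, second up level
temb2 = self.timeembed2(t).view(-1, self.n_feat, 1, 1)         # time embedding, second up level

up1 = self.up0(hiddenvec)
up2 = self.up1(cemb1 * up1 + temb1, down2)  # multiply in context, add in time
up3 = self.up2(cemb2 * up2 + temb2, down1)
```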

5. Training

The goal of training the neural network (NN) is to have it predict noise; the real task is for it to learn the distribution of noise on the image (which also includes learning what the features of a game-character image are).

The training strategy is: take a game-character image from the training data, add random noise to it, and have the NN predict that noise. Then compare the predicted noise with the actual noise and compute the loss. Iterating with backpropagation teaches the NN to predict the noise better and better.
So how do we decide what that noise should be?

Different noise levels can be obtained by sampling over time. In actual training, though, we do not want the NN to keep looking at the same game-character image all the time; the NN trains more stably and evenly if it sees a variety within one epoch. So we actually sample a random timestep, look up the corresponding noise level, add that noise to the image, and then have the NN make its prediction.

Then we take the next game-character image and repeat the process. This gives a stable training procedure.
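In formula form, each training step perturbs a clean image $x_0$ with noise at a randomly sampled timestep and asks the network to recover that noise; perturb_input and the MSE loss in the loop below implement this (note that perturb_input as shown scales the noise by $1-\bar{\alpha}_t$ rather than the $\sqrt{1-\bar{\alpha}_t}$ of the standard DDPM derivation):

$$
x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon, \qquad
\mathcal{L} = \left\lVert \epsilon - \epsilon_\theta\!\left(x_t,\, t/T\right) \right\rVert^2, \qquad \epsilon \sim \mathcal{N}(0, I),\; t \sim \mathrm{Uniform}\{1,\dots,T\}
$$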

# training hyperparameters
batch_size = 100
n_epoch = 32
lrate = 1e-3

# load dataset and construct optimizer
dataset = CustomDataset("./sprites_1788_16x16.npy", "./sprite_labels_nc_1788_16x16.npy", transform, null_context=False)
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=1)
optim = torch.optim.Adam(nn_model.parameters(), lr=lrate)
# helper function: perturbs an image to a specified noise level
def perturb_input(x, t, noise):
    return ab_t.sqrt()[t, None, None, None] * x + (1 - ab_t[t, None, None, None]) * noise
# training without context code

# set into train mode
nn_model.train()

for ep in range(n_epoch):
    print(f'epoch {ep}')
    
    # linearly decay learning rate
    optim.param_groups[0]['lr'] = lrate*(1-ep/n_epoch)
    
    pbar = tqdm(dataloader, mininterval=2 )
    for x, _ in pbar:   # x: images
        optim.zero_grad()
        x = x.to(device)
        
        # perturb data
        noise = torch.randn_like(x)
        t = torch.randint(1, timesteps + 1, (x.shape[0],)).to(device) 
        x_pert = perturb_input(x, t, noise)
        
        # use network to recover noise
        pred_noise = nn_model(x_pert, t / timesteps)
        
        # loss is mean squared error between the predicted and true noise
        loss = F.mse_loss(pred_noise, noise)
        loss.backward()
        
        optim.step()

    # save model periodically
    if ep%4==0 or ep == int(n_epoch-1):
        if not os.path.exists(save_dir):
            os.mkdir(save_dir)
        torch.save(nn_model.state_dict(), save_dir + f"model_{ep}.pth")
        print('saved model at ' + save_dir + f"model_{ep}.pth")
# load in model weights and set to eval mode
nn_model.load_state_dict(torch.load(f"{save_dir}/model_0.pth", map_location=device))
nn_model.eval()
print("Loaded in Model")
# visualize samples
plt.clf()
samples, intermediate_ddpm = sample_ddpm(32)
animation_ddpm = plot_sample(intermediate_ddpm,32,4,save_dir, "ani_run", None, save=False)
HTML(animation_ddpm.to_jshtml())

6. Controlling

This section covers how to control what the model generates.

To control these models you use embeddings. What makes embeddings special is that they capture the semantics of the text: texts with similar content have similar vectors. You can also do arithmetic on the vectors; for example, the embedding of Paris minus the embedding of France plus the embedding of England lands close to the embedding of London, as sketched below.
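A minimal sketch of that vector arithmetic, assuming a hypothetical text encoder embed() that maps a word to a vector (embed() is not part of the course code):

```python
import torch
import torch.nn.functional as F

def nearest_word(query, vocab, embed):
    # return the word in vocab whose embedding is most cosine-similar to the query vector
    sims = {w: F.cosine_similarity(query, embed(w), dim=0).item() for w in vocab}
    return max(sims, key=sims.get)

# with a real encoder, the following should land near "London":
# query = embed("Paris") - embed("France") + embed("England")
# print(nearest_word(query, ["London", "Berlin", "Madrid"], embed))
```
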
So how do these embeddings become the model's context during training? The caption text of an image is converted into an embedding and fed into the NN together with the image itself, and this forms part of the context.

The magic of diffusion models is that, when sampling, they can generate things that are not in the dataset. A classic example is feeding in the text embedding of "avocado armchair", after which the model generates an armchair made out of an avocado.
In a broader sense, the context is simply a vector that controls what the NN generates. Besides text embeddings, you can also use one-hot encoded image classes as context.

The imports, the ContextUnet definition, and the diffusion and network hyperparameters for this lesson are identical to those in Section 3 and are not repeated here; the new training hyperparameters, the noise schedule, and the model are set up as follows.

# training hyperparameters
batch_size = 100
n_epoch = 32
lrate=1e-3

# construct DDPM noise schedule
b_t = (beta2 - beta1) * torch.linspace(0, 1, timesteps + 1, device=device) + beta1
a_t = 1 - b_t
ab_t = torch.cumsum(a_t.log(), dim=0).exp()    
ab_t[0] = 1

# construct model
nn_model = ContextUnet(in_channels=3, n_feat=n_feat, n_cfeat=n_cfeat, height=height).to(device)

Set up for training with context:

# reset neural network
nn_model = ContextUnet(in_channels=3, n_feat=n_feat, n_cfeat=n_cfeat, height=height).to(device)

# re setup optimizer
optim = torch.optim.Adam(nn_model.parameters(), lr=lrate)

Training:

# training with context code
# set into train mode
nn_model.train()

for ep in range(n_epoch):
    print(f'epoch {ep}')
    
    # linearly decay learning rate
    optim.param_groups[0]['lr'] = lrate*(1-ep/n_epoch)
    
    pbar = tqdm(dataloader, mininterval=2 )
    for x, c in pbar:   # x: images  c: context
        optim.zero_grad()
        x = x.to(device)
        c = c.to(x)
        
        # randomly mask out c
        context_mask = torch.bernoulli(torch.zeros(c.shape[0]) + 0.9).to(device)
        c = c * context_mask.unsqueeze(-1)
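        # note: ~10% of contexts are zeroed out, so the model also learns to generate without any context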
        
        # perturb data
        noise = torch.randn_like(x)
        t = torch.randint(1, timesteps + 1, (x.shape[0],)).to(device) 
        x_pert = perturb_input(x, t, noise)
        
        # use network to recover noise
        pred_noise = nn_model(x_pert, t / timesteps, c=c)
        
        # loss is mean squared error between the predicted and true noise
        loss = F.mse_loss(pred_noise, noise)
        loss.backward()
        
        optim.step()

    # save model periodically
    if ep%4==0 or ep == int(n_epoch-1):
        if not os.path.exists(save_dir):
            os.mkdir(save_dir)
        torch.save(nn_model.state_dict(), save_dir + f"context_model_{ep}.pth")
        print('saved model at ' + save_dir + f"context_model_{ep}.pth")
# load in pretrain model weights and set to eval mode
nn_model.load_state_dict(torch.load(f"{save_dir}/context_model_trained.pth", map_location=device))
nn_model.eval() 
print("Loaded in Context Model")
# sample with context using standard algorithm
@torch.no_grad()
def sample_ddpm_context(n_sample, context, save_rate=20):
    # x_T ~ N(0, 1), sample initial noise
    samples = torch.randn(n_sample, 3, height, height).to(device)  

    # array to keep track of generated steps for plotting
    intermediate = [] 
    for i in range(timesteps, 0, -1):
        print(f'sampling timestep {i:3d}', end='\r')

        # reshape time tensor
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)

        # sample some random noise to inject back in. For i = 1, don't add back in noise
        z = torch.randn_like(samples) if i > 1 else 0

        eps = nn_model(samples, t, c=context)    # predict noise e_(x_t,t, ctx)
        samples = denoise_add_noise(samples, i, eps, z)
        if i % save_rate==0 or i==timesteps or i<8:
            intermediate.append(samples.detach().cpu().numpy())

    intermediate = np.stack(intermediate)
    return samples, intermediate

Sampling with fully random context:

# visualize samples with randomly selected context
plt.clf()
ctx = F.one_hot(torch.randint(0, 5, (32,)), 5).to(device=device).float()
samples, intermediate = sample_ddpm_context(32, ctx)
animation_ddpm_context = plot_sample(intermediate,32,4,save_dir, "ani_run", None, save=False)
HTML(animation_ddpm_context.to_jshtml())


def show_images(imgs, nrow=2):
    _, axs = plt.subplots(nrow, imgs.shape[0] // nrow, figsize=(4,2 ))
    axs = axs.flatten()
    for img, ax in zip(imgs, axs):
        img = (img.permute(1, 2, 0).clip(-1, 1).detach().cpu().numpy() + 1) / 2
        ax.set_xticks([])
        ax.set_yticks([])
        ax.imshow(img)
    plt.show()
# user defined context
ctx = torch.tensor([
    # hero, non-hero, food, spell, side-facing
    [1,0,0,0,0],  
    [1,0,0,0,0],    
    [0,0,0,0,1],
    [0,0,0,0,1],    
    [0,1,0,0,0],
    [0,1,0,0,0],
    [0,0,1,0,0],
    [0,0,1,0,0],
]).float().to(device)
samples, _ = sample_ddpm_context(ctx.shape[0], ctx)
show_images(samples)

Floating-point context values can be used to blend different types of sprites:

# mix of defined context
ctx = torch.tensor([
    # hero, non-hero, food, spell, side-facing
    [1,0,0,0,0],      #human
    [1,0,0.6,0,0],    
    [0,0,0.6,0.4,0],  
    [1,0,0,0,1],  
    [1,1,0,0,0],
    [1,0,0,1,0]
]).float().to(device)
samples, _ = sample_ddpm_context(ctx.shape[0], ctx)
show_images(samples)


7. Speeding up

This section introduces a new sampling method, DDIM, which is more than ten times faster than DDPM.

The whole point of training the NN is to get more images quickly, but sampling is currently slow: it involves many timesteps (500 steps were needed in the earlier examples to get a good sample), and each timestep depends on the previous one, following a Markov chain. Fortunately there are many newer samplers that address this, which has always been a pain point of diffusion models.

Denoising Diffusion Implicit Models (DDIM) is one of them.

DDIM is faster because it can skip timesteps; it no longer relies on the Markov assumption. A Markov chain really only applies to a stochastic process, but DDIM removes the randomness from the sampling process, making it deterministic. What DDIM essentially does is predict a rough sketch of the final output and then refine it with the denoising process.
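In formula form, the DDIM step implemented by the denoise_ddim helper below first estimates the clean image and then maps it to an earlier timestep $t'$, with no fresh noise injected:

$$
\hat{x}_0 = \frac{x_t - \sqrt{1-\bar{\alpha}_t}\,\epsilon_\theta(x_t, t)}{\sqrt{\bar{\alpha}_t}}, \qquad
x_{t'} = \sqrt{\bar{\alpha}_{t'}}\,\hat{x}_0 + \sqrt{1-\bar{\alpha}_{t'}}\,\epsilon_\theta(x_t, t)
$$

Because the update is deterministic, $t'$ can be many steps earlier than $t$; in the code below, the step size is timesteps // n.
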
Below, DDIM is implemented and compared against DDPM:

# construct DDPM noise schedule
b_t = (beta2 - beta1) * torch.linspace(0, 1, timesteps + 1, device=device) + beta1
a_t = 1 - b_t
ab_t = torch.cumsum(a_t.log(), dim=0).exp()    
ab_t[0] = 1

# construct model
nn_model = ContextUnet(in_channels=3, n_feat=n_feat, n_cfeat=n_cfeat, height=height).to(device)
# define sampling function for DDIM   
# removes the noise using ddim
def denoise_ddim(x, t, t_prev, pred_noise):
    ab = ab_t[t]
    ab_prev = ab_t[t_prev]
    
    x0_pred = ab_prev.sqrt() / ab.sqrt() * (x - (1 - ab).sqrt() * pred_noise)
    dir_xt = (1 - ab_prev).sqrt() * pred_noise

    return x0_pred + dir_xt
# load in model weights and set to eval mode
nn_model.load_state_dict(torch.load(f"{save_dir}/model_31.pth", map_location=device))
nn_model.eval() 
print("Loaded in Model without context")
# sample quickly using DDIM
@torch.no_grad()
def sample_ddim(n_sample, n=20):
    # x_T ~ N(0, 1), sample initial noise
    samples = torch.randn(n_sample, 3, height, height).to(device)  

    # array to keep track of generated steps for plotting
    intermediate = [] 
    step_size = timesteps // n
    for i in range(timesteps, 0, -step_size):
        print(f'sampling timestep {i:3d}', end='\r')

        # reshape time tensor
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)

        eps = nn_model(samples, t)    # predict noise e_(x_t,t)
        samples = denoise_ddim(samples, i, i - step_size, eps)
        intermediate.append(samples.detach().cpu().numpy())

    intermediate = np.stack(intermediate)
    return samples, intermediate

DDIM samples without context:

# visualize samples
plt.clf()
samples, intermediate = sample_ddim(32, n=25)

animation_ddim = plot_sample(intermediate,32,4,save_dir, "ani_run", None, save=False)

HTML(animation_ddim.to_jshtml())


# load in model weights and set to eval mode
nn_model.load_state_dict(torch.load(f"{save_dir}/context_model_31.pth", map_location=device))
nn_model.eval() 
print("Loaded in Context Model")
# fast sampling algorithm with context
@torch.no_grad()
def sample_ddim_context(n_sample, context, n=20):
    # x_T ~ N(0, 1), sample initial noise
    samples = torch.randn(n_sample, 3, height, height).to(device)  

    # array to keep track of generated steps for plotting
    intermediate = [] 
    step_size = timesteps // n
    for i in range(timesteps, 0, -step_size):
        print(f'sampling timestep {i:3d}', end='\r')

        # reshape time tensor
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)

        eps = nn_model(samples, t, c=context)    # predict noise e_(x_t,t)
        samples = denoise_ddim(samples, i, i - step_size, eps)
        intermediate.append(samples.detach().cpu().numpy())

    intermediate = np.stack(intermediate)
    return samples, intermediate

DDIM samples with context:

# visualize samples
plt.clf()
ctx = F.one_hot(torch.randint(0, 5, (32,)), 5).to(device=device).float()
samples, intermediate = sample_ddim_context(32, ctx)

animation_ddpm_context = plot_sample(intermediate,32,4,save_dir, "ani_run", None, save=False)

HTML(animation_ddpm_context.to_jshtml())

Speed comparison:

For the speed comparison, the denoise_add_noise helper and the sample_ddpm function are the same as those defined in Section 3.
%timeit -r 1 sample_ddim(32, n=25)
%timeit -r 1 sample_ddpm(32, )
6 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
1min 58s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

With 500 or more sampling steps, DDPM gives better results; with fewer than 500 steps, DDIM is the better choice.

8. Summary

Diffusion models are not just for images: driven by prompts, they can also be used for textual inversion, inpainting, music generation, video generation, drug discovery, and more.

Stable Diffusion uses a method called latent diffusion, which operates on image embeddings rather than on the images themselves, making the process much more efficient. Other methods worth exploring include cross-attention text conditioning and classifier-free guidance, sketched below.
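As a pointer, here is a minimal sketch of classifier-free guidance at sampling time, assuming the context-conditioned nn_model trained above; the guidance scale guide_w and this particular combination are assumptions and are not part of the course code shown here:

```python
# classifier-free guidance: blend a conditional and an unconditional noise prediction
@torch.no_grad()
def predict_noise_cfg(samples, t, context, guide_w=2.0):
    eps_cond = nn_model(samples, t, c=context)                       # conditioned on the context
    eps_uncond = nn_model(samples, t, c=torch.zeros_like(context))   # context zeroed, as during masking in training
    return (1 + guide_w) * eps_cond - guide_w * eps_uncond           # push the prediction toward the context
```

A larger guide_w steers samples more strongly toward the requested class, usually at some cost in diversity; guide_w = 0 reduces to ordinary conditional sampling.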

Keywords: diffusion models, denoising, sampling, DDPM, DDIM, U-Net, conditioning the model on context.
