Picking up from the previous day's session, we continue with the next step.
Building the Network
With the data prepared, we can move on to building the network. Following the DCGAN paper, all model weights should be randomly initialized from a normal distribution with mean 0 and sigma 0.02.
Next, let's walk through each component.
Generator
The generator G maps the latent vector z into data space. In practice, this is done with a series of Conv2dTranspose transposed-convolution layers, each paired with a BatchNorm2d layer and a ReLU activation; the output is passed through a tanh function to bring it into the range [-1, 1].
(Figure: sample images generated in the DCGAN paper.)
The generator structure in the code is shaped by nz, ngf, and nc, which were set in the input section: nz is the length of the latent vector z, ngf governs the size of the feature maps propagated through the generator, and nc is the number of channels in the output image.
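The code below assumes nz, ngf, nc (and later ndf for the discriminator) already exist from the earlier input section, which is not shown here. For reference, a minimal sketch with the default values used in the DCGAN paper (treat the exact numbers as assumptions carried over from the previous day):

```python
# Assumed hyperparameters (defined in the earlier input section; DCGAN-paper defaults)
nz = 100   # length of the latent vector z
ngf = 64   # base number of feature maps in the generator
ndf = 64   # base number of feature maps in the discriminator
nc = 3     # number of channels in the output image (RGB)
```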
Code implementation
import mindspore as ms
from mindspore import nn, ops
from mindspore.common.initializer import Normal

weight_init = Normal(mean=0, sigma=0.02)
gamma_init = Normal(mean=1, sigma=0.02)

class Generator(nn.Cell):
    """DCGAN generator network"""

    def __init__(self):
        super(Generator, self).__init__()
        self.generator = nn.SequentialCell(
            # latent vector z: (nz x 1 x 1) -> (ngf*8) x 4 x 4
            nn.Conv2dTranspose(nz, ngf * 8, 4, 1, 'valid', weight_init=weight_init),
            nn.BatchNorm2d(ngf * 8, gamma_init=gamma_init),
            nn.ReLU(),
            # (ngf*8) x 4 x 4 -> (ngf*4) x 8 x 8
            nn.Conv2dTranspose(ngf * 8, ngf * 4, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.BatchNorm2d(ngf * 4, gamma_init=gamma_init),
            nn.ReLU(),
            # (ngf*4) x 8 x 8 -> (ngf*2) x 16 x 16
            nn.Conv2dTranspose(ngf * 4, ngf * 2, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.BatchNorm2d(ngf * 2, gamma_init=gamma_init),
            nn.ReLU(),
            # (ngf*2) x 16 x 16 -> ngf x 32 x 32
            nn.Conv2dTranspose(ngf * 2, ngf, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.BatchNorm2d(ngf, gamma_init=gamma_init),
            nn.ReLU(),
            # ngf x 32 x 32 -> nc x 64 x 64, squashed into [-1, 1]
            nn.Conv2dTranspose(ngf, nc, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.Tanh()
        )

    def construct(self, x):
        return self.generator(x)

generator = Generator()
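As a sanity check on the architecture, the spatial size after each Conv2dTranspose can be verified with the standard formula out = (in - 1) * stride - 2 * pad + kernel. A quick sketch in plain Python, independent of MindSpore:

```python
def deconv_out(size, kernel, stride, pad):
    # Output spatial size of a transposed convolution (no output padding)
    return (size - 1) * stride - 2 * pad + kernel

size = 1  # the latent vector z is treated as a 1x1 "image" with nz channels
size = deconv_out(size, 4, 1, 0)  # 'valid' layer, stride 1 -> 4x4
for _ in range(4):                # four 'pad' layers: kernel 4, stride 2, pad 1
    size = deconv_out(size, 4, 2, 1)
print(size)  # 64
```

So the feature maps grow 1 → 4 → 8 → 16 → 32 → 64, and the generator emits nc x 64 x 64 images.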
Discriminator
The discriminator D is a binary-classification network that outputs the probability that a given image is real.
Code implementation
class Discriminator(nn.Cell):
    """DCGAN discriminator network"""

    def __init__(self):
        super(Discriminator, self).__init__()
        self.discriminator = nn.SequentialCell(
            nn.Conv2d(nc, ndf, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.BatchNorm2d(ndf * 2, gamma_init=gamma_init),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.BatchNorm2d(ndf * 4, gamma_init=gamma_init),
            nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 'pad', 1, weight_init=weight_init),
            nn.BatchNorm2d(ndf * 8, gamma_init=gamma_init),
            nn.LeakyReLU(0.2),
            # collapse 4x4 feature maps to a single logit per image
            nn.Conv2d(ndf * 8, 1, 4, 1, 'valid', weight_init=weight_init),
        )
        self.adv_layer = nn.Sigmoid()

    def construct(self, x):
        out = self.discriminator(x)
        out = out.reshape(out.shape[0], -1)
        return self.adv_layer(out)

discriminator = Discriminator()
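The discriminator mirrors the generator in reverse, and its spatial sizes can be checked the same way with the strided-convolution formula out = floor((in + 2 * pad - kernel) / stride) + 1. Again in plain Python:

```python
def conv_out(size, kernel, stride, pad):
    # Output spatial size of a strided convolution
    return (size + 2 * pad - kernel) // stride + 1

size = 64                        # input image: nc x 64 x 64
for _ in range(4):               # four 'pad' layers: kernel 4, stride 2, pad 1
    size = conv_out(size, 4, 2, 1)
size = conv_out(size, 4, 1, 0)   # final 'valid' layer collapses 4x4 -> 1x1
print(size)  # 1
```

The maps shrink 64 → 32 → 16 → 8 → 4 → 1, leaving one value per image that Sigmoid turns into a probability.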
With both networks defined, we move on to model training.
Model Training
Training involves several elements:
Loss function
With D and G defined, we will use BCELoss, the binary cross-entropy loss function defined in MindSpore.
Optimizer
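The forward functions below reference adversarial_loss, optimizer_G, and optimizer_D, which are not defined in this excerpt. A minimal sketch of how they could be set up, assuming the Adam settings recommended in the DCGAN paper (learning rate 0.0002, beta1 = 0.5):

```python
# Assumed setup: BCELoss plus one Adam optimizer per network (DCGAN-paper settings)
adversarial_loss = nn.BCELoss(reduction='mean')
optimizer_G = nn.Adam(generator.trainable_params(), learning_rate=0.0002, beta1=0.5)
optimizer_D = nn.Adam(discriminator.trainable_params(), learning_rate=0.0002, beta1=0.5)
```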
Training the model has two parts: training the discriminator and training the generator. The forward logic is implemented as follows:
def generator_forward(real_imgs, valid):
    # Sample noise as the generator's input
    z = ops.standard_normal((real_imgs.shape[0], nz, 1, 1))
    # Generate a batch of images
    gen_imgs = generator(z)
    # The loss measures the generator's ability to fool the discriminator
    g_loss = adversarial_loss(discriminator(gen_imgs), valid)
    return g_loss, gen_imgs

def discriminator_forward(real_imgs, gen_imgs, valid, fake):
    # Measures the discriminator's ability to tell real samples from generated ones
    real_loss = adversarial_loss(discriminator(real_imgs), valid)
    fake_loss = adversarial_loss(discriminator(gen_imgs), fake)
    d_loss = (real_loss + fake_loss) / 2
    return d_loss

grad_generator_fn = ms.value_and_grad(generator_forward, None,
                                      optimizer_G.parameters,
                                      has_aux=True)
grad_discriminator_fn = ms.value_and_grad(discriminator_forward, None,
                                          optimizer_D.parameters)

@ms.jit
def train_step(imgs):
    valid = ops.ones((imgs.shape[0], 1), ms.float32)
    fake = ops.zeros((imgs.shape[0], 1), ms.float32)

    (g_loss, gen_imgs), g_grads = grad_generator_fn(imgs, valid)
    optimizer_G(g_grads)
    d_loss, d_grads = grad_discriminator_fn(imgs, gen_imgs, valid, fake)
    optimizer_D(d_grads)

    return g_loss, d_loss, gen_imgs
Training code
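The training loop itself is omitted here. A minimal sketch of how train_step might be driven, assuming the dataset object built in the previous day's data-preparation step is named dataset and yields batches of real images (both names are assumptions):

```python
# Hypothetical training loop; `dataset` and `num_epochs` are assumed, not from the original
num_epochs = 10

for epoch in range(num_epochs):
    for i, (imgs,) in enumerate(dataset.create_tuple_iterator()):
        g_loss, d_loss, gen_imgs = train_step(imgs)
        if i % 100 == 0:
            print(f"epoch {epoch}  step {i}  "
                  f"G loss: {g_loss.asnumpy():.4f}  D loss: {d_loss.asnumpy():.4f}")
```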
I won't dwell on the results here; see the finished samples for yourself.
The check-in time is attached at the end of the post.