BatchNorm2d
Applies batch normalization over a 2D feature map: `mean` is the mean of the current batch and `std` is its standard deviation. Batch normalization maps inputs with different value ranges into the range of a standard normal distribution, shrinking the spread between data points and helping the model converge quickly. In essence, it reduces the absolute differences between samples without changing their relative differences: normalizing [1, 2, 3, 4], for example, changes the magnitudes of the numbers but not the order relations between them. It is generally recommended to follow a convolution layer with a batch normalization layer.
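To make the order-preservation point concrete, here is a minimal sketch (plain PyTorch, names are illustrative) that standardizes [1, 2, 3, 4] by hand: the values change, but their ordering does not.

```python
import torch

x = torch.tensor([1., 2., 3., 4.])
# standardize: subtract the batch mean, divide by the batch std
normed = (x - x.mean()) / torch.sqrt(x.var(unbiased=False) + 1e-5)
print(normed)  # values are now centered around 0
# the relative order of the elements is unchanged
print(torch.argsort(x).tolist() == torch.argsort(normed).tolist())  # True
```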
Formula

- Normalization formula (where $\epsilon$ is a small constant for numerical stability, $10^{-5}$ by default):
  $y = \dfrac{x - mean}{\sqrt{var + \epsilon}}$
- Running estimates of the global statistics, running_mean and running_var, updated as
  $x_{new} = (1 - momentum) \times x_{old} + momentum \times x_{t}$
  where $x_{new}$ is the running_mean/running_var after the update, $x_{old}$ is its value before the update, $x_{t}$ is the mean/var of the current batch, and momentum is a weighting factor, usually 0.1.
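As a sketch of what this exponential moving average does (illustrative numbers, momentum = 0.1): starting from the default running_mean of 0, repeatedly feeding batches whose mean happens to be 2.5 drives the running estimate toward 2.5.

```python
# exponential moving average update used for running_mean (momentum = 0.1)
momentum = 0.1
running_mean = 0.0   # default initial value
batch_mean = 2.5     # assume every batch happens to have this mean
for _ in range(50):
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
print(running_mean)  # close to 2.5 after many batches
```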
- Using BatchNorm2d in PyTorch:
  `batchnorm = torch.nn.BatchNorm2d(num_features=number_of_channels)`
  Changing the other parameters is not recommended.
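Following the earlier advice of placing batch normalization after a convolution, a typical block might look like this (a minimal sketch; the channel counts and input size are made up). Note that `num_features` must equal the `out_channels` of the preceding convolution.

```python
import torch
import torch.nn as nn

# a common conv -> batchnorm -> activation block
block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.BatchNorm2d(num_features=16),  # 16 = out_channels of the conv above
    nn.ReLU(),
)
out = block(torch.randn(8, 3, 32, 32))  # a batch of 8 RGB 32x32 images
print(out.shape)  # torch.Size([8, 16, 32, 32])
```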
Experimental verification of BatchNorm2d

- Verifying the normalization formula
```python
import torch
import torch.nn as nn

# a single sample with one channel: shape (N=1, C=1, H=2, W=2)
data = torch.tensor(
    [[[[1., 2.],
       [3., 4.]]]], dtype=torch.float32
)
batchnorm = nn.BatchNorm2d(num_features=1, momentum=0.1)
print('------------1--------------')
print("running_mean and running_var in the initial state")
print(batchnorm.running_mean)  # tensor([0.])
print(batchnorm.running_var)   # tensor([1.])
print('------------2--------------')
print("running_mean and running_var after data has been fed in")
test = batchnorm(data)
print(batchnorm.running_mean)
print(batchnorm.running_var)
print('batchNorm of data in training mode')
print(test)
print('manually computed batchNorm')
mean = torch.mean(data)
var = torch.var(data, unbiased=False)  # normalization uses the biased variance
print((data[0][0] - mean) / torch.sqrt(var + 1e-5))
```
Conclusion: in training mode, the mean and variance used for normalization are those of the current batch.
- Verifying the update formula for running_mean and running_var
```python
print('------------3--------------')
print("manually computed running_mean and running_var")
running_mean = torch.tensor(0.)  # default initial value
running_var = torch.tensor(1.)   # default initial value
# note: PyTorch updates running_var with the *unbiased* batch variance,
# even though the normalization itself uses the biased variance
running_mean = 0.9 * running_mean + 0.1 * torch.mean(data)
running_var = 0.9 * running_var + 0.1 * torch.var(data, unbiased=True)
print(running_mean)
print(running_var)
print('batchNorm of data in evaluation mode')
batchnorm.eval()  # the idiomatic way to set batchnorm.training = False
test = batchnorm(data)
print(test)
# Conclusions:
# running_mean = (1 - momentum) * running_mean + momentum * batch_mean
# running_var  = (1 - momentum) * running_var  + momentum * batch_var (unbiased)
```
running_mean and running_var affect only evaluation, never training; in evaluation mode the input is normalized with running_mean and running_var.
- The effect of setting track_running_stats=False
```python
print('------------4--------------')
print('track_running_stats=False: running_mean and running_var before feeding data')
batchnorm = nn.BatchNorm2d(num_features=1, momentum=0.1, track_running_stats=False)
print(batchnorm.running_mean)  # None: no running statistics are kept
print(batchnorm.running_var)   # None
print('------------5--------------')
print('track_running_stats=False: running_mean and running_var after feeding data')
test = batchnorm(data)
print(batchnorm.running_mean)
print(batchnorm.running_var)
print('------------6--------------')
print('track_running_stats=False: batchnorm of data in training mode')
print(test)
print('------------7--------------')
print('track_running_stats=False: batchnorm of data in evaluation mode')
batchnorm.eval()
test = batchnorm(data)
print(test)
# Conclusions:
# running_mean and running_var exist to normalize data at evaluation time;
# with track_running_stats=False they are never tracked, so even in
# evaluation mode the input is normalized with its own batch mean and std
```
In short: do not set track_running_stats to False.