Experiment Contents
Contents
1. Mini-batch gradient descent
2. Data processing
(1) Wrapping the dataset in a Dataset class
(2) Wrapping with DataLoader
3. Model construction
4. Completing the Runner class
5. Model training
Visualizing the training and validation loss
6. Model evaluation
7. Model prediction
Complete code
8. Comparison with Experiment 4, "Iris classification with Softmax regression", and my thoughts
Softmax classification (the multi-class version of logistic regression)
Feedforward neural network (FNN) classification
My take
9. Designing a suitable feedforward neural network for the MNIST handwritten-digit dataset
References
1. Mini-batch gradient descent
- Batch gradient descent (BGD): each iteration computes the gradient over the entire training set. (When the number of training samples N is large, the space complexity is high and every iteration is expensive.)
- Stochastic gradient descent (SGD): each iteration randomly samples a single example to compute the gradient. (It may converge to a local optimum.)
- Mini-batch stochastic gradient descent (mini-batch): a compromise between BGD and SGD. Each iteration draws a small random batch from the training set and performs a BGD-style update on that batch. (With batch_size = 1 it reduces to SGD; with batch_size = N it becomes BGD.) Updating once per batch greatly reduces the number of iterations needed to converge, while keeping the result close to what full gradient descent would reach. A minimal sketch of the three variants follows.
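To make the three variants concrete, here is a minimal sketch on a toy linear-regression problem. It is illustrative only, not part of the experiment; the toy data and the toy_step helper are my own assumptions.

import torch

# Toy data for illustration: y = 2x + noise
X = torch.randn(100, 1)
y = 2 * X + 0.1 * torch.randn(100, 1)
w = torch.zeros(1, requires_grad=True)

def toy_step(xb, yb, lr=0.1):
    # one gradient step on the batch (xb, yb)
    loss = ((xb * w - yb) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        w.sub_(lr * w.grad)  # in-place update of the weight
        w.grad.zero_()

toy_step(X, y)                    # BGD: the full dataset (batch_size = N)
i = torch.randint(0, len(X), (1,))
toy_step(X[i], y[i])              # SGD: one random sample (batch_size = 1)
idx = torch.randperm(len(X))[:16]
toy_step(X[idx], y[idx])          # mini-batch: a random batch (1 < batch_size < N)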
2. Data processing
For mini-batch gradient descent we need to feed the data in random batches, so we build a data iterator that fetches a batch of a specified size from the full dataset on each iteration.
How the iterator is implemented (see 飞桨AI Studio星河社区-人工智能学习与实训社区):
(1) Wrapping the dataset in a Dataset class
We write an IrisDataset class for reading the data, inheriting from torch.utils.data.Dataset:
- __getitem__: given an index, returns the corresponding single sample and its label (as a tuple).
- __len__: returns the size of the whole dataset, i.e. the Data Size mentioned above.
Required imports:
from torch.utils.data import DataLoader, Dataset
from sklearn.datasets import load_iris
import numpy as np
import torch
class IrisDataset(Dataset):
    # mode distinguishes the split: training set 'train', validation set 'dev', otherwise test
    def __init__(self, mode='train', num_train=120, num_dev=15):
        super(IrisDataset, self).__init__()
        # load_data loads the iris dataset;
        # shuffle=True randomly permutes the samples on loading
        X, y = load_data(shuffle=True)
        # ======================== split the dataset =============================
        # training set
        if mode == 'train':
            self.X, self.y = X[:num_train], y[:num_train]
        # validation set
        elif mode == 'dev':
            self.X, self.y = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
        # the remaining samples form the test set
        else:
            self.X, self.y = X[num_train + num_dev:], y[num_train + num_dev:]

    # fetch one sample: returns the input X and label y
    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

    # return the number of labels, i.e. the size of the dataset
    def __len__(self):
        return len(self.y)
# Load the dataset
def load_data(shuffle=True):
    """
    Load the iris data.
    Input:
        - shuffle: whether to shuffle the data (bool)
    Output:
        - X: feature data, shape=[150, 4]
        - y: label data, shape=[150]
    """
    # load the raw data
    iris = load_iris()
    X = np.array(iris.data, dtype=np.float32)
    # note: int64 is required for the labels
    y = np.array(iris.target, dtype=np.int64)
    X = torch.tensor(X)
    y = torch.tensor(y)
    # min-max normalization
    X_min, _ = torch.min(X, dim=0)
    X_max, _ = torch.max(X, dim=0)
    X = (X - X_min) / (X_max - X_min)
    # shuffle the data if requested
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]
    return X, y
# Fix the random seed for reproducibility
torch.random.manual_seed(12)
# Instantiate the training, validation, and test sets
train_dataset = IrisDataset(mode='train')
dev_dataset = IrisDataset(mode='dev')
test_dataset = IrisDataset(mode='test')
# Print the size of the training set
print("length of train set: ", len(train_dataset))
Output:
length of train set: 120
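As a quick sanity check (illustrative, not part of the original script), indexing the dataset calls __getitem__ and returns an (X, y) pair:

x0, y0 = train_dataset[0]
print(x0.shape, y0)  # torch.Size([4]) and a scalar label tensor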
See also: 【torch.utils.data】 Dataset和Dataloader的解读和使用.
Why inherit from Dataset? Dataset is an abstract class: any dataset class we write must subclass it and override the __getitem__ and __len__ methods.
(2) Wrapping with DataLoader
We need to access the dataset in batches. The DataLoader interface provides exactly this: it turns our custom dataset into an iterable object so we can visit it batch by batch.
# =========================== wrap with DataLoader =============================
# Wrapping the datasets in DataLoaders makes batching and shuffling easy
batch_size = 16  # batch size
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = DataLoader(dataset=dev_dataset, batch_size=batch_size, shuffle=False)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
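To verify the loaders (an illustrative check I added, not in the original script), we can pull one batch and inspect its shape:

for X_batch, y_batch in train_loader:
    print(X_batch.shape, y_batch.shape)  # torch.Size([16, 4]) torch.Size([16])
    break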
3. Model construction
We build a simple feedforward neural network for the iris classification experiment: 4 input neurons, 6 hidden neurons, and 3 output neurons:
# ================================ model construction ====================================
# A two-layer feedforward neural network
import torch.nn as nn
from torch.nn.init import normal_, constant_

class IrisSort(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(IrisSort, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        normal_(self.fc1.weight, mean=0, std=0.01)  # small random weights
        constant_(self.fc1.bias, val=1.0)           # constant bias
        self.fc2 = nn.Linear(hidden_size, output_size)
        normal_(self.fc2.weight, mean=0, std=0.01)
        constant_(self.fc2.bias, val=1.0)
        self.act = torch.sigmoid                    # sigmoid activation

    def forward(self, inputs):
        outputs = self.fc1(inputs)
        outputs = self.act(outputs)
        outputs = self.fc2(outputs)
        return outputs

fnn_model = IrisSort(input_size=4, hidden_size=6, output_size=3)
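A quick shape check (my own addition, with random tensors standing in for real data): a batch of 8 samples with 4 features should produce logits of shape [8, 3].

dummy = torch.rand(8, 4)
print(fnn_model(dummy).shape)  # torch.Size([8, 3])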
4. Completing the Runner class
Training uses automatic differentiation, loads batched data with DataLoader, and optimizes the parameters with mini-batch stochastic gradient descent.
- __init__(self, model, optimizer, metric, loss_fn, **kwargs): constructor; stores the model, optimizer, loss function, and metric, and initializes the lists that record losses and scores during training.
- train(self, train_loader, dev_loader=None, **kwargs): trains the model over several epochs, records the training loss, and periodically evaluates the model.
- evaluate(self, dev_loader, **kwargs): evaluates the trained model on the validation set to check how training went.
- predict(self, x, **kwargs): runs the trained model on a single piece of data.
- save_model(self, save_path): saves the model during and after training.
- load_model(self, model_path): loads model parameters from the given path.
# The completed Runner class
import torch

class RunnerV3(object):
    def __init__(self, model, optimizer, metric, loss_fn, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric  # used only for computing the evaluation metric
        # record how the metric evolves during training
        self.dev_scores = []
        # record how the loss evolves during training
        self.train_epoch_losses = []  # one loss recorded per epoch
        self.train_step_losses = []   # one loss recorded per step
        self.dev_losses = []
        # record the best score so far
        self.best_score = 0

    def train(self, train_loader, dev_loader=None, **kwargs):
        # switch the model to training mode
        self.model.train()
        # number of epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # logging frequency; defaults to 100 if not given
        log_steps = kwargs.get("log_steps", 100)
        # evaluation frequency
        eval_steps = kwargs.get("eval_steps", 0)
        # model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")
        custom_print_log = kwargs.get("custom_print_log", None)
        # total number of training steps
        num_training_steps = num_epochs * len(train_loader)
        if eval_steps:
            if self.metric is None:
                raise RuntimeError('Error: Metric can not be None!')
            if dev_loader is None:
                raise RuntimeError('Error: dev_loader can not be None!')
        # number of steps run so far
        global_step = 0
        # train for num_epochs epochs
        for epoch in range(num_epochs):
            # accumulate the training loss
            total_loss = 0
            for step, data in enumerate(train_loader):
                X, y = data
                # get the model predictions
                logits = self.model(X)
                loss = self.loss_fn(logits, y)  # mean reduction by default
                total_loss += loss.item()       # .item() avoids keeping the autograd graph alive
                # save the loss of every training step
                self.train_step_losses.append((global_step, loss.item()))
                if log_steps and global_step % log_steps == 0:
                    print(
                        f"[Train] epoch: {epoch}/{num_epochs}, step: {global_step}/{num_training_steps}, loss: {loss.item():.5f}")
                # backpropagate to compute the gradient of every parameter
                loss.backward()
                if custom_print_log:
                    custom_print_log(self)
                # mini-batch gradient descent parameter update
                self.optimizer.step()
                # reset the gradients
                self.optimizer.zero_grad()
                # check whether it is time to evaluate
                if eval_steps > 0 and global_step > 0 and \
                        (global_step % eval_steps == 0 or global_step == (num_training_steps - 1)):
                    dev_score, dev_loss = self.evaluate(dev_loader, global_step=global_step)
                    print(f"[Evaluate] dev score: {dev_score:.5f}, dev loss: {dev_loss:.5f}")
                    # switch the model back to training mode
                    self.model.train()
                    # if the current score is the best so far, save the model
                    if dev_score > self.best_score:
                        self.save_model(save_path)
                        print(
                            f"[Evaluate] best accuracy performance has been updated: {self.best_score:.5f} --> {dev_score:.5f}")
                        self.best_score = dev_score
                global_step += 1
            # average training loss of the current epoch
            trn_loss = total_loss / len(train_loader)
            # save the epoch-level training loss
            self.train_epoch_losses.append(trn_loss)
        print("[Train] Training done!")

    # evaluation: 'torch.no_grad()' disables gradient computation and storage
    @torch.no_grad()
    def evaluate(self, dev_loader, **kwargs):
        assert self.metric is not None
        # switch the model to evaluation mode
        self.model.eval()
        global_step = kwargs.get("global_step", -1)
        # accumulate the validation loss
        total_loss = 0
        # reset the metric
        self.metric.reset()
        # iterate over the validation batches
        for batch_id, data in enumerate(dev_loader):
            X, y = data
            # compute the model output
            logits = self.model(X)
            # compute the loss
            loss = self.loss_fn(logits, y).item()
            # accumulate the loss
            total_loss += loss
            # accumulate the metric
            self.metric.update(logits, y)
        dev_loss = (total_loss / len(dev_loader))
        dev_score = self.metric.accumulate()
        # record the validation loss
        if global_step != -1:
            self.dev_losses.append((global_step, dev_loss))
            self.dev_scores.append(dev_score)
        return dev_score, dev_loss

    # prediction: 'torch.no_grad()' disables gradient computation and storage
    @torch.no_grad()
    def predict(self, x, **kwargs):
        # switch the model to evaluation mode
        self.model.eval()
        # run the forward pass to get the predictions
        logits = self.model(x)
        return logits

    def save_model(self, save_path):
        torch.save(self.model.state_dict(), save_path)

    def load_model(self, model_path):
        model_state_dict = torch.load(model_path)
        self.model.load_state_dict(model_state_dict)
Because the parameters are optimized with mini-batch gradient descent, data enters the model batch by batch, so the evaluation metric is also computed per batch; to get the overall result for an epoch we have to accumulate the per-batch results. The Accuracy class below implements this.
# The Accuracy metric
class Accuracy(object):
    def __init__(self, is_logist=True):
        # number of correctly predicted samples
        self.num_correct = 0
        # total number of samples
        self.num_count = 0
        self.is_logist = is_logist

    def update(self, outputs, labels):
        # shape[1] == 1 means binary classification; shape[1] > 1 means multi-class
        if outputs.shape[1] == 1:  # binary
            outputs = torch.squeeze(outputs, dim=-1)
            if self.is_logist:
                # for logits, check whether the value is >= 0
                preds = (outputs >= 0).to(torch.float32)
            else:
                # for probabilities, predict class 1 when the value is >= 0.5, else class 0
                preds = (outputs >= 0.5).to(torch.float32)
        else:
            # multi-class: take the index of the largest logit as the class via 'torch.argmax'
            preds = torch.argmax(outputs, dim=1)
            labels = labels.long()  # make sure the labels are Long
        # count the correctly predicted samples in this batch
        labels = torch.squeeze(labels, dim=-1)
        batch_correct = torch.sum(preds == labels).item()
        batch_count = len(labels)
        # update num_correct and num_count
        self.num_correct += batch_correct
        self.num_count += batch_count

    def accumulate(self):
        # compute the overall metric from the accumulated counts
        if self.num_count == 0:
            return 0
        return self.num_correct / self.num_count

    def reset(self):
        # reset the correct and total counts
        self.num_correct = 0
        self.num_count = 0

    def __call__(self, outputs, labels):
        # update and return the accuracy in one call
        self.update(outputs, labels)
        return self.accumulate()

    def name(self):
        return "Accuracy"
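To see how the accumulation works, here is an illustrative use of Accuracy on two hypothetical batches (the tensors are made up for this example):

metric_demo = Accuracy(is_logist=True)
metric_demo.update(torch.tensor([[2.0, 0.1, 0.3], [0.2, 1.5, 0.1]]),
                   torch.tensor([0, 1]))  # both predictions correct
metric_demo.update(torch.tensor([[0.1, 0.2, 3.0]]),
                   torch.tensor([0]))     # one wrong prediction
print(metric_demo.accumulate())           # 2/3 ≈ 0.6667
metric_demo.reset()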
5. Model training
- Instantiate the RunnerV3 class with the training configuration, and train the model on the training and validation sets for 150 epochs. The model with the highest validation accuracy is saved as the best model.

# ===================================== model training ===============================
lr = 0.2  # learning rate
# the network: fnn_model was defined above
model = fnn_model
# SGD optimizer, given the model parameters
optimizer = SGD(model.parameters(), lr=lr)
# cross-entropy loss
loss_fn = nn.CrossEntropyLoss()
# evaluation metric: accuracy
metric = Accuracy(is_logist=True)
runner = RunnerV3(model, optimizer, metric, loss_fn)
# start training: 150 epochs, log every 100 steps, evaluate every 50 steps
log_steps = 100
eval_steps = 50
runner.train(train_loader, dev_loader,
             num_epochs=150, log_steps=log_steps, eval_steps=eval_steps,
             save_path="best_model.pdparams")
Output:
[Train] epoch: 0/150, step: 0/1200, loss: 1.09898
[Evaluate] dev score: 0.33333, dev loss: 1.09582
[Evaluate] best accuracy performance has been updated: 0.00000 --> 0.33333
[Train] epoch: 12/150, step: 100/1200, loss: 1.13891
[Evaluate] dev score: 0.46667, dev loss: 1.10749
[Evaluate] best accuracy performance has been updated: 0.33333 --> 0.46667
[Evaluate] dev score: 0.20000, dev loss: 1.10089
[Train] epoch: 25/150, step: 200/1200, loss: 1.10158
[Evaluate] dev score: 0.20000, dev loss: 1.12477
[Evaluate] dev score: 0.46667, dev loss: 1.09090
[Train] epoch: 37/150, step: 300/1200, loss: 1.09982
[Evaluate] dev score: 0.46667, dev loss: 1.07537
[Evaluate] dev score: 0.53333, dev loss: 1.04453
[Evaluate] best accuracy performance has been updated: 0.46667 --> 0.53333
[Train] epoch: 50/150, step: 400/1200, loss: 1.01054
[Evaluate] dev score: 1.00000, dev loss: 1.00635
[Evaluate] best accuracy performance has been updated: 0.53333 --> 1.00000
[Evaluate] dev score: 0.86667, dev loss: 0.86850
[Train] epoch: 62/150, step: 500/1200, loss: 0.63702
[Evaluate] dev score: 0.80000, dev loss: 0.66986
[Evaluate] dev score: 0.86667, dev loss: 0.57089
[Train] epoch: 75/150, step: 600/1200, loss: 0.56490
[Evaluate] dev score: 0.93333, dev loss: 0.52392
[Evaluate] dev score: 0.86667, dev loss: 0.45410
[Train] epoch: 87/150, step: 700/1200, loss: 0.41929
[Evaluate] dev score: 0.86667, dev loss: 0.46156
[Evaluate] dev score: 0.93333, dev loss: 0.41593
[Train] epoch: 100/150, step: 800/1200, loss: 0.41047
[Evaluate] dev score: 0.93333, dev loss: 0.40600
[Evaluate] dev score: 0.93333, dev loss: 0.37672
[Train] epoch: 112/150, step: 900/1200, loss: 0.42777
[Evaluate] dev score: 0.93333, dev loss: 0.34534
[Evaluate] dev score: 0.93333, dev loss: 0.33552
[Train] epoch: 125/150, step: 1000/1200, loss: 0.30734
[Evaluate] dev score: 0.93333, dev loss: 0.31958
[Evaluate] dev score: 0.93333, dev loss: 0.32091
[Train] epoch: 137/150, step: 1100/1200, loss: 0.28321
[Evaluate] dev score: 0.93333, dev loss: 0.28383
[Evaluate] dev score: 0.93333, dev loss: 0.27171
[Evaluate] dev score: 0.93333, dev loss: 0.25447
[Train] Training done!
Visualizing the training and validation loss
# ============== visualization ===================
# Plot the training/validation loss curves and the validation accuracy curve
def plot_training_loss_acc(runner, fig_name,
                           fig_size=(16, 6),
                           sample_step=20,  # sampling stride: keep every 20th point
                           loss_legend_loc="upper right",
                           acc_legend_loc="lower right",
                           train_color="#8E004D",
                           dev_color='#E20079',
                           fontsize='x-large',
                           train_linestyle="--",
                           dev_linestyle='-',
                           train_linewidth=2,
                           dev_linewidth=2):
    plt.figure(figsize=fig_size)
    # loss curves
    plt.subplot(1, 2, 1)
    train_items = runner.train_step_losses[::sample_step]
    train_steps = [x[0] for x in train_items]
    train_losses = [x[1] for x in train_items]
    plt.plot(train_steps, train_losses, color=train_color, linestyle=train_linestyle,
             linewidth=train_linewidth, label="Train loss", marker='o', markersize=4)
    if len(runner.dev_losses) > 0:
        dev_steps = [x[0] for x in runner.dev_losses]
        dev_losses = [x[1] for x in runner.dev_losses]
        plt.plot(dev_steps, dev_losses, color=dev_color, linestyle=dev_linestyle,
                 linewidth=dev_linewidth, label="Dev loss", marker='s', markersize=4)
    plt.title("Training and Validation Loss", fontsize=fontsize)
    plt.ylabel("Loss", fontsize=fontsize)
    plt.xlabel("Step", fontsize=fontsize)
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.legend(loc=loss_legend_loc, fontsize='x-large')
    # accuracy curve
    if len(runner.dev_scores) > 0:
        plt.subplot(1, 2, 2)
        plt.plot(dev_steps, runner.dev_scores,
                 color=dev_color, linestyle=dev_linestyle,
                 linewidth=dev_linewidth, label="Dev Accuracy", marker='^', markersize=4)
        plt.title("Validation Accuracy", fontsize=fontsize)
        plt.ylabel("Accuracy", fontsize=fontsize)
        plt.xlabel("Step", fontsize=fontsize)
        plt.grid(True, linestyle='--', alpha=0.7)
        plt.legend(loc=acc_legend_loc, fontsize='x-large')
    plt.tight_layout()  # adjust subplot spacing automatically
    plt.savefig(fig_name)  # save the figure
    plt.show()

plot_training_loss_acc(runner, 'fw-loss.pdf')  # draw the curves
The plots show the accuracy rising as training progresses while the loss falls.
6. Model evaluation
Evaluate the best model saved during training on the test data, and check its accuracy and loss on the test set.
# ================== model evaluation ==================
# load the best model
runner.load_model('best_model.pdparams')
# evaluate it
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
Output:
[Test] accuracy/loss: 1.0000/1.0183
(The loss is still high because the best model was saved the first time the dev score reached 1.00000, around step 400, where the dev loss was still about 1.006; the saved parameters predate the later drop in loss.)
7. Model prediction
Use the saved model to predict one sample from the test set and check the result.
# ============ model prediction ====================
# take the first batch of the test set
X, label = next(iter(test_loader))
logits = runner.predict(X)
pred_class = torch.argmax(logits[0]).item()
# print the true and predicted classes
print("The true category is {} and the predicted category is {}".format(label[0].item(), pred_class))
Output:
The true category is 2 and the predicted category is 2
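To read the prediction as class probabilities rather than raw logits (a small addition of mine, assuming torch.nn.functional is imported as F, as in the complete code below):

probs = F.softmax(logits[0], dim=-1)
print(probs, probs.sum())  # three probabilities that sum to 1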
Complete code
'''
@Author: lxy
@function: Classification of the Iris Dataset Based on FNN
@date: 2024/11/1
'''
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset
from sklearn.datasets import load_iris
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import SGD
from torch.nn.init import normal_, constant_
from Runner3 import RunnerV3
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
class IrisDataset(Dataset):
    # mode distinguishes the split: training set 'train', validation set 'dev', otherwise test
    def __init__(self, mode='train', num_train=120, num_dev=15):
        super(IrisDataset, self).__init__()
        # load_data loads the iris dataset;
        # shuffle=True randomly permutes the samples on loading
        X, y = load_data(shuffle=True)
        # ======================== split the dataset =============================
        # training set
        if mode == 'train':
            self.X, self.y = X[:num_train], y[:num_train]
        # validation set
        elif mode == 'dev':
            self.X, self.y = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]
        # the remaining samples form the test set
        else:
            self.X, self.y = X[num_train + num_dev:], y[num_train + num_dev:]

    # fetch one sample: returns the input X and label y
    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

    # return the number of labels, i.e. the size of the dataset
    def __len__(self):
        return len(self.y)
# Load the dataset
def load_data(shuffle=True):
    """
    Load the iris data.
    Input:
        - shuffle: whether to shuffle the data (bool)
    Output:
        - X: feature data, shape=[150, 4]
        - y: label data, shape=[150]
    """
    # load the raw data
    iris = load_iris()
    X = np.array(iris.data, dtype=np.float32)
    # note: int64 is required for the labels
    y = np.array(iris.target, dtype=np.int64)
    X = torch.tensor(X)
    y = torch.tensor(y)
    # min-max normalization
    X_min, _ = torch.min(X, dim=0)
    X_max, _ = torch.max(X, dim=0)
    X = (X - X_min) / (X_max - X_min)
    # shuffle the data if requested
    if shuffle:
        idx = torch.randperm(X.shape[0])
        X = X[idx]
        y = y[idx]
    return X, y
# Fix the random seed for reproducibility
torch.random.manual_seed(12)
# Instantiate the training, validation, and test sets
train_dataset = IrisDataset(mode='train')
dev_dataset = IrisDataset(mode='dev')
test_dataset = IrisDataset(mode='test')
# Print the size of the training set
print("length of train set: ", len(train_dataset))
# =========================== wrap with DataLoader =============================
# Wrapping the datasets in DataLoaders makes batching and shuffling easy
batch_size = 16  # batch size
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
dev_loader = DataLoader(dataset=dev_dataset, batch_size=batch_size, shuffle=False)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
# ================================ model construction ====================================
# A two-layer feedforward neural network
class IrisSort(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(IrisSort, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        normal_(self.fc1.weight, mean=0, std=0.01)  # small random weights
        constant_(self.fc1.bias, val=1.0)           # constant bias
        self.fc2 = nn.Linear(hidden_size, output_size)
        normal_(self.fc2.weight, mean=0, std=0.01)
        constant_(self.fc2.bias, val=1.0)
        self.act = torch.sigmoid                    # sigmoid activation

    def forward(self, inputs):
        outputs = self.fc1(inputs)
        outputs = self.act(outputs)
        outputs = self.fc2(outputs)
        return outputs

fnn_model = IrisSort(input_size=4, hidden_size=6, output_size=3)
# The Accuracy metric
class Accuracy(object):
    def __init__(self, is_logist=True):
        # number of correctly predicted samples
        self.num_correct = 0
        # total number of samples
        self.num_count = 0
        self.is_logist = is_logist

    def update(self, outputs, labels):
        # shape[1] == 1 means binary classification; shape[1] > 1 means multi-class
        if outputs.shape[1] == 1:  # binary
            outputs = torch.squeeze(outputs, dim=-1)
            if self.is_logist:
                # for logits, check whether the value is >= 0
                preds = (outputs >= 0).to(torch.float32)
            else:
                # for probabilities, predict class 1 when the value is >= 0.5, else class 0
                preds = (outputs >= 0.5).to(torch.float32)
        else:
            # multi-class: take the index of the largest logit as the class via 'torch.argmax'
            preds = torch.argmax(outputs, dim=1)
            labels = labels.long()  # make sure the labels are Long
        # count the correctly predicted samples in this batch
        labels = torch.squeeze(labels, dim=-1)
        batch_correct = torch.sum(preds == labels).item()
        batch_count = len(labels)
        # update num_correct and num_count
        self.num_correct += batch_correct
        self.num_count += batch_count

    def accumulate(self):
        # compute the overall metric from the accumulated counts
        if self.num_count == 0:
            return 0
        return self.num_correct / self.num_count

    def reset(self):
        # reset the correct and total counts
        self.num_correct = 0
        self.num_count = 0

    def __call__(self, outputs, labels):
        # update and return the accuracy in one call
        self.update(outputs, labels)
        return self.accumulate()

    def name(self):
        return "Accuracy"
# ===================================== model training ===============================
lr = 0.2  # learning rate
# the network: fnn_model was defined above
model = fnn_model
# SGD optimizer, given the model parameters
optimizer = SGD(model.parameters(), lr=lr)
# cross-entropy loss
loss_fn = nn.CrossEntropyLoss()
# evaluation metric: accuracy
metric = Accuracy(is_logist=True)
runner = RunnerV3(model, optimizer, metric, loss_fn)
# start training: 150 epochs, log every 100 steps, evaluate every 50 steps
log_steps = 100
eval_steps = 50
runner.train(train_loader, dev_loader,
             num_epochs=150, log_steps=log_steps, eval_steps=eval_steps,
             save_path="best_model.pdparams")
# ============== visualization ===================
# Plot the training/validation loss curves and the validation accuracy curve
def plot_training_loss_acc(runner, fig_name,
                           fig_size=(16, 6),
                           sample_step=20,  # sampling stride: keep every 20th point
                           loss_legend_loc="upper right",
                           acc_legend_loc="lower right",
                           train_color="#8E004D",
                           dev_color='#E20079',
                           fontsize='x-large',
                           train_linestyle="--",
                           dev_linestyle='-',
                           train_linewidth=2,
                           dev_linewidth=2):
    plt.figure(figsize=fig_size)
    # loss curves
    plt.subplot(1, 2, 1)
    train_items = runner.train_step_losses[::sample_step]
    train_steps = [x[0] for x in train_items]
    train_losses = [x[1] for x in train_items]
    plt.plot(train_steps, train_losses, color=train_color, linestyle=train_linestyle,
             linewidth=train_linewidth, label="Train loss", marker='o', markersize=4)
    if len(runner.dev_losses) > 0:
        dev_steps = [x[0] for x in runner.dev_losses]
        dev_losses = [x[1] for x in runner.dev_losses]
        plt.plot(dev_steps, dev_losses, color=dev_color, linestyle=dev_linestyle,
                 linewidth=dev_linewidth, label="Dev loss", marker='s', markersize=4)
    plt.title("Training and Validation Loss", fontsize=fontsize)
    plt.ylabel("Loss", fontsize=fontsize)
    plt.xlabel("Step", fontsize=fontsize)
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.legend(loc=loss_legend_loc, fontsize='x-large')
    # accuracy curve
    if len(runner.dev_scores) > 0:
        plt.subplot(1, 2, 2)
        plt.plot(dev_steps, runner.dev_scores,
                 color=dev_color, linestyle=dev_linestyle,
                 linewidth=dev_linewidth, label="Dev Accuracy", marker='^', markersize=4)
        plt.title("Validation Accuracy", fontsize=fontsize)
        plt.ylabel("Accuracy", fontsize=fontsize)
        plt.xlabel("Step", fontsize=fontsize)
        plt.grid(True, linestyle='--', alpha=0.7)
        plt.legend(loc=acc_legend_loc, fontsize='x-large')
    plt.tight_layout()  # adjust subplot spacing automatically
    plt.savefig(fig_name)  # save the figure
    plt.show()

plot_training_loss_acc(runner, 'fw-loss.pdf')  # draw the curves
# ================== model evaluation ==================
# load the best model
runner.load_model('best_model.pdparams')
# evaluate it
score, loss = runner.evaluate(test_loader)
print("[Test] accuracy/loss: {:.4f}/{:.4f}".format(score, loss))
# ============ model prediction ====================
# take the first batch of the test set
X, label = next(iter(test_loader))
logits = runner.predict(X)
pred_class = torch.argmax(logits[0]).item()
# print the true and predicted classes
print("The true category is {} and the predicted category is {}".format(label[0].item(), pred_class))
8. Comparison with Experiment 4, "Iris classification with Softmax regression", and my thoughts
Experiment 4: linear classification.
Softmax classification (the multi-class version of logistic regression)
A single linear layer followed by the Softmax function; the final output y is the probability distribution produced by Softmax. A minimal sketch follows.
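As a reference point, here is a minimal sketch of such a Softmax classifier (my reconstruction from the description above, not the actual Experiment 4 code):

import torch.nn as nn

softmax_model = nn.Linear(4, 3)    # 4 features -> 3 class scores, no hidden layer
criterion = nn.CrossEntropyLoss()  # applies log-softmax + NLL internally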
Feedforward neural network (FNN) classification
A feedforward network has multiple layers: an input layer, hidden layers, and an output layer. Each layer consists of neurons connected to the next layer through weights; with activation functions in the hidden layers, an FNN can learn nonlinear relationships in the data.
My take
On the iris task, Experiment 4 reported "[Test] score/loss: 0.7333/0.5928", while the feedforward network reaches an accuracy of 1.0000, far higher than the Softmax classifier, because it learns the nonlinear features in the data much better. The Softmax classifier has a simple structure and is best suited to small datasets and (nearly) linearly separable problems.
Each model has its pros and cons. Softmax classification cannot capture nonlinear features, but its structure is simple: there are no hidden layers to propagate gradients through and the optimization problem is convex, which also makes it usable in environments with limited computing resources. The FNN can learn complex nonlinear relationships and reach higher accuracy, but it is harder to train and needs more time and resources.
So my view is: although the FNN works well, on simple problems more traditional methods such as linear regression or SVMs can solve the task just as well while saving time and resources. An SVM, for example, can be fast and effective on the right problem, and then there is no need to reach for the slightly more complex FNN.
9. Designing a suitable feedforward neural network for the MNIST handwritten-digit dataset
This experiment is described in my other post, 基于前馈神经网络模型和卷积神经网络的MINIST数据集训练
(written earlier, on 10/25, and it happens to fit this assignment). A minimal sketch of the kind of network involved follows.
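For completeness, here is a minimal sketch of a feedforward network that suits MNIST (one reasonable configuration I would assume; the details and training loop are in the linked post):

import torch.nn as nn

mnist_fnn = nn.Sequential(
    nn.Flatten(),         # 28x28 image -> 784-dim vector
    nn.Linear(784, 256),  # hidden layer
    nn.ReLU(),
    nn.Linear(256, 10),   # 10 digit classes (logits)
)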
References
批量梯度下降法(BGD)、随机梯度下降法(SGD)和小批量梯度下降法(MBGD)
全梯度下降算法、随机梯度下降算法、小批量梯度下降算法、随机平均梯度下降算法、梯度下降算法总结
Pytorch的torch.utils.data中Dataset以及DataLoader等详解
激活函数、正向传播、反向传播及softmax分类器,一篇就够了! (strongly recommended; a very thorough summary)
机器学习、NLP面试中常考到的知识点和代码实现 (GitHub repository)
PyTorch的 nn.CrossEntropyLoss()报错 长整型int64的问题
loss训练记录 (a post summarizing several common training errors; it helped me fix many problems)
基于前馈神经网络模型和卷积神经网络的MINIST数据集训练 (my other post)