Demonstrated on a 64-bit Windows 10 system.
1. Preface
(1) Inception V1
Inception, also known as GoogLeNet because it was developed by researchers at Google, is a deep learning model. Its defining feature is a "network in network" structure: many small networks are embedded inside one large network. Each small network has its own job and processes features at a different scale, and their outputs are then merged to form the final output. This structure lets Inception handle complex image recognition tasks more effectively.
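To make the idea of parallel branches concrete, here is a minimal sketch of an Inception-style block written with the Keras functional API (the branch widths are arbitrary values chosen for illustration, not the exact GoogLeNet configuration):
from tensorflow.python.keras.layers import Conv2D, MaxPool2D, Concatenate
def inception_block(x):
    # Parallel branches that look at the input with different receptive fields
    b1 = Conv2D(64, 1, padding='same', activation='relu')(x)  # 1x1 branch
    b3 = Conv2D(96, 3, padding='same', activation='relu')(x)  # 3x3 branch
    b5 = Conv2D(32, 5, padding='same', activation='relu')(x)  # 5x5 branch
    bp = MaxPool2D(3, strides=1, padding='same')(x)           # pooling branch
    bp = Conv2D(32, 1, padding='same', activation='relu')(bp)
    # Merge the branch outputs along the channel axis
    return Concatenate()([b1, b3, b5, bp])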
(2) Inception V2 and V3
These two versions introduced two important ideas: factorization and Batch Normalization. Factorization means splitting a large convolution kernel into several smaller ones, which reduces model complexity and improves computational efficiency. Batch Normalization is a technique that makes training more stable and faster.
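As a rough sketch of what factorization looks like (the filter count here is made up for illustration, not the exact Inception V2/V3 layers): one 5x5 convolution can be replaced by two stacked 3x3 convolutions that cover the same receptive field with fewer weights, with Batch Normalization inserted after each convolution:
from tensorflow.python.keras.layers import Conv2D, BatchNormalization, Activation
def factorized_5x5(x, filters=64):
    # Two 3x3 convolutions see the same 5x5 region as a single 5x5 convolution,
    # but need 2*3*3=18 weights per channel pair instead of 5*5=25
    x = Conv2D(filters, 3, padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)  # stabilizes and speeds up training
    x = Activation('relu')(x)
    x = Conv2D(filters, 3, padding='same', use_bias=False)(x)
    x = BatchNormalization()(x)
    return Activation('relu')(x)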
(3) The Inception V3 transfer model
Here we demonstrate Inception V3; conveniently, Keras ships a pretrained version of it, which saves us a lot of work.
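If you just want a quick look at the pretrained network before building anything, a small sketch (the 150x150 input size matches what we use below; with include_top=False any size of at least 75x75 is accepted):
from tensorflow.python.keras.applications.inception_v3 import InceptionV3
# Download the ImageNet-pretrained weights and drop the classification head
base = InceptionV3(weights='imagenet', include_top=False, input_shape=(150, 150, 3))
print(len(base.layers))  # number of layers in the convolutional base
base.summary()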
2. Inception V3 transfer learning in practice
We continue with the cat-vs-dog classification task: 5,011 cat images and 5,017 dog images, each class stored in its own folder.
(a) Import packages
from tensorflow import keras
import tensorflow as tf
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, MaxPool2D, Dropout, Activation, Reshape, Softmax, GlobalAveragePooling2D
from tensorflow.python.keras.layers.convolutional import Convolution2D, MaxPooling2D
from tensorflow.python.keras import Sequential
from tensorflow.python.keras import Model
from tensorflow.python.keras.optimizers import adam_v2
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator, image_dataset_from_directory
from tensorflow.python.keras.layers.preprocessing.image_preprocessing import RandomFlip, RandomRotation, RandomContrast, RandomZoom, RandomTranslation
import os,PIL,pathlib
import warnings
#Set up the GPU
gpus = tf.config.list_physical_devices("GPU")
if gpus:
gpu0 = gpus[0] #if there are multiple GPUs, use only the first one
tf.config.experimental.set_memory_growth(gpu0, True) #allocate GPU memory on demand
tf.config.set_visible_devices([gpu0],"GPU")
warnings.filterwarnings("ignore") #suppress warning messages
plt.rcParams['font.sans-serif'] = ['SimHei'] # use the SimHei font so CJK characters in plots display correctly
plt.rcParams['axes.unicode_minus'] = False # display minus signs correctly
(b) Load the dataset
#1. Load the data
data_dir = "./cat_dog"
data_dir = pathlib.Path(data_dir)
image_count = len(list(data_dir.glob('*/*')))
print("图片总数为:",image_count)
batch_size = 16
img_height = 150
img_width = 150
train_ds = image_dataset_from_directory(
data_dir,
validation_split=0.3,
subset="training",
seed=12,
image_size=(img_height, img_width),
batch_size=batch_size)
val_ds = image_dataset_from_directory(
data_dir,
validation_split=0.3,
subset="validation",
seed=12,
image_size=(img_height, img_width),
batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
#2. Inspect the data
for image_batch, labels_batch in train_ds:
print(image_batch.shape)
print(labels_batch.shape)
break
#3. Configure the data pipeline
AUTOTUNE = tf.data.AUTOTUNE
def train_preprocessing(image,label):
return (image/255.0,label)
train_ds = (
train_ds.cache()
.shuffle(1000)
.map(train_preprocessing)
.prefetch(buffer_size=AUTOTUNE)
)
val_ds = (
val_ds.cache()
.shuffle(1000)
.map(train_preprocessing)
.prefetch(buffer_size=AUTOTUNE)
)
#4. Visualize the data
plt.figure(figsize=(10, 8))
plt.suptitle("Data preview")
class_names = ["Dog","Cat"] #display names; note that this overrides the class_names read from the folders
for images, labels in train_ds.take(1):
for i in range(15):
plt.subplot(4, 5, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(images[i])
plt.xlabel(class_names[labels[i]-1]) #label 0 -> class_names[-1], label 1 -> class_names[0]; check against the printed train_ds.class_names
plt.show()
(c) Data augmentation
data_augmentation = Sequential([
RandomFlip("horizontal_and_vertical"),
RandomRotation(0.2),
#RandomContrast(1.0),
#RandomZoom(0.5,0.2),
#RandomTranslation(0.3,0.5),
])
def prepare(ds):
ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y), num_parallel_calls=AUTOTUNE)
return ds
train_ds = prepare(train_ds)
(d) Load Inception V3
#Model
from tensorflow.python.keras.applications.inception_v3 import InceptionV3
from tensorflow.python.keras.layers import BatchNormalization
#Load the pretrained InceptionV3 convolutional base (ImageNet weights, no classification head) and freeze it
base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(img_height, img_width, 3))
base_model.trainable = False
inputs = keras.Input(shape=(img_height, img_width, 3))
x = base_model(inputs, training=False) #run the frozen base in inference mode so its weights stay unchanged
#Global average pooling
x = GlobalAveragePooling2D()(x)
#BatchNormalization
x = BatchNormalization()(x)
#Dropout
x = Dropout(0.3)(x)
#Dense
x = Dense(512)(x)
#BatchNormalization
x = BatchNormalization()(x)
#Activation
x = Activation('relu')(x)
#Output layer
outputs = Dense(2)(x)
#BatchNormalization
outputs = BatchNormalization()(outputs)
#Activation
outputs = Activation('sigmoid')(outputs)
#Wrap everything into a Model
model = Model(inputs, outputs)
#Print the model structure
print(model.summary())
The model structure is then printed:
(e) Compile the model
#Define the optimizer
from tensorflow.python.keras.optimizers import adam_v2, rmsprop_v2
from tensorflow.python.keras.optimizer_v2.gradient_descent import SGD
optimizer = adam_v2.Adam()
#optimizer = SGD(learning_rate=0.001)
#optimizer = rmsprop_v2.RMSprop()
#Compile the model
model.compile(optimizer=optimizer,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
#Train the model
from tensorflow.python.keras.callbacks import ModelCheckpoint, Callback, EarlyStopping, ReduceLROnPlateau, LearningRateScheduler
NO_EPOCHS = 50
PATIENCE = 10
VERBOSE = 1
# Dynamic learning-rate schedule
annealer = LearningRateScheduler(lambda x: 1e-3 * 0.99 ** (x+NO_EPOCHS))
# Early stopping
earlystopper = EarlyStopping(monitor='loss', patience=PATIENCE, verbose=VERBOSE)
# Checkpoint the best model weights
checkpointer = ModelCheckpoint('best_model.h5',
monitor='val_accuracy',
verbose=VERBOSE,
save_best_only=True,
save_weights_only=True)
train_model = model.fit(train_ds,
epochs=NO_EPOCHS,
verbose=1,
validation_data=val_ds,
callbacks=[earlystopper, checkpointer, annealer])
To be fair, this model trains much faster than VGG19. Watching the run, though, the accuracy first climbs and then collapses:
Around epoch 20 the accuracy reached roughly 85%, and then it dropped to below 50%. I pasted the whole training log into GPT and asked what had happened:
GPT spotted the same pattern and suggested several possible causes and fixes; feel free to experiment with them. Being lazy, I simply use the best model saved during training:
The code is as follows:
#Define the optimizer
from tensorflow.python.keras.optimizers import adam_v2, rmsprop_v2
from tensorflow.python.keras.optimizer_v2.gradient_descent import SGD
optimizer = adam_v2.Adam()
#optimizer = SGD(learning_rate=0.001)
#optimizer = rmsprop_v2.RMSprop()
#Compile the model
model.compile(optimizer=optimizer,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
#Train the model
from tensorflow.python.keras.callbacks import ModelCheckpoint, Callback, EarlyStopping, ReduceLROnPlateau, LearningRateScheduler
NO_EPOCHS = 50
PATIENCE = 10
VERBOSE = 1
# Dynamic learning-rate schedule
annealer = LearningRateScheduler(lambda x: 1e-3 * 0.99 ** (x+NO_EPOCHS))
# Early stopping
earlystopper = EarlyStopping(monitor='loss', patience=PATIENCE, verbose=VERBOSE)
# Checkpoint the best model weights
checkpointer = ModelCheckpoint('cat_dog_jet_best_model_inceptionv3.h5',
monitor='val_accuracy',
verbose=VERBOSE,
save_best_only=True,
save_weights_only=True)
train_model = model.fit(train_ds,
epochs=NO_EPOCHS,
verbose=1,
validation_data=val_ds,
callbacks=[earlystopper, checkpointer, annealer])
# Load the best weights
model.load_weights('cat_dog_jet_best_model_inceptionv3.h5')
#Save the full model
model.save('cat_dog_jet_best_model_inceptionv3.h5')
print("The trained model has been saved.")
from tensorflow.python.keras.models import load_model
train_model=load_model('cat_dog_jet_best_model_inceptionv3.h5')
This reload step is not needed for step (f); you can go straight to step (g). (Note that the last line reassigns train_model and discards the training History, so run step (f) before it if you want the curves.)
(f) Visualizing accuracy and loss
import matplotlib.pyplot as plt
loss = train_model.history['loss']
acc = train_model.history['accuracy']
val_loss = train_model.history['val_loss']
val_acc = train_model.history['val_accuracy']
epoch = range(1, len(loss)+1)
fig, ax = plt.subplots(1, 2, figsize=(10,4))
ax[0].plot(epoch, loss, label='Train loss')
ax[0].plot(epoch, val_loss, label='Validation loss')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('Loss')
ax[0].legend()
ax[1].plot(epoch, acc, label='Train acc')
ax[1].plot(epoch, val_acc, label='Validation acc')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('Accuracy')
ax[1].legend()
plt.show()
Use these plots to see how training went:
Blue is the training set and orange the validation set. The validation set does best around epoch 14; everything after that is pretty dismal.
(g) Confusion matrix and model metrics
Nothing special here; it works the same way as for the earlier ML models:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.models import load_model
from matplotlib.pyplot import imshow
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
import pandas as pd
import math
# Define a function that plots the confusion matrix
def plot_cm(labels, predictions):
# Compute the confusion matrix
conf_numpy = confusion_matrix(labels, predictions)
# Convert it to a DataFrame
conf_df = pd.DataFrame(conf_numpy, index=class_names ,columns=class_names)
plt.figure(figsize=(8,7))
sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")
plt.title('Confusion matrix',fontsize=15)
plt.ylabel('True label',fontsize=14)
plt.xlabel('Predicted label',fontsize=14)
val_pre = []
val_label = []
for images, labels in val_ds:#you can use just part of the validation data (.take(1)) to build the confusion matrix
for image, label in zip(images, labels):
# add a batch dimension to the image
img_array = tf.expand_dims(image, 0)
# predict the image's class with the model
prediction = model.predict(img_array)
val_pre.append(np.argmax(prediction))
val_label.append(label)
plot_cm(val_label, val_pre)
cm_val = confusion_matrix(val_label, val_pre)
a_val = cm_val[0,0]
b_val = cm_val[0,1]
c_val = cm_val[1,0]
d_val = cm_val[1,1]
acc_val = (a_val+d_val)/(a_val+b_val+c_val+d_val) #accuracy: the number of correctly classified samples divided by all samples
error_rate_val = 1 - acc_val #error rate: the opposite of accuracy, the proportion of samples that are misclassified
sen_val = d_val/(d_val+c_val) #sensitivity: the proportion of positive samples classified correctly; measures how well the classifier recognizes positives
sep_val = a_val/(a_val+b_val) #specificity: the proportion of negative samples classified correctly; measures how well the classifier recognizes negatives
precision_val = d_val/(b_val+d_val) #precision: of the samples predicted positive, the proportion that are actually positive
F1_val = (2*precision_val*sen_val)/(precision_val+sen_val) #F1 score: precision and recall can pull in opposite directions, so they are combined; the most common way is the F-measure (F-score)
MCC_val = (d_val*a_val-b_val*c_val) / (math.sqrt((d_val+b_val)*(d_val+c_val)*(a_val+b_val)*(a_val+c_val))) #Matthews correlation coefficient (MCC): useful when the two classes are of very different sizes
print("验证集的灵敏度为:",sen_val,
"验证集的特异度为:",sep_val,
"验证集的准确率为:",acc_val,
"验证集的错误率为:",error_rate_val,
"验证集的精确度为:",precision_val,
"验证集的F1为:",F1_val,
"验证集的MCC为:",MCC_val)
train_pre = []
train_label = []
for images, labels in train_ds:#you can use just part of the training data (.take(1)) to build the confusion matrix
for image, label in zip(images, labels):
# add a batch dimension to the image
img_array = tf.expand_dims(image, 0)
# predict the image's class with the model
prediction = model.predict(img_array)
train_pre.append(np.argmax(prediction))
train_label.append(label)
plot_cm(train_label, train_pre)
cm_train = confusion_matrix(train_label, train_pre)
a_train = cm_train[0,0]
b_train = cm_train[0,1]
c_train = cm_train[1,0]
d_train = cm_train[1,1]
acc_train = (a_train+d_train)/(a_train+b_train+c_train+d_train)
error_rate_train = 1 - acc_train
sen_train = d_train/(d_train+c_train)
sep_train = a_train/(a_train+b_train)
precision_train = d_train/(b_train+d_train)
F1_train = (2*precision_train*sen_train)/(precision_train+sen_train)
MCC_train = (d_train*a_train-b_train*c_train) / (math.sqrt((d_train+b_train)*(d_train+c_train)*(a_train+b_train)*(a_train+c_train)))
print("训练集的灵敏度为:",sen_train,
"训练集的特异度为:",sep_train,
"训练集的准确率为:",acc_train,
"训练集的错误率为:",error_rate_train,
"训练集的精确度为:",precision_train,
"训练集的F1为:",F1_train,
"训练集的MCC为:",MCC_train)
Decent results? Not a chance:
It is a disaster!!!
Looking closer, the sensitivity is far too low, so I turned to GPT for help:
We have covered this remedy before: adjust the classification threshold.
The modified code is as follows:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.models import load_model
from matplotlib.pyplot import imshow
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
import pandas as pd
import math
# Define a function that plots the confusion matrix
def plot_cm(labels, predictions):
# Compute the confusion matrix
conf_numpy = confusion_matrix(labels, predictions)
# Convert it to a DataFrame
conf_df = pd.DataFrame(conf_numpy, index=class_names ,columns=class_names)
plt.figure(figsize=(8,7))
sns.heatmap(conf_df, annot=True, fmt="d", cmap="BuPu")
plt.title('Confusion matrix',fontsize=15)
plt.ylabel('True label',fontsize=14)
plt.xlabel('Predicted label',fontsize=14)
# Define the decision threshold
threshold = 0.3
val_pre = []
val_label = []
for images, labels in val_ds:#you can use just part of the validation data (.take(1)) to build the confusion matrix
for image, label in zip(images, labels):
# add a batch dimension to the image
img_array = tf.expand_dims(image, 0)
# predict the image's class with the model
prediction = model.predict(img_array)
# convert the probability into a label using the threshold
prediction = (prediction[:, 1] >= threshold).astype(int)
val_pre.append(prediction)
val_label.append(label)
plot_cm(val_label, val_pre)
cm_val = confusion_matrix(val_label, val_pre)
a_val = cm_val[0,0]
b_val = cm_val[0,1]
c_val = cm_val[1,0]
d_val = cm_val[1,1]
acc_val = (a_val+d_val)/(a_val+b_val+c_val+d_val) #accuracy: the number of correctly classified samples divided by all samples
error_rate_val = 1 - acc_val #error rate: the opposite of accuracy, the proportion of samples that are misclassified
sen_val = d_val/(d_val+c_val) #sensitivity: the proportion of positive samples classified correctly; measures how well the classifier recognizes positives
sep_val = a_val/(a_val+b_val) #specificity: the proportion of negative samples classified correctly; measures how well the classifier recognizes negatives
precision_val = d_val/(b_val+d_val) #precision: of the samples predicted positive, the proportion that are actually positive
F1_val = (2*precision_val*sen_val)/(precision_val+sen_val) #F1 score: precision and recall can pull in opposite directions, so they are combined; the most common way is the F-measure (F-score)
MCC_val = (d_val*a_val-b_val*c_val) / (math.sqrt((d_val+b_val)*(d_val+c_val)*(a_val+b_val)*(a_val+c_val))) #Matthews correlation coefficient (MCC): useful when the two classes are of very different sizes
print("验证集的灵敏度为:",sen_val,
"验证集的特异度为:",sep_val,
"验证集的准确率为:",acc_val,
"验证集的错误率为:",error_rate_val,
"验证集的精确度为:",precision_val,
"验证集的F1为:",F1_val,
"验证集的MCC为:",MCC_val)
train_pre = []
train_label = []
for images, labels in train_ds:#you can use just part of the training data (.take(1)) to build the confusion matrix
for image, label in zip(images, labels):
# add a batch dimension to the image
img_array = tf.expand_dims(image, 0)
# predict the image's class with the model
prediction = model.predict(img_array)
# convert the probability into a label using the threshold
prediction = (prediction[:, 1] >= threshold).astype(int)
train_pre.append(prediction)
train_label.append(label)
plot_cm(train_label, train_pre)
cm_train = confusion_matrix(train_label, train_pre)
a_train = cm_train[0,0]
b_train = cm_train[0,1]
c_train = cm_train[1,0]
d_train = cm_train[1,1]
acc_train = (a_train+d_train)/(a_train+b_train+c_train+d_train)
error_rate_train = 1 - acc_train
sen_train = d_train/(d_train+c_train)
sep_train = a_train/(a_train+b_train)
precision_train = d_train/(b_train+d_train)
F1_train = (2*precision_train*sen_train)/(precision_train+sen_train)
MCC_train = (d_train*a_train-b_train*c_train) / (math.sqrt((d_train+b_train)*(d_train+c_train)*(a_train+b_train)*(a_train+c_train)))
print("训练集的灵敏度为:",sen_train,
"训练集的特异度为:",sep_train,
"训练集的准确率为:",acc_train,
"训练集的错误率为:",error_rate_train,
"训练集的精确度为:",precision_train,
"训练集的F1为:",F1_train,
"训练集的MCC为:",MCC_train)
With the threshold changed to 0.3, the results now look just right:
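The 0.3 here was found by trial and error. If you want to pick the threshold more systematically, a small sketch like the following (it reuses the model and val_ds defined above; val_prob and val_true are new helper variables) collects the class-1 probabilities once and prints the sensitivity and specificity at several candidate thresholds:
# Collect the class-1 probabilities over the whole validation set
val_prob, val_true = [], []
for images, labels in val_ds:
    probs = model.predict(images)  # shape: (batch_size, 2)
    val_prob.extend(probs[:, 1])
    val_true.extend(labels.numpy())
val_prob = np.array(val_prob)
val_true = np.array(val_true)
# Sensitivity/specificity trade-off at a few candidate thresholds
for thr in [0.2, 0.3, 0.4, 0.5, 0.6]:
    pred = (val_prob >= thr).astype(int)
    cm = confusion_matrix(val_true, pred)
    sen = cm[1, 1] / (cm[1, 1] + cm[1, 0])  # sensitivity (recall of class 1)
    sep = cm[0, 0] / (cm[0, 0] + cm[0, 1])  # specificity (recall of class 0)
    print(f"threshold={thr:.1f}  sensitivity={sen:.3f}  specificity={sep:.3f}")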
(h) Plotting the ROC curve and AUC
from sklearn import metrics
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.python.keras.models import load_model
from matplotlib.pyplot import imshow
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns
import pandas as pd
import math
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = metrics.roc_curve(labels, predictions)
plt.plot(fp, tp, label=name, linewidth=2, **kwargs)
plt.plot([0, 1], [0, 1], color='orange', linestyle='--')
plt.xlabel('False positives rate')
plt.ylabel('True positives rate')
ax = plt.gca()
ax.set_aspect('equal')
val_pre_auc = []
val_label_auc = []
for images, labels in val_ds:
for image, label in zip(images, labels):
img_array = tf.expand_dims(image, 0)
prediction_auc = model.predict(img_array)
val_pre_auc.append((prediction_auc)[:,1])
val_label_auc.append(label)
auc_score_val = metrics.roc_auc_score(val_label_auc, val_pre_auc)
train_pre_auc = []
train_label_auc = []
for images, labels in train_ds:
for image, label in zip(images, labels):
img_array_train = tf.expand_dims(image, 0)
prediction_auc = model.predict(img_array_train)
train_pre_auc.append((prediction_auc)[:,1])#append probabilities, not labels!
train_label_auc.append(label)
auc_score_train = metrics.roc_auc_score(train_label_auc, train_pre_auc)
plot_roc('validation AUC: {0:.4f}'.format(auc_score_val), val_label_auc , val_pre_auc , color="red", linestyle='--')
plot_roc('training AUC: {0:.4f}'.format(auc_score_train), train_label_auc, train_pre_auc, color="blue", linestyle='--')
plt.legend(loc='lower right')
#plt.savefig("roc.pdf", dpi=300,format="pdf")
print("训练集的AUC值为:",auc_score_train, "验证集的AUC值为:",auc_score_val)
Note that adjusting the decision threshold does not change the ROC curve or the AUC: the ROC curve is traced over every possible threshold, so moving the operating threshold only changes where you sit on the curve.
3. Testing the model
Now that the model is built, it is time to try it on my own cat:
#Save the model
model.save('cat_dog_jet_best_model_inceptionv3.h5')
print("The trained model has been saved.")
#Test the model
from tensorflow.python.keras.models import load_model
from tensorflow.python.keras.preprocessing import image
from tensorflow.python.keras.preprocessing.image import img_to_array
from PIL import Image
import os, shutil, pathlib
label=np.array(["Dog","Cat"])#display labels for prediction indices 0 and 1; make sure this order matches the printed train_ds.class_names
#Load the model
model=load_model('cat_dog_jet_best_model_inceptionv3.h5')
#Load the test image
img=image.load_img('E:/ML/Deep Learning/laola.jpg')#set the path by hand and delete any hidden characters
plt.imshow(img)
plt.show()
img=img.resize((img_width,img_height))
img=img_to_array(img)
img=img/255#normalize pixel values to the 0-1 range
img=np.expand_dims(img,0)
print(img.shape)
# Run the prediction
predictions = model.predict(img)
threshold = 0.3
predicted_class = (predictions[0][1] >= threshold).astype(int)
# Print the predicted class
print(label[predicted_class])
Note that the code has changed here: the threshold is now 0.3.
4. Comparing Inception V3 and VGG19
5. Data
Link: https://pan.baidu.com/s/1iPiKFaMbIPwKC-dgkChO3w?pwd=y6c9
Extraction code: y6c9