Machine Learning in Practice: MNIST Handwritten Digit (0-9) Recognition (Fully-Connected Neural Network)

  • Article links
  • About the MNIST dataset
  • Experiment
    • Environment
    • Imports
    • Loading the MNIST dataset
    • Building, training, and evaluating the network
    • Model tuning

Article links

Machine Learning: Supervised Learning (1) Linear Regression, Polynomial Regression, and Algorithm Optimization [Detailed Notes]
Machine Learning: Supervised Learning (2) Binary Logistic Regression
Machine Learning: Supervised Learning (3) Neural Network Fundamentals
Machine Learning in Practice: Predicting Used-Home Prices (Linear Regression)
Machine Learning in Practice: A Benign/Malignant Tumor Classifier (Binary Logistic Regression)

About the MNIST dataset

MNIST is one of the best-known and most widely used datasets in machine learning and computer vision. It is a large database of handwritten digits containing 70,000 images: 60,000 for training and 10,000 for testing. Each image is a 28x28-pixel grayscale image with pixel values ranging from 0 (black) to 255 (white), and each image carries a label from 0 to 9.

Before starting, to get a feel for this classic dataset, you can try the small quiz below and measure your own digit-recognition ability. Recognizing handwritten digits is trivial for humans, but because the images are low-resolution and some digits are written rather loosely, reaching 100% accuracy is still hard; experiments put average human accuracy at roughly 97.5%-98.5%. The quiz code:

import numpy as np
from tensorflow.keras.datasets import mnist
import matplotlib.pyplot as plt
from random import sample

# Load the MNIST test set
(_, _), (x_test, y_test) = mnist.load_data()

# Randomly pick 100 samples
indices = sample(range(len(x_test)), 100)

correct = 0
total = 100

for i, idx in enumerate(indices, 1):
    # Show the image
    plt.imshow(x_test[idx], cmap='gray')
    plt.axis('off')
    plt.show()

    # Ask the user
    user_answer = input(f"Question {i}/100: which digit is this? ").strip()

    # Check the answer (non-numeric input counts as wrong instead of crashing)
    if user_answer.isdigit() and int(user_answer) == y_test[idx]:
        correct += 1
        print("Correct!")
    else:
        print(f"Wrong. The answer is {y_test[idx]}")

    print(f"Running accuracy: {correct}/{i} ({correct/i*100:.2f}%)")

print(f"\nFinal accuracy: {correct}/{total} ({correct/total*100:.2f}%)")

Experiment

Environment

PyCharm + Jupyter Notebook

Imports

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import accuracy_score
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Sequential
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.callbacks import EarlyStopping

import matplotlib
matplotlib.rcParams['font.family'] = 'SimHei'  # or 'Microsoft YaHei'; only needed for CJK plot labels
matplotlib.rcParams['axes.unicode_minus'] = False  # render minus signs correctly with this font

Loading the MNIST dataset

Load the MNIST handwritten digits (both training and test sets):


from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

Check the shapes of the training and test sets:

print(f'x_train.shape:{x_train.shape}')
print(f'y_train.shape:{y_train.shape}')
print(f'x_test.shape:{x_test.shape}')
print(f'y_test.shape:{y_test.shape}')
x_train.shape:(60000, 28, 28)
y_train.shape:(60000,)
x_test.shape:(10000, 28, 28)
y_test.shape:(10000,)

View 64 handwritten digit images:

# Show 64 random training images
# Training-set size
m = x_train.shape[0]
# Create an 8x8 grid of subplots
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
# Draw one random image per subplot
for i, ax in enumerate(axes.flat):
    idx = np.random.randint(m)
    # imshow(): pass the pixel matrix; cmap='gray' renders it in grayscale
    ax.imshow(x_train[idx], cmap='gray')
    # Show the label as the subplot title
    ax.set_title(y_train[idx])
    # Hide the axes
    ax.axis('off')
# Tidy up spacing between subplots
plt.tight_layout()
plt.show()

(Not all 64 images are shown here for space reasons.)

Flatten each image's grayscale pixel matrix into a vector and normalize by dividing by 255 (0-255 -> 0-1):

x_train_flat=x_train.reshape(60000,28*28).astype('float32')/255
x_test_flat=x_test.reshape(10000,28*28).astype('float32')/255

Check the flattened shapes:

print(f'x_train.shape:{x_train_flat.shape}')
print(f'x_test.shape:{x_test_flat.shape}')
x_train.shape:(60000, 784)
x_test.shape:(10000, 784)

Building, training, and evaluating the network

Start with a three-layer fully-connected network: ReLU activations in the hidden layers, sparse categorical cross-entropy loss (with from_logits=True, which improves numerical stability during training), and the Adam adaptive optimizer (initial learning rate 0.001).

# Build the network
model1=Sequential(
    [
        Input(shape=(784,)),
        Dense(128,activation='relu',name='L1'),
        Dense(32,activation='relu',name='L2'),
        Dense(10,activation='linear',name='L3'),
    ],name='model1',
)
# Compile the model
model1.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001))
# Print a model summary
model1.summary()
Model: "model1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 L1 (Dense)                  (None, 128)               100480    
                                                                 
 L2 (Dense)                  (None, 32)                4128      
                                                                 
 L3 (Dense)                  (None, 10)                330       
                                                                 
=================================================================
Total params: 104,938
Trainable params: 104,938
Non-trainable params: 0

Fit model1 to the training set, initially for 20 epochs:

model1.fit(x_train_flat,y_train,epochs=20)
Epoch 1/20
1875/1875 [==============================] - 12s 5ms/step - loss: 0.2502
Epoch 2/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.1057
Epoch 3/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0748
Epoch 4/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0547
Epoch 5/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0438
Epoch 6/20
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0360
Epoch 7/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0300
Epoch 8/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0237
Epoch 9/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0223
Epoch 10/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0201
Epoch 11/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0166
Epoch 12/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0172
Epoch 13/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0131
Epoch 14/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0124
Epoch 15/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0133
Epoch 16/20
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0108
Epoch 17/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0095
Epoch 18/20
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0116
Epoch 19/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0090
Epoch 20/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0084

Inspect model1's predictions on the training set. Since the model outputs raw logits, pass them through softmax to get probability vectors, then take the index of the largest probability as the predicted digit:

# Inspect training predictions
z_train_hat = model1.predict(x_train_flat)
# Softmax turns the logits into a matrix of probability vectors
p_train_hat = tf.nn.softmax(z_train_hat).numpy()
# The index of the largest probability is the predicted digit
y_train_hat = np.argmax(p_train_hat, axis=1)
print(y_train_hat)

This post-processing can be wrapped in a function:

# Network logits -> predicted digits
def get_result(z):
    p = tf.nn.softmax(z)
    y = np.argmax(p, axis=1)
    return y
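As a sanity check on get_result, here is a NumPy-only sketch (with softmax reimplemented purely for illustration). Because softmax is monotonic, the argmax of the logits already equals the argmax of the probabilities, which is also why training directly on logits with from_logits=True loses nothing:

```python
import numpy as np

# Numerically stable softmax, reimplemented here for illustration only
def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract row max for stability
    return e / e.sum(axis=1, keepdims=True)

# Two made-up logit vectors for a 3-class problem
z = np.array([[2.0, 0.1, -1.0],
              [0.0, 3.0, 0.5]])
p = softmax(z)

print(p.sum(axis=1))         # each row sums to 1
print(np.argmax(z, axis=1))  # [0 1]
print(np.argmax(p, axis=1))  # [0 1] -- same predictions either way
```

So the softmax only matters when calibrated probabilities are needed; for the predicted class itself, the logits suffice.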

To understand this post-processing, look at the first sample's logits, probability vector, and predicted digit:

print(f'Logits:{z_train_hat[0]}')
print(f'Probabilities:{p_train_hat[0]}')
print(f'target:{y_train_hat[0]}')
Logits:[-21.427883  -11.558845  -15.150495   15.6205845 -58.351833   29.704205
 -23.925339  -30.009314  -11.389831  -14.521982 ]
Probabilities:[6.2175050e-23 1.2013921e-18 3.3101813e-20 7.6482343e-07 0.0000000e+00
 9.9999928e-01 5.1166414e-24 1.1661356e-26 1.4226123e-18 6.2059749e-20]
target:5

model1's training accuracy reaches 99.8%:

print(f'model1 training accuracy: {accuracy_score(y_train, y_train_hat)}')
model1 training accuracy: 0.998133

Testing model1: accuracy reaches 97.9%, which is quite respectable:

z_test_hat = model1.predict(x_test_flat)
y_test_hat = get_result(z_test_hat)
print(f'model1 test accuracy: {accuracy_score(y_test, y_test_hat)}')
313/313 [==============================] - 1s 3ms/step
model1 test accuracy: 0.9789

To streamline the later experiments, wrap the whole train-and-test procedure in a run_model function, and add early stopping: if the training loss does not improve for 10 consecutive epochs, training stops.

early_stopping = EarlyStopping(
    monitor='loss',
    patience=10,  # stop if training loss has not improved for 10 epochs
    restore_best_weights=True  # restore the best weights seen
)

def run_model(model, epochs):
    model.fit(x_train_flat, y_train, epochs=epochs, callbacks=[early_stopping])
    z_train_hat = model.predict(x_train_flat)
    y_train_hat = get_result(z_train_hat)
    print(f'{model.name} training accuracy: {accuracy_score(y_train, y_train_hat)}')

    z_test_hat = model.predict(x_test_flat)
    y_test_hat = get_result(z_test_hat)
    print(f'{model.name} test accuracy: {accuracy_score(y_test, y_test_hat)}')
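One caveat: the callback above monitors the training loss, which tends to keep creeping down, so it rarely triggers. A variant worth trying (an alternative, not what the runs below used) is to hold out part of the training data and stop on validation loss instead; the validation_split value here is an arbitrary choice:

```python
from tensorflow.keras.callbacks import EarlyStopping

# 'val_loss' only exists when model.fit receives validation data,
# e.g. via validation_split as sketched below
early_stopping_val = EarlyStopping(
    monitor='val_loss',
    patience=10,
    restore_best_weights=True,
)

# Usage sketch:
# model.fit(x_train_flat, y_train, epochs=epochs,
#           validation_split=0.1, callbacks=[early_stopping_val])
```

Stopping on held-out loss guards against overfitting directly, whereas stopping on training loss only guards against wasted epochs.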

Look at which images the model stumbles on:

# Show up to n misclassified images

def show_error_pic(x, y, y_pred, n=64):
    wrong_idx = (y != y_pred)

    # Misclassified images and their labels
    x_wrong = x[wrong_idx]
    y_wrong = y[wrong_idx]
    y_pred_wrong = y_pred[wrong_idx]

    # Keep the first n errors
    n = min(n, len(x_wrong))
    x_wrong = x_wrong[:n]
    y_wrong = y_wrong[:n]
    y_pred_wrong = y_pred_wrong[:n]

    # Grid layout
    rows = int(np.ceil(n / 8))
    fig, axes = plt.subplots(rows, 8, figsize=(20, 2.5 * rows))
    axes = axes.flatten()

    for i in range(n):
        ax = axes[i]
        ax.imshow(x_wrong[i].reshape(28, 28), cmap='gray')
        ax.set_title(f'True: {y_wrong[i]}, Pred: {y_pred_wrong[i]}')
        ax.axis('off')

    # Hide unused subplots
    for i in range(n, len(axes)):
        axes[i].axis('off')

    plt.tight_layout()
    plt.show()

show_error_pic(x_test, y_test, y_test_hat)

(Only part of the output is shown here for space reasons.)

Model tuning

Our first, fairly simple network already performs very well: 99.8% training accuracy and 97.9% test accuracy, against a human average of roughly 97.5%-98.5%. The gap between training and test accuracy suggests some degree of high variance, so we can try regularization or more data to improve the model; alternatively, we can try a larger network and see whether it reaches even better accuracy.

model2: same as model1, but with the epoch budget raised to 40.

# Build the network
model2=Sequential(
    [
        Input(shape=(784,)),
        Dense(128,activation='relu',name='L1'),
        Dense(32,activation='relu',name='L2'),
        Dense(10,activation='linear',name='L3'),
    ],name='model2',
)
# Compile the model
model2.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001))
# Print a model summary
model2.summary()

run_model(model2,40)
Model: "model2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 L1 (Dense)                  (None, 128)               100480    
                                                                 
 L2 (Dense)                  (None, 32)                4128      
                                                                 
 L3 (Dense)                  (None, 10)                330       
                                                                 
=================================================================
Total params: 104,938
Trainable params: 104,938
Non-trainable params: 0
_________________________________________________________________
Epoch 1/40
1875/1875 [==============================] - 10s 5ms/step - loss: 0.2670
Epoch 2/40
1875/1875 [==============================] - 10s 5ms/step - loss: 0.1124
Epoch 3/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0786
Epoch 4/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0593
Epoch 5/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0468
Epoch 6/40
1875/1875 [==============================] - 8s 5ms/step - loss: 0.0377
Epoch 7/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0310
Epoch 8/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0266
Epoch 9/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0246
Epoch 10/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0183
Epoch 11/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0180
Epoch 12/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0160
Epoch 13/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0170
Epoch 14/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0133
Epoch 15/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0135
Epoch 16/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0117
Epoch 17/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0108
Epoch 18/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0110
Epoch 19/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0107
Epoch 20/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0086
Epoch 21/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0096
Epoch 22/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0101
Epoch 23/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0083
Epoch 24/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0079
Epoch 25/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0095
Epoch 26/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0087
Epoch 27/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0063
Epoch 28/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0087
Epoch 29/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0080
Epoch 30/40
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0069
Epoch 31/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0053
Epoch 32/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0071
Epoch 33/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0056
Epoch 34/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0089
Epoch 35/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0062
Epoch 36/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0084
Epoch 37/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0051
Epoch 38/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0063
Epoch 39/40
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0074
Epoch 40/40
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0063
1875/1875 [==============================] - 5s 3ms/step
model2 training accuracy: 0.9984166666666666
313/313 [==============================] - 1s 3ms/step
model2 test accuracy: 0.98

Test accuracy rises to 98%, a slight improvement; but with the training time doubled, the payoff is modest.

model3: a wider and deeper network, trained for 20 epochs.

# A wider and deeper model
model3 = Sequential([
    Input(shape=(784,)),
    Dense(256, activation='relu', name='L1'),
    Dense(128, activation='relu', name='L2'),
    Dense(64, activation='relu', name='L3'),
    Dense(10, activation='linear', name='L4'),
], name='model3')

# Compile the model
model3.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001))
# Print a model summary
model3.summary()

run_model(model3,20)
Model: "model3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 L1 (Dense)                  (None, 256)               200960    
                                                                 
 L2 (Dense)                  (None, 128)               32896     
                                                                 
 L3 (Dense)                  (None, 64)                8256      
                                                                 
 L4 (Dense)                  (None, 10)                650       
                                                                 
=================================================================
Total params: 242,762
Trainable params: 242,762
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.2152
Epoch 2/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0908
Epoch 3/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0623
Epoch 4/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0496
Epoch 5/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0390
Epoch 6/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0341
Epoch 7/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0291
Epoch 8/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0244
Epoch 9/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0223
Epoch 10/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0187
Epoch 11/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0206
Epoch 12/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0145
Epoch 13/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0176
Epoch 14/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0153
Epoch 15/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0120
Epoch 16/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0148
Epoch 17/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0125
Epoch 18/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0123
Epoch 19/20
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0120
Epoch 20/20
1875/1875 [==============================] - 13s 7ms/step - loss: 0.0094
1875/1875 [==============================] - 6s 3ms/step
model3 training accuracy: 0.9989333333333333
313/313 [==============================] - 1s 4ms/step
model3 test accuracy: 0.9816

model3's training accuracy reaches 99.9%, and its test accuracy of 98.2% is the best so far.

model4: model1 plus Dropout layers for regularization.

# Dropout regularization
model4 = Sequential([
    Input(shape=(784,)),
    Dense(128, activation='relu', name='L1'),
    Dropout(0.3),
    Dense(64, activation='relu', name='L2'),
    Dropout(0.2),
    Dense(10, activation='linear', name='L3'),
], name='model4')

# Compile the model
model4.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001))
# Print a model summary
model4.summary()

run_model(model4,20)
Model: "model4"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 L1 (Dense)                  (None, 128)               100480    
                                                                 
 dropout_2 (Dropout)         (None, 128)               0         
                                                                 
 L2 (Dense)                  (None, 64)                8256      
                                                                 
 dropout_3 (Dropout)         (None, 64)                0         
                                                                 
 L3 (Dense)                  (None, 10)                650       
                                                                 
=================================================================
Total params: 109,386
Trainable params: 109,386
Non-trainable params: 0
_________________________________________________________________
Epoch 1/20
1875/1875 [==============================] - 15s 7ms/step - loss: 0.3686
Epoch 2/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.1855
Epoch 3/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.1475
Epoch 4/20
1875/1875 [==============================] - 17s 9ms/step - loss: 0.1289
Epoch 5/20
1875/1875 [==============================] - 20s 11ms/step - loss: 0.1124
Epoch 6/20
1875/1875 [==============================] - 19s 10ms/step - loss: 0.1053
Epoch 7/20
1875/1875 [==============================] - 22s 12ms/step - loss: 0.0976
Epoch 8/20
1875/1875 [==============================] - 15s 8ms/step - loss: 0.0907
Epoch 9/20
1875/1875 [==============================] - 12s 6ms/step - loss: 0.0861
Epoch 10/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0807
Epoch 11/20
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0794
Epoch 12/20
1875/1875 [==============================] - 11s 6ms/step - loss: 0.0744
Epoch 13/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0733
Epoch 14/20
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0734
Epoch 15/20
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0691
Epoch 16/20
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0656
Epoch 17/20
1875/1875 [==============================] - 11s 6ms/step - loss: 0.0674
Epoch 18/20
1875/1875 [==============================] - 12s 7ms/step - loss: 0.0614
Epoch 19/20
1875/1875 [==============================] - 11s 6ms/step - loss: 0.0601
Epoch 20/20
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0614
1875/1875 [==============================] - 5s 3ms/step
model4 training accuracy: 0.9951833333333333
313/313 [==============================] - 1s 2ms/step
model4 test accuracy: 0.98

model4's training accuracy drops to 99.5%, but its 98% test accuracy is slightly better than model1's: Dropout regularization really does reduce the model's variance and improve generalization.
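Note that the l2 regularizer imported at the top was never actually used. As an alternative (or complement) to Dropout, an L2 weight penalty can be attached per layer; here is a sketch on model1's architecture, where the 1e-4 coefficient is just a starting guess to tune:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy

# model1's architecture plus L2 weight penalties on the hidden layers
model_l2 = Sequential([
    Input(shape=(784,)),
    Dense(128, activation='relu', kernel_regularizer=l2(1e-4), name='L1'),
    Dense(32, activation='relu', kernel_regularizer=l2(1e-4), name='L2'),
    Dense(10, activation='linear', name='L3'),
], name='model_l2')

model_l2.compile(loss=SparseCategoricalCrossentropy(from_logits=True),
                 optimizer=Adam(learning_rate=0.001))

# The penalty changes the loss, not the parameter count
print(model_l2.count_params())  # 104938, same as model1
```

Unlike Dropout, L2 keeps the forward pass deterministic and instead shrinks the weights toward zero during training.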

Putting this together, we take model3's architecture, add Dropout regularization, and train for 40 epochs; call this model7.

# Final fully-connected network
model7 = Sequential([
    Input(shape=(784,)),
    Dense(256, activation='relu', name='L1'),
    Dropout(0.3),
    Dense(128, activation='relu', name='L2'),
    Dropout(0.2),
    Dense(64, activation='relu', name='L3'),
    Dropout(0.1),
    Dense(10, activation='linear', name='L4'),
], name='model7')

# Compile the model
model7.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001))
# Print a model summary
model7.summary()

run_model(model7,40)
Model: "model7"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 L1 (Dense)                  (None, 256)               200960    
                                                                 
 dropout_4 (Dropout)         (None, 256)               0         
                                                                 
 L2 (Dense)                  (None, 128)               32896     
                                                                 
 dropout_5 (Dropout)         (None, 128)               0         
                                                                 
 L3 (Dense)                  (None, 64)                8256      
                                                                 
 dropout_6 (Dropout)         (None, 64)                0         
                                                                 
 L4 (Dense)                  (None, 10)                650       
                                                                 
=================================================================
Total params: 242,762
Trainable params: 242,762
Non-trainable params: 0
_________________________________________________________________
Epoch 1/40
1875/1875 [==============================] - 16s 8ms/step - loss: 0.3174
Epoch 2/40
1875/1875 [==============================] - 14s 7ms/step - loss: 0.1572
Epoch 3/40
1875/1875 [==============================] - 16s 9ms/step - loss: 0.1255
Epoch 4/40
1875/1875 [==============================] - 23s 12ms/step - loss: 0.1047
Epoch 5/40
1875/1875 [==============================] - 19s 10ms/step - loss: 0.0935
Epoch 6/40
1875/1875 [==============================] - 30s 16ms/step - loss: 0.0839
Epoch 7/40
1875/1875 [==============================] - 20s 11ms/step - loss: 0.0776
Epoch 8/40
1875/1875 [==============================] - 21s 11ms/step - loss: 0.0728
Epoch 9/40
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0661
Epoch 10/40
1875/1875 [==============================] - 14s 8ms/step - loss: 0.0629
Epoch 11/40
1875/1875 [==============================] - 16s 8ms/step - loss: 0.0596
Epoch 12/40
1875/1875 [==============================] - 26s 14ms/step - loss: 0.0566
Epoch 13/40
1875/1875 [==============================] - 22s 12ms/step - loss: 0.0533
Epoch 14/40
1875/1875 [==============================] - 16s 8ms/step - loss: 0.0520
Epoch 15/40
1875/1875 [==============================] - 14s 7ms/step - loss: 0.0467
Epoch 16/40
1875/1875 [==============================] - 15s 8ms/step - loss: 0.0458
Epoch 17/40
1875/1875 [==============================] - 15s 8ms/step - loss: 0.0451
Epoch 18/40
1875/1875 [==============================] - 19s 10ms/step - loss: 0.0443
Epoch 19/40
1875/1875 [==============================] - 43s 23ms/step - loss: 0.0417
Epoch 20/40
1875/1875 [==============================] - 38s 20ms/step - loss: 0.0409
Epoch 21/40
1875/1875 [==============================] - 21s 11ms/step - loss: 0.0392
Epoch 22/40
1875/1875 [==============================] - 16s 9ms/step - loss: 0.0396
Epoch 23/40
1875/1875 [==============================] - 20s 11ms/step - loss: 0.0355
Epoch 24/40
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0368
Epoch 25/40
1875/1875 [==============================] - 18s 10ms/step - loss: 0.0359
Epoch 26/40
1875/1875 [==============================] - 18s 10ms/step - loss: 0.0356
Epoch 27/40
1875/1875 [==============================] - 16s 8ms/step - loss: 0.0360
Epoch 28/40
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0326
Epoch 29/40
1875/1875 [==============================] - 19s 10ms/step - loss: 0.0335
Epoch 30/40
1875/1875 [==============================] - 19s 10ms/step - loss: 0.0310
Epoch 31/40
1875/1875 [==============================] - 21s 11ms/step - loss: 0.0324
Epoch 32/40
1875/1875 [==============================] - 16s 9ms/step - loss: 0.0301
Epoch 33/40
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0303
Epoch 34/40
1875/1875 [==============================] - 15s 8ms/step - loss: 0.0319
Epoch 35/40
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0300
Epoch 36/40
1875/1875 [==============================] - 17s 9ms/step - loss: 0.0305
Epoch 37/40
1875/1875 [==============================] - 14s 7ms/step - loss: 0.0290
Epoch 38/40
1875/1875 [==============================] - 19s 10ms/step - loss: 0.0288
Epoch 39/40
1875/1875 [==============================] - 20s 11ms/step - loss: 0.0272
Epoch 40/40
1875/1875 [==============================] - 38s 20ms/step - loss: 0.0264
1875/1875 [==============================] - 18s 9ms/step
model7 training accuracy: 0.9984333333333333
313/313 [==============================] - 2s 5ms/step
model7 test accuracy: 0.9831

model7 reaches 99.8% training accuracy and 98.3% test accuracy, roughly 0.4 points better than model1's 97.9%.

This exercise was a follow-up to learning neural network fundamentals, which is why only fully-connected models were used. We know that CNNs are stronger at image recognition, so to finish, we build a small CNN and test it (architecture generated by GPT).

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten

model8 = Sequential([
    Input(shape=(28, 28, 1)),
    Conv2D(32, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, kernel_size=(3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(10, activation='linear')
], name='cnn_model')

# Compile the model
model8.compile(loss=SparseCategoricalCrossentropy(from_logits=True), optimizer=Adam(learning_rate=0.001))
# Print a model summary
model8.summary()

# The CNN expects a channel dimension, so reshape to (N, 28, 28, 1) and normalize
x_train_cnn = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
x_test_cnn = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255

model8.fit(x_train_cnn, y_train, epochs=20, callbacks=[early_stopping])
z_train_hat = model8.predict(x_train_cnn)
y_train_hat = get_result(z_train_hat)
print(f'{model8.name} training accuracy: {accuracy_score(y_train, y_train_hat)}')

z_test_hat = model8.predict(x_test_cnn)
y_test_hat = get_result(z_test_hat)
print(f'{model8.name} test accuracy: {accuracy_score(y_test, y_test_hat)}')

CNN results:
cnn_model training accuracy: 0.9982333333333333
cnn_model test accuracy: 0.9878
Test accuracy reaches 98.8%, outperforming all of the fully-connected networks above.
