[Python in Practice]: Recognizing Handwritten Digits with a Neural Network


🍉 CSDN 小墨&晓末: https://blog.csdn.net/jd1813346972

   About the author: first-year graduate student | statistics | practical tutorials
         Proficient in Python, Matlab, R and other mainstream programming tools
         More than ten national-level competition awards; has taken part in industry-funded research projects at the 100k and 400k RMB level

Contents

  • 1 Exploring the dataset
    • 1.1 Reading and displaying sample data
    • 1.2 Dataset size
    • 1.3 Building features and labels
    • 1.4 One-hot encoding
    • 1.5 Sample images
    • 1.6 Saving Python objects with pickle
  • 2 Building and training the neural network
    • 2.1 Reading the pickle file
    • 2.2 Core neural-network functions
    • 2.3 Defining the network model
    • 2.4 Training the model
      • 2.4.1 Predicted probabilities
      • 2.4.2 Training-set accuracy
      • 2.4.3 Test-set accuracy
      • 2.4.4 Training-set confusion matrix
      • 2.4.5 Per-digit precision
    • 2.5 Visualizing the results
      • 2.5.1 Accuracy per training epoch
      • 2.5.2 Accuracy plot over 30 epochs
  • 3 Improving the model
    • 3.1 Changing the number of hidden neurons
      • 3.1.1 Accuracy per training epoch
      • 3.1.2 Accuracy plot
    • 3.2 Changing the number of hidden layers
      • 3.2.1 Accuracy per training epoch
      • 3.2.2 Accuracy plot

This article is a hands-on Python walkthrough of recognizing the MNIST handwritten-digit dataset with a neural network. It covers saving and loading data with pickle, defining the key functions of the network model, and evaluating and visualizing the recognition results. A good exercise to bookmark and work through.

1 Exploring the dataset

1.1 Reading and displaying sample data

  Code:

import numpy as np
import matplotlib.pyplot as plt

image_size = 28 # width and length
num_of_different_labels = 10 #  i.e. 0, 1, 2, 3, ..., 9
image_pixels = image_size * image_size

train_data = np.loadtxt("D:\\mnist_train.csv", delimiter=",")
test_data = np.loadtxt("D:\\mnist_test.csv", delimiter=",") 
test_data[:10]  # first ten rows of the test set

  Output:

array([[7., 0., 0., ..., 0., 0., 0.],
       [2., 0., 0., ..., 0., 0., 0.],
       [1., 0., 0., ..., 0., 0., 0.],
       ...,
       [9., 0., 0., ..., 0., 0., 0.],
       [5., 0., 0., ..., 0., 0., 0.],
       [9., 0., 0., ..., 0., 0., 0.]])

1.2 Dataset size

  Code:

print(test_data.shape)
print(train_data.shape)

  Output:

(10000, 785)
(60000, 785)

  The MNIST training set contains 60,000 samples and the test set 10,000 samples, each with 785 columns: the label in the first column plus 28 × 28 = 784 pixel values.
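
As a quick sanity check (a minimal sketch, assuming train_data and test_data are the arrays loaded in 1.1), you can count how many samples of each digit the two splits contain:

import numpy as np

train_digits, train_counts = np.unique(train_data[:, 0].astype(int), return_counts=True)
test_digits, test_counts = np.unique(test_data[:, 0].astype(int), return_counts=True)
print(dict(zip(train_digits, train_counts)))  # samples per digit in the training set
print(dict(zip(test_digits, test_counts)))    # samples per digit in the test set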

1.3 Building features and labels

  Code:

## the first column is the class label
train_imgs = np.asfarray(train_data[:, 1:]) / 255
test_imgs = np.asfarray(test_data[:, 1:]) / 255 
train_labels = np.asfarray(train_data[:, :1])
test_labels = np.asfarray(test_data[:, :1])

1.4 One-hot encoding

  Code:

import numpy as np

label_range = np.arange(10)

for label in range(10):
    one_hot = (label_range == label).astype(int)
    print("label: ", label, " in one-hot representation: ", one_hot)


# convert the dataset labels to one-hot vectors

label_range = np.arange(num_of_different_labels)

train_labels_one_hot = (label_range == train_labels).astype(float)
test_labels_one_hot = (label_range == test_labels).astype(float)
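
The conversion above relies on NumPy broadcasting: train_labels has shape (60000, 1) and label_range has shape (10,), so the elementwise comparison broadcasts to a (60000, 10) array with exactly one 1.0 per row. A quick check (a sketch, reusing the arrays defined above):

print(train_labels.shape, label_range.shape)  # (60000, 1) (10,)
print(train_labels_one_hot.shape)             # (60000, 10)
print(train_labels[0], train_labels_one_hot[0])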

1.5 Sample images

  Code:

# examples: display the first ten training digits
for i in range(10):
    img = train_imgs[i].reshape((28,28))
    plt.imshow(img, cmap="Greys")
    plt.show()

  Output: (the first ten training digits are shown one by one as grayscale images.)
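
If you prefer a single figure, a small grid works as well (a sketch, reusing train_imgs and train_labels; the 2 × 5 layout is just a choice):

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 5, figsize=(8, 4))
for i, ax in enumerate(axes.flat):
    ax.imshow(train_imgs[i].reshape(28, 28), cmap="Greys")
    ax.set_title(str(int(train_labels[i][0])))  # show the true label above each digit
    ax.axis("off")
plt.tight_layout()
plt.show()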

1.6 Saving Python objects with pickle

Because reading the CSV files into memory is slow, we use the pickle package to save the Python objects (here, the NumPy arrays train_imgs, test_imgs, train_labels and test_labels) so that they can be reloaded quickly later.

  Code:

import pickle

with open("D:\\pickled_mnist.pkl", "bw") as fh:
    data = (train_imgs, 
            test_imgs, 
            train_labels,
            test_labels)
    pickle.dump(data, fh)
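
Since these objects are plain NumPy arrays, np.savez is an equally convenient alternative to pickle (a sketch; the .npz path below is a hypothetical choice):

import numpy as np

np.savez("D:\\pickled_mnist.npz",
         train_imgs=train_imgs, test_imgs=test_imgs,
         train_labels=train_labels, test_labels=test_labels)

arrays = np.load("D:\\pickled_mnist.npz")
print(arrays["train_imgs"].shape)  # (60000, 784)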

2 Building and training the neural network

2.1 Reading the pickle file

  Code:

import pickle

with open("D:\\19实验\\实验课大作业\\pickled_mnist.pkl", "br") as fh:
    data = pickle.load(fh)

train_imgs = data[0]
test_imgs = data[1]
train_labels = data[2]
test_labels = data[3]

label_range = np.arange(10)
train_labels_one_hot = (label_range == train_labels).astype(float)
test_labels_one_hot = (label_range == test_labels).astype(float)


image_size = 28 # width and length
num_of_different_labels = 10 #  i.e. 0, 1, 2, 3, ..., 9
image_pixels = image_size * image_size

2.2 Core neural-network functions

  Code:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.e ** -x)
## activation function
activation_function = sigmoid

from scipy.stats import truncnorm
## helper returning a truncated normal distribution, used to initialize the weights
def truncated_normal(mean=0, sd=1, low=0, upp=10):
    return truncnorm((low - mean) / sd, 
                     (upp - mean) / sd, 
                     loc=mean, 
                     scale=sd)
## build the neural network model
class NeuralNetwork:
    
    def __init__(self, 
                 num_of_in_nodes,      # number of input nodes
                 num_of_out_nodes,     # number of output nodes
                 num_of_hidden_nodes,  # number of hidden nodes
                 learning_rate):       # learning rate
        self.num_of_in_nodes = num_of_in_nodes
        self.num_of_out_nodes = num_of_out_nodes
        self.num_of_hidden_nodes = num_of_hidden_nodes
        self.learning_rate = learning_rate 
        self.create_weight_matrices()

    # the network has a single hidden layer
    def create_weight_matrices(self):
        # initialize the weight matrices of the neural network
        rad = 1 / np.sqrt(self.num_of_in_nodes)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)  # truncated normal distribution
        self.weight_1 = X.rvs((self.num_of_hidden_nodes, self.num_of_in_nodes))  # rvs: draw random samples from the distribution
        
        rad = 1 / np.sqrt(self.num_of_hidden_nodes)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
        self.weight_2 = X.rvs((self.num_of_out_nodes, self.num_of_hidden_nodes))  # rvs: draw random samples from the distribution
        
    
    def train(self, input_vector, target_vector):
        # input_vector and target_vector can be tuple, list or ndarray
        
        input_vector = np.array(input_vector, ndmin=2).T    # input as a column vector
        target_vector = np.array(target_vector, ndmin=2).T  # target as a column vector
        
        output_vector1 = np.dot(self.weight_1, input_vector)  # hidden-layer pre-activation
        output_hidden = activation_function(output_vector1)   # hidden-layer activation
        
        output_vector2 = np.dot(self.weight_2, output_hidden)  # output-layer pre-activation
        output_network = activation_function(output_vector2)   # output-layer activation
        
        # calculate output errors
        output_errors = target_vector - output_network
        
        # update the weights of the output layer
        tmp = output_errors * output_network * (1.0 - output_network)
        self.weight_2 += self.learning_rate  * np.dot(tmp, output_hidden.T)


        # calculate hidden errors
        hidden_errors = np.dot(self.weight_2.T, output_errors)
        
        # update the weights of the hidden layer
        tmp = hidden_errors * output_hidden * (1.0 - output_hidden)
        self.weight_1 += self.learning_rate * np.dot(tmp, input_vector.T)
        
    # forward pass, used for prediction
    def run(self, input_vector):
        # input_vector can be tuple, list or ndarray
        input_vector = np.array(input_vector, ndmin=2).T
        

        output_vector = np.dot(self.weight_1, input_vector)
        output_vector = activation_function(output_vector)
        
        output_vector = np.dot(self.weight_2, output_vector)
        output_vector = activation_function(output_vector)
    
        return output_vector

    # confusion matrix
    def confusion_matrix(self, data_array, labels):
        cm = np.zeros((10, 10), int)
        for i in range(len(data_array)):
            res = self.run(data_array[i])
            res_max = res.argmax()
            target = labels[i][0]
            cm[res_max, int(target)] += 1
        return cm

    # per-class precision
    def precision(self, label, confusion_matrix):
        col = confusion_matrix[:, label]
        return confusion_matrix[label, label] / col.sum()

    # evaluation: count correct and wrong predictions
    def evaluate(self, data, labels):
        corrects, wrongs = 0, 0
        for i in range(len(data)):
            res = self.run(data[i])
            res_max = res.argmax()
            if res_max == labels[i]:
                corrects += 1
            else:
                wrongs += 1
        return corrects, wrongs
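
One caveat about the sigmoid above: np.e ** -x overflows for large negative inputs, which is exactly the RuntimeWarning that shows up later in the two-hidden-layer run (section 3.2.1). If you want to avoid it, scipy.special.expit is a numerically stable drop-in replacement (a sketch, not part of the original code):

from scipy.special import expit

def sigmoid(x):
    return expit(x)  # equivalent to 1 / (1 + exp(-x)), but overflow-safe

activation_function = sigmoid

print(sigmoid(-1000.0), sigmoid(0.0), sigmoid(1000.0))  # 0.0 0.5 1.0, with no warning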

2.3 Defining the network model

  Code:

ANN = NeuralNetwork(num_of_in_nodes = image_pixels,  # number of input nodes
                    num_of_out_nodes = 10,           # number of output nodes
                    num_of_hidden_nodes = 100,       # number of hidden nodes
                    learning_rate = 0.1)             # learning rate
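
Before training, you can confirm the 784 → 100 → 10 architecture by inspecting the freshly initialised weight matrices (a sketch):

print(ANN.weight_1.shape)  # (100, 784): hidden nodes x input nodes
print(ANN.weight_2.shape)  # (10, 100):  output nodes x hidden nodes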

2.4 Training the model

2.4.1 Predicted probabilities

  Code:

for i in range(len(train_imgs)):
    ANN.train(train_imgs[i], train_labels_one_hot[i])


for i in range(20):
    res = ANN.run(test_imgs[i])
    print(test_labels[i], np.argmax(res), np.max(res))

  Output (columns: true label, predicted digit, and the network's output value for that digit; note the sigmoid outputs are scores between 0 and 1 rather than a normalized probability distribution):

[7.] 7 0.9992648448921
[2.] 2 0.9040034245332168
[1.] 1 0.9992201001324703
[0.] 0 0.9923701545281887
[4.] 4 0.989297708155559
[1.] 1 0.9984582148795715
[4.] 4 0.9957673752296046
[9.] 9 0.9889417895800644
[5.] 6 0.5009071817613537
[9.] 9 0.9879513019542627
[0.] 0 0.9932950902790246
[6.] 6 0.9387061553685657
[9.] 9 0.9962530965286298
[0.] 0 0.9974524110371016
[1.] 1 0.9991354417269441
[5.] 5 0.7607733657668813
[9.] 9 0.9968080255475414
[7.] 7 0.9967748204232602
[3.] 3 0.8820920415159276
[4.] 4 0.9978584850755227

2.4.2 Training-set accuracy

  Code:

corrects, wrongs = ANN.evaluate(train_imgs, train_labels)  # numbers of correct and wrong predictions on the training set
print("accuracy train: ", corrects / ( corrects + wrongs))  # training-set accuracy

  Output:

accuracy train:  0.9425333333333333

2.4.3 Test-set accuracy

  Code:

corrects, wrongs = ANN.evaluate(test_imgs, test_labels)
print("accuracy: test", corrects / ( corrects + wrongs))  # test-set accuracy

  Output:

accuracy: test 0.9412
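
Beyond the single accuracy number, it is often informative to look at a few of the test images the network gets wrong (a sketch, reusing ANN, test_imgs and test_labels; it stops after five examples):

import matplotlib.pyplot as plt

shown = 0
for i in range(len(test_imgs)):
    pred = int(ANN.run(test_imgs[i]).argmax())
    true = int(test_labels[i][0])
    if pred != true:
        plt.imshow(test_imgs[i].reshape(28, 28), cmap="Greys")
        plt.title("predicted {}, actual {}".format(pred, true))
        plt.show()
        shown += 1
        if shown == 5:
            break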

2.4.4 Training-set confusion matrix

  Code:

cm = ANN.confusion_matrix(train_imgs, train_labels)
print(cm)   # confusion matrix on the training set (rows: predicted digit, columns: true digit)

  Output:

[[5822    1   54   35   15   41   47   12   31   31]
 [   2 6638   62   31   17   24   21   64  163   14]
 [   6   19 5487   57   16    9    2   45   16    4]
 [   7   27   87 5773    3  130    3   16  148   67]
 [  11   11   68    8 5332   34   12   48   28   44]
 [  10    4    6   69    0 4952   34    5   32    5]
 [  31    5   53   19   49   96 5782    5   37    2]
 [   1    9   45   35    6    6    0 5812    5   28]
 [  20    9   70   32    9   37   15   11 5209    9]
 [  13   19   26   72  395   92    2  247  182 5745]]

2.4.5 Per-digit precision

  Code:

for i in range(10):
    print("digit: ", i, "precision: ", ANN.precision(i, cm))

  Output:

digit:  0 precision:  0.9829478304913051
digit:  1 precision:  0.9845743102936814
digit:  2 precision:  0.9209466263846928
digit:  3 precision:  0.9416082205186755
digit:  4 precision:  0.9127011297500855
digit:  5 precision:  0.9134845969378343
digit:  6 precision:  0.9770192632646164
digit:  7 precision:  0.9276935355147645
digit:  8 precision:  0.8902751666381815
digit:  9 precision:  0.9657085224407463
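
A note on terminology: the confusion matrix is indexed as cm[predicted, actual], so the precision() method divides the diagonal entry by a column sum, i.e. by the number of samples whose true label is that digit; in the usual terminology that quantity is the recall. Precision in the strict sense divides by the row sum instead. Both can be read off the same matrix (a sketch, reusing cm from 2.4.4):

for digit in range(10):
    tp = cm[digit, digit]
    recall = tp / cm[:, digit].sum()     # what precision() above computes
    precision = tp / cm[digit, :].sum()  # diagonal entry over the row sum
    print("digit:", digit, "recall:", round(recall, 4), "precision:", round(precision, 4))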

2.5 Visualizing the results

2.5.1 Accuracy per training epoch

  Code:

epochs = 30
train_acc=[]
test_acc=[]
NN = NeuralNetwork(num_of_in_nodes = image_pixels, 
                   num_of_out_nodes = 10, 
                   num_of_hidden_nodes = 100,
                   learning_rate = 0.1)

for epoch in range(epochs):  
    print("epoch: ", epoch)
    for i in range(len(train_imgs)):
        NN.train(train_imgs[i], 
                 train_labels_one_hot[i])
  
    corrects, wrongs = NN.evaluate(train_imgs, train_labels)
    print("accuracy train: ", corrects / ( corrects + wrongs))
    train_acc.append(corrects / ( corrects + wrongs))
    corrects, wrongs = NN.evaluate(test_imgs, test_labels)
    print("accuracy: test", corrects / ( corrects + wrongs))
    test_acc.append(corrects / ( corrects + wrongs))

  Output:

epoch:  0
accuracy train:  0.94455
accuracy: test 0.9422
epoch:  1
accuracy train:  0.9628
accuracy: test 0.9579
epoch:  2
accuracy train:  0.9699
accuracy: test 0.9637
epoch:  3
accuracy train:  0.9761166666666666
accuracy: test 0.9649
epoch:  4
accuracy train:  0.979
accuracy: test 0.9662
epoch:  5
accuracy train:  0.9820833333333333
accuracy: test 0.9679
epoch:  6
accuracy train:  0.9838166666666667
accuracy: test 0.9697
epoch:  7
accuracy train:  0.9845666666666667
accuracy: test 0.97
epoch:  8
accuracy train:  0.9855333333333334
accuracy: test 0.9703
epoch:  9
accuracy train:  0.9868166666666667
accuracy: test 0.97
epoch:  10
accuracy train:  0.9878166666666667
accuracy: test 0.9714
epoch:  11
accuracy train:  0.98845
accuracy: test 0.9716
epoch:  12
accuracy train:  0.98905
accuracy: test 0.9721
epoch:  13
accuracy train:  0.9898166666666667
accuracy: test 0.9723
epoch:  14
accuracy train:  0.9903
accuracy: test 0.9722
epoch:  15
accuracy train:  0.9907666666666667
accuracy: test 0.9719
epoch:  16
accuracy train:  0.9910833333333333
accuracy: test 0.9715
epoch:  17
accuracy train:  0.9918
accuracy: test 0.9714
epoch:  18
accuracy train:  0.9924166666666666
accuracy: test 0.971
epoch:  19
accuracy train:  0.99265
accuracy: test 0.9712
epoch:  20
accuracy train:  0.9932833333333333
accuracy: test 0.972
epoch:  21
accuracy train:  0.9939333333333333
accuracy: test 0.9716
epoch:  22
accuracy train:  0.9944333333333333
accuracy: test 0.972
epoch:  23
accuracy train:  0.9948
accuracy: test 0.9719
epoch:  24
accuracy train:  0.9950833333333333
accuracy: test 0.9718
epoch:  25
accuracy train:  0.9950833333333333
accuracy: test 0.9722
epoch:  26
accuracy train:  0.99525
accuracy: test 0.9725
epoch:  27
accuracy train:  0.9955833333333334
accuracy: test 0.972
epoch:  28
accuracy train:  0.9958166666666667
accuracy: test 0.9717
epoch:  29
accuracy train:  0.9962666666666666
accuracy: test 0.9717

2.5.2 Accuracy plot over 30 epochs

  Code:

# accuracy plot
# matplotlib does not display Chinese characters by default; set a Chinese font so the labels below render correctly
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'SimHei'
plt.rcParams['axes.unicode_minus'] = False   # also fix the rendering of minus signs on the axes

x = np.arange(1, 31, 1)

plt.title('迭代30次正确率')
plt.plot(x, train_acc, color='green', label='训练集')
plt.plot(x, test_acc, color='red', label='测试集')

plt.legend()  # show the legend
plt.show()

  Output: (line plot of training-set and test-set accuracy over the 30 epochs.)

3 Improving the model

3.1 Changing the number of hidden neurons

3.1.1 Accuracy per training epoch

  Code:

## change the number of hidden neurons to 50
epochs = 30
train_acc=[]
test_acc=[]
NN = NeuralNetwork(num_of_in_nodes = image_pixels, 
                   num_of_out_nodes = 10, 
                   num_of_hidden_nodes = 50,
                   learning_rate = 0.1)

for epoch in range(epochs):  
    print("epoch: ", epoch)
    for i in range(len(train_imgs)):
        NN.train(train_imgs[i], 
                 train_labels_one_hot[i])
  
    corrects, wrongs = NN.evaluate(train_imgs, train_labels)
    print("accuracy train: ", corrects / ( corrects + wrongs))
    train_acc.append(corrects / ( corrects + wrongs))
    corrects, wrongs = NN.evaluate(test_imgs, test_labels)
    print("accuracy: test", corrects / ( corrects + wrongs))
    test_acc.append(corrects / ( corrects + wrongs))

  Output:

epoch:  0
accuracy train:  0.93605
accuracy: test 0.935
epoch:  1
accuracy train:  0.95185
accuracy: test 0.9501
epoch:  2
accuracy train:  0.9570333333333333
accuracy: test 0.9526
epoch:  3
accuracy train:  0.9630833333333333
accuracy: test 0.9556
epoch:  4
accuracy train:  0.9640166666666666
accuracy: test 0.9556
epoch:  5
accuracy train:  0.9668333333333333
accuracy: test 0.957
epoch:  6
accuracy train:  0.96765
accuracy: test 0.957
epoch:  7
accuracy train:  0.9673166666666667
accuracy: test 0.9566
epoch:  8
accuracy train:  0.96875
accuracy: test 0.9559
epoch:  9
accuracy train:  0.97145
accuracy: test 0.957
epoch:  10
accuracy train:  0.974
accuracy: test 0.9579
epoch:  11
accuracy train:  0.9730666666666666
accuracy: test 0.9569
epoch:  12
accuracy train:  0.9730166666666666
accuracy: test 0.9581
epoch:  13
accuracy train:  0.9747666666666667
accuracy: test 0.959
epoch:  14
accuracy train:  0.9742166666666666
accuracy: test 0.9581
epoch:  15
accuracy train:  0.97615
accuracy: test 0.9596
epoch:  16
accuracy train:  0.9759
accuracy: test 0.9586
epoch:  17
accuracy train:  0.9773166666666666
accuracy: test 0.9596
epoch:  18
accuracy train:  0.9778833333333333
accuracy: test 0.9606
epoch:  19
accuracy train:  0.9789166666666667
accuracy: test 0.9589
epoch:  20
accuracy train:  0.9777333333333333
accuracy: test 0.9582
epoch:  21
accuracy train:  0.9774
accuracy: test 0.9573
epoch:  22
accuracy train:  0.9796166666666667
accuracy: test 0.9595
epoch:  23
accuracy train:  0.9792666666666666
accuracy: test 0.959
epoch:  24
accuracy train:  0.9804333333333334
accuracy: test 0.9591
epoch:  25
accuracy train:  0.9806
accuracy: test 0.9589
epoch:  26
accuracy train:  0.98105
accuracy: test 0.9596
epoch:  27
accuracy train:  0.9806833333333334
accuracy: test 0.9587
epoch:  28
accuracy train:  0.9809833333333333
accuracy: test 0.9595
epoch:  29
accuracy train:  0.9813333333333333
accuracy: test 0.9595

3.1.2 Accuracy plot

  Code:

# accuracy plot
# matplotlib does not display Chinese characters by default; set a Chinese font so the labels below render correctly
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'SimHei'
plt.rcParams['axes.unicode_minus'] = False   # also fix the rendering of minus signs on the axes

x = np.arange(1, 31, 1)

plt.title('神经元数量为50时正确率')
plt.plot(x, train_acc, color='green', label='训练集')
plt.plot(x, test_acc, color='red', label='测试集')

plt.legend()  # show the legend
plt.show()

  Output: (line plot of training-set and test-set accuracy with 50 hidden neurons.)
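
To compare several hidden-layer sizes without editing the constant by hand, a small sweep like the one below can be used (a sketch built on the single-hidden-layer NeuralNetwork class from section 2.2; each network is trained for only one epoch here to keep the run time manageable, and the sizes 25/50/100/200 are arbitrary choices):

for hidden in (25, 50, 100, 200):
    net = NeuralNetwork(num_of_in_nodes=image_pixels,
                        num_of_out_nodes=10,
                        num_of_hidden_nodes=hidden,
                        learning_rate=0.1)
    for i in range(len(train_imgs)):
        net.train(train_imgs[i], train_labels_one_hot[i])
    corrects, wrongs = net.evaluate(test_imgs, test_labels)
    print("hidden nodes:", hidden, "test accuracy:", corrects / (corrects + wrongs))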

3.2 Changing the number of hidden layers

3.2.1 Accuracy per training epoch

  Code:

# network with two hidden layers
class NeuralNetwork:
    
    def __init__(self, 
                 num_of_in_nodes,       # number of input nodes
                 num_of_out_nodes,      # number of output nodes
                 num_of_hidden_nodes1,  # number of nodes in the first hidden layer
                 num_of_hidden_nodes2,  # number of nodes in the second hidden layer
                 learning_rate):        # learning rate
        self.num_of_in_nodes = num_of_in_nodes
        self.num_of_out_nodes = num_of_out_nodes
        self.num_of_hidden_nodes1 = num_of_hidden_nodes1
        self.num_of_hidden_nodes2 = num_of_hidden_nodes2
        self.learning_rate = learning_rate 
        self.create_weight_matrices()

    def create_weight_matrices(self):
        # initialize the weight matrices of the neural network
        rad = 1 / np.sqrt(self.num_of_in_nodes)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)  # truncated normal distribution
        self.weight_1 = X.rvs((self.num_of_hidden_nodes1, self.num_of_in_nodes))  # rvs: draw random samples from the distribution
        
        rad = 1 / np.sqrt(self.num_of_hidden_nodes1)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
        self.weight_2 = X.rvs((self.num_of_hidden_nodes2, self.num_of_hidden_nodes1))
        
        rad = 1 / np.sqrt(self.num_of_hidden_nodes2)
        X = truncated_normal(mean=0, sd=1, low=-rad, upp=rad)
        self.weight_3 = X.rvs((self.num_of_out_nodes, self.num_of_hidden_nodes2))

    def train(self, input_vector, target_vector):
        # input_vector and target_vector can be tuple, list or ndarray
        
        input_vector = np.array(input_vector, ndmin=2).T    # input as a column vector
        target_vector = np.array(target_vector, ndmin=2).T  # target as a column vector
        
        output_vector1 = np.dot(self.weight_1, input_vector)   # first hidden layer pre-activation
        output_hidden1 = activation_function(output_vector1)   # first hidden layer activation
        
        output_vector2 = np.dot(self.weight_2, output_hidden1)  # second hidden layer pre-activation
        output_hidden2 = activation_function(output_vector2)    # second hidden layer activation
        
        output_vector3 = np.dot(self.weight_3, output_hidden2)  # output layer pre-activation
        output_network = activation_function(output_vector3)    # output layer activation
        
        
        # calculate output errors
        output_errors = target_vector - output_network
        
        # update the weights of the output layer
        tmp = output_errors * output_network * (1.0 - output_network)
        self.weight_3 += self.learning_rate  * np.dot(tmp, output_hidden2.T)
        
        # error propagated back to the second hidden layer
        hidden1_errors = np.dot(self.weight_3.T, output_errors)
        
        # update the weights between the two hidden layers
        tmp = hidden1_errors * output_hidden2 * (1.0 - output_hidden2)
        self.weight_2 += self.learning_rate  * np.dot(tmp, output_hidden1.T)


        # error propagated back to the first hidden layer
        hidden_errors = np.dot(self.weight_2.T, hidden1_errors)
        
        # update the weights of the first hidden layer
        tmp = hidden_errors * output_hidden1 * (1.0 - output_hidden1)
        self.weight_1 += self.learning_rate * np.dot(tmp, input_vector.T)
        
    # forward pass, used for prediction
    def run(self, input_vector):
        # input_vector can be tuple, list or ndarray
        input_vector = np.array(input_vector, ndmin=2).T
        

        output_vector = np.dot(self.weight_1, input_vector)
        output_vector = activation_function(output_vector)
        
        output_vector = np.dot(self.weight_2, output_vector)
        output_vector = activation_function(output_vector)
        
        output_vector = np.dot(self.weight_3, output_vector)
        output_vector = activation_function(output_vector)
        return output_vector

    # confusion matrix
    def confusion_matrix(self, data_array, labels):
        cm = np.zeros((10, 10), int)
        for i in range(len(data_array)):
            res = self.run(data_array[i])
            res_max = res.argmax()
            target = labels[i][0]
            cm[res_max, int(target)] += 1
        return cm

    # per-class precision
    def precision(self, label, confusion_matrix):
        col = confusion_matrix[:, label]
        return confusion_matrix[label, label] / col.sum()

    # evaluation: count correct and wrong predictions
    def evaluate(self, data, labels):
        corrects, wrongs = 0, 0
        for i in range(len(data)):
            res = self.run(data[i])
            res_max = res.argmax()
            if res_max == labels[i]:
                corrects += 1
            else:
                wrongs += 1
        return corrects, wrongs

## train for 30 epochs
epochs = 30
train_acc=[]
test_acc=[]
NN = NeuralNetwork(num_of_in_nodes = image_pixels, 
                   num_of_out_nodes = 10, 
                   num_of_hidden_nodes1 = 100,
                   num_of_hidden_nodes2 = 100,
                   learning_rate = 0.1)

for epoch in range(epochs):  
    print("epoch: ", epoch)
    for i in range(len(train_imgs)):
        NN.train(train_imgs[i], 
                 train_labels_one_hot[i])
  
    corrects, wrongs = NN.evaluate(train_imgs, train_labels)
    print("accuracy train: ", corrects / ( corrects + wrongs))
    train_acc.append(corrects / ( corrects + wrongs))
    corrects, wrongs = NN.evaluate(test_imgs, test_labels)
    print("accuracy: test", corrects / ( corrects + wrongs))
    test_acc.append(corrects / ( corrects + wrongs))

  Output:

epoch:  0
accuracy train:  0.8972333333333333
accuracy: test 0.9005
epoch:  1
accuracy train:  0.8891833333333333
accuracy: test 0.8936
epoch:  2
accuracy train:  0.9146833333333333
accuracy: test 0.9182
epoch:  3
D:\ananconda\lib\site-packages\ipykernel_launcher.py:5: RuntimeWarning: overflow encountered in power
  """
accuracy train:  0.8974833333333333
accuracy: test 0.894
epoch:  4
accuracy train:  0.8924166666666666
accuracy: test 0.8974
epoch:  5
accuracy train:  0.91295
accuracy: test 0.914
epoch:  6
accuracy train:  0.9191166666666667
accuracy: test 0.9205
epoch:  7
accuracy train:  0.9117666666666666
accuracy: test 0.9162
epoch:  8
accuracy train:  0.9220333333333334
accuracy: test 0.9222
epoch:  9
accuracy train:  0.9113833333333333
accuracy: test 0.9112
epoch:  10
accuracy train:  0.9134333333333333
accuracy: test 0.911
epoch:  11
accuracy train:  0.9112166666666667
accuracy: test 0.9103
epoch:  12
accuracy train:  0.914
accuracy: test 0.9126
epoch:  13
accuracy train:  0.9206833333333333
accuracy: test 0.9214
epoch:  14
accuracy train:  0.90945
accuracy: test 0.9073
epoch:  15
accuracy train:  0.9225166666666667
accuracy: test 0.9287
epoch:  16
accuracy train:  0.9226
accuracy: test 0.9205
epoch:  17
accuracy train:  0.9239833333333334
accuracy: test 0.9202
epoch:  18
accuracy train:  0.91925
accuracy: test 0.9191
epoch:  19
accuracy train:  0.9223166666666667
accuracy: test 0.92
epoch:  20
accuracy train:  0.9113
accuracy: test 0.9084
epoch:  21
accuracy train:  0.9241666666666667
accuracy: test 0.925
epoch:  22
accuracy train:  0.9236333333333333
accuracy: test 0.9239
epoch:  23
accuracy train:  0.9301166666666667
accuracy: test 0.9259
epoch:  24
accuracy train:  0.9195166666666666
accuracy: test 0.9186
epoch:  25
accuracy train:  0.9200833333333334
accuracy: test 0.9144
epoch:  26
accuracy train:  0.9204833333333333
accuracy: test 0.9186
epoch:  27
accuracy train:  0.9288666666666666
accuracy: test 0.9259
epoch:  28
accuracy train:  0.9293
accuracy: test 0.9282
epoch:  29
accuracy train:  0.9254666666666667
accuracy: test 0.9242

3.2.2 Accuracy plot

  Code:

# accuracy plot
# matplotlib does not display Chinese characters by default; set a Chinese font so the labels below render correctly
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['font.family'] = 'SimHei'
plt.rcParams['axes.unicode_minus'] = False   # also fix the rendering of minus signs on the axes

x = np.arange(1, 31, 1)

plt.title('隐藏层数为2时正确率')
plt.plot(x, train_acc, color='green', label='训练集')
plt.plot(x, test_acc, color='red', label='测试集')

plt.legend()  # show the legend
plt.show()

  Output: (line plot of training-set and test-set accuracy with two hidden layers.)
