TensorFlow Case 2: Monkeypox Recognition, and a Loss-Function Bug

2024/10/18 23:52:10
  • 🍨 This post is a study log for the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊

    Table of Contents

    • 1. The bug
    • 2. Building the model
      • 1. Data preparation
        • 1. Imports
        • 2. Inspecting the data directory
        • 3. Loading the data
        • 4. Visualizing the data
      • 2. Memory optimization
      • 3. Model definition
      • 4. Model training
        • 1. Hyperparameter setup
        • 2. Training
      • 5. Results
      • 6. Predicting on a single image
      • 7. An attempted improvement

1. The bug

🤔 What happened:

I first used tf.keras.losses.BinaryCrossentropy(from_logits=False) as the loss function, but I never updated the output layer to match. For a binary classification task with that loss, the output layer should have a single neuron with a sigmoid activation. I did not change it, so the output layer still had 2 neurons, and my accuracy stayed stuck around 0.6. I swapped in quite a few different network architectures 😢 before finally realizing the loss function was the real problem: switching to the multi-class loss tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) fixed it 😄.
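A quick way to see why the fix works: a 1-unit sigmoid head and a 2-unit softmax head describe the same binary probability, so with 2 output neurons the multi-class loss is the mathematically matching choice. A minimal pure-Python sketch (the logit value is made up for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(zs):
    # numerically stable softmax over a list of logits
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

z = 1.7  # an arbitrary logit

p_sigmoid = sigmoid(z)            # 1-unit head: P(class 1)
p_softmax = softmax([0.0, z])[1]  # 2-unit head with logits [0, z]: P(class 1)

# sigmoid(z) == softmax([0, z])[1], so the two heads are interchangeable,
# as long as the loss function matches the head you actually built
assert abs(p_sigmoid - p_softmax) < 1e-12
```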


📖 Takeaways:

tf.keras.losses.BinaryCrossentropy

tf.keras.losses.BinaryCrossentropy(from_logits=False) is TensorFlow's loss function for binary classification tasks. It computes the binary cross-entropy loss, a measure of the gap between the model's predicted probabilities and the true labels.

  • from_logits=False

    • Default: False
    • Meaning: the model's output has already passed through an activation function (e.g. sigmoid), i.e. it is a probability between 0 and 1.
    • Effect: the loss function uses the model's output directly when computing the binary cross-entropy.

Binary cross-entropy formula

With from_logits=False, the binary cross-entropy loss is computed as:

loss = −(y · log(p) + (1 − y) · log(1 − p))

where:

  • y is the true label (0 or 1).
  • p is the model's predicted probability (between 0 and 1).
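The formula above can be checked numerically. A minimal pure-Python sketch (no TensorFlow; the probabilities are made up for illustration):

```python
import math

def bce(y, p):
    # loss = -(y*log(p) + (1-y)*log(1-p))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident correct prediction gives a small loss ...
loss_good = bce(1, 0.9)
# ... while a confident wrong prediction gives a large one.
loss_bad = bce(1, 0.1)

assert loss_good < loss_bad
```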

tf.keras.losses.SparseCategoricalCrossentropy

tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) is TensorFlow's loss function for multi-class classification. It computes the sparse categorical cross-entropy loss and is meant for integer labels (rather than one-hot encodings).

Parameters

  • from_logits=True
    • Default: False
    • Meaning: the model's output is the raw, un-activated value (i.e. logits).
    • Effect: the loss function first applies a softmax to the outputs internally, then computes the categorical cross-entropy.

Categorical cross-entropy formula

With from_logits=True, the categorical cross-entropy loss is computed as:

loss = −∑ᵢ yᵢ · log(softmax(z)ᵢ)

where:

  • yᵢ is the true label (with integer labels, yᵢ acts as a one-hot indicator of the true class index).
  • zᵢ is the i-th raw model output (a logit, before any activation).
  • softmax(z)ᵢ is the probability assigned to class i after the softmax activation.

Categorical cross-entropy

is a widely used loss function, especially for multi-class tasks. It measures the difference between the model's predicted probability distribution and the true labels.
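A pure-Python sketch of what this loss computes internally (softmax, then the negative log-probability of the true class); the logits below are made up for illustration:

```python
import math

def softmax(zs):
    # numerically stable softmax over a list of logits
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_ce(logits, true_class):
    # cross-entropy with an integer label: -log of the true class's probability
    probs = softmax(logits)
    return -math.log(probs[true_class])

logits = [2.0, 0.5]  # raw 2-class model outputs (from_logits=True)
loss = sparse_ce(logits, true_class=0)

# predicting the class with the larger logit gives the smaller loss
assert loss < sparse_ce(logits, true_class=1)
```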

2. Building the model

1. Data preparation

1. Imports

import tensorflow as tf 
from tensorflow.keras import datasets, models, layers 
import numpy as np 

# Check whether a GPU is available
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # if there are several GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

gpus

Output:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Inspecting the data directory

import os, PIL, pathlib 

data_dir = './data/'
data_dir = pathlib.Path(data_dir)  # convert to a pathlib.Path

data_paths = data_dir.glob('*')  # list the entries under the directory
classnames = [path.name for path in data_paths]  # subdirectory names are the class names
classnames

Output:

['Monkeypox', 'Others']
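A side note (not from the original post): extracting the class name with str(path).split('/')[1] only works with '/' separators, so it breaks on Windows; pathlib's .name attribute is separator-independent:

```python
from pathlib import PurePosixPath, PureWindowsPath

posix = PurePosixPath('data/Monkeypox')
windows = PureWindowsPath('data\\Monkeypox')

# .name gives the final component regardless of the platform's separator
assert posix.name == 'Monkeypox'
assert windows.name == 'Monkeypox'

# the string-splitting trick fails on a Windows-style path
assert str(windows).split('/')[-1] != 'Monkeypox'
```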

3. Loading the data

batch_size = 32 
heights = 224
widths = 224 

# Training set
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    validation_split=0.2,
    batch_size=batch_size,
    image_size=(heights, widths),  # Keras expects (height, width)
    subset='training',
    seed=42,
    shuffle=True
)

# Validation set
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    validation_split=0.2,
    batch_size=batch_size,
    image_size=(heights, widths),
    subset='validation',
    seed=42,
    shuffle=True
)

Output:

Found 2142 files belonging to 2 classes.
Using 1714 files for training.
Found 2142 files belonging to 2 classes.
Using 428 files for validation.
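The reported 1714/428 split can be sanity-checked by hand. A sketch of the arithmetic (assuming Keras reserves int(total * validation_split) files for validation, which matches the output above):

```python
total_files = 2142
validation_split = 0.2

# 2142 * 0.2 = 428.4, truncated to 428 validation files
n_val = int(total_files * validation_split)
n_train = total_files - n_val

assert (n_train, n_val) == (1714, 428)
```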

4. Visualizing the data

import matplotlib.pyplot as plt 

plt.figure(figsize=(20, 10))
for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)
        
        plt.imshow(images[i].numpy().astype('uint8'))
        plt.title(classnames[labels[i]])
        
        plt.axis('off')


(figure: a 5 × 10 grid of sample images labeled with their class names)

# Check the batch format
for images, labels in train_ds:
    print('(N, H, W, C): ', images.shape)
    print('class_labels: ', labels)
    break
(N, H, W, C):  (32, 224, 224, 3)
class_labels:  tf.Tensor([0 0 1 0 1 0 0 1 1 0 0 0 1 1 0 0 1 1 0 1 1 1 1 1 1 0 0 0 1 1 1 1], shape=(32,), dtype=int32)

2. Memory optimization

AUTOTUNE = tf.data.experimental.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
vals_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

3. Model definition

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(heights, widths, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),  
    
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernels
    layers.Dropout(0.3),  
    
    layers.Flatten(),                      # flatten, bridging the conv and dense layers
    layers.Dense(128, activation='relu'),  # fully connected layer for further feature extraction
    layers.Dense(len(classnames))          # output layer (raw logits, one per class)
])

model.summary()  # print the network structure
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 rescaling_1 (Rescaling)     (None, 224, 224, 3)       0         
                                                                 
 conv2d_3 (Conv2D)           (None, 222, 222, 16)      448       
                                                                 
 average_pooling2d_2 (Averag  (None, 111, 111, 16)     0         
 ePooling2D)                                                     
                                                                 
 conv2d_4 (Conv2D)           (None, 109, 109, 32)      4640      
                                                                 
 average_pooling2d_3 (Averag  (None, 54, 54, 32)       0         
 ePooling2D)                                                     
                                                                 
 dropout_2 (Dropout)         (None, 54, 54, 32)        0         
                                                                 
 conv2d_5 (Conv2D)           (None, 52, 52, 64)        18496     
                                                                 
 dropout_3 (Dropout)         (None, 52, 52, 64)        0         
                                                                 
 flatten_1 (Flatten)         (None, 173056)            0         
                                                                 
 dense_2 (Dense)             (None, 128)               22151296  
                                                                 
 dense_3 (Dense)             (None, 2)                 258       
                                                                 
=================================================================
Total params: 22,175,138
Trainable params: 22,175,138
Non-trainable params: 0
_________________________________________________________________
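The shapes and parameter counts in the summary can be verified by hand: a 'valid' 3x3 convolution shrinks each spatial side by 2, 2x2 pooling halves it (floor division), and a Conv2D layer has kernel_h * kernel_w * in_channels * out_channels weights plus out_channels biases. A pure-Python check:

```python
def conv_out(size, kernel=3):
    # 'valid' padding, stride 1: output side = input side - kernel + 1
    return size - kernel + 1

def pool_out(size, pool=2):
    # 2x2 pooling halves the side (floor division)
    return size // pool

side = 224
side = conv_out(side)  # conv2d_3 -> 222
side = pool_out(side)  # pooling  -> 111
side = conv_out(side)  # conv2d_4 -> 109
side = pool_out(side)  # pooling  -> 54
side = conv_out(side)  # conv2d_5 -> 52

flattened = side * side * 64          # flatten_1 -> 173056
conv3_params = 3 * 3 * 3 * 16 + 16    # conv2d_3  -> 448
dense2_params = flattened * 128 + 128 # dense_2   -> 22,151,296

assert side == 52
assert flattened == 173056
assert conv3_params == 448
assert dense2_params == 22151296
```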

4. Model training

1. Hyperparameter setup

# Optimizer and learning rate
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # integer labels, logits output
    metrics=['accuracy']
)

2. Training

from tensorflow.keras.callbacks import ModelCheckpoint

epochs = 50

checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

result = model.fit(
    x=train_ds,              # batch size comes from the dataset itself
    validation_data=vals_ds,
    epochs=epochs,
    callbacks=[checkpointer]  # callback that keeps the best weights
)
Epoch 1/50
52/54 [===========================>..] - ETA: 0s - loss: 0.7122 - accuracy: 0.5418
Epoch 1: val_accuracy improved from -inf to 0.60280, saving model to best_model.h5
54/54 [==============================] - 4s 38ms/step - loss: 0.7114 - accuracy: 0.5420 - val_loss: 0.6610 - val_accuracy: 0.6028
Epoch 2/50
54/54 [==============================] - ETA: 0s - loss: 0.6555 - accuracy: 0.6429
Epoch 2: val_accuracy improved from 0.60280 to 0.61449, saving model to best_model.h5
54/54 [==============================] - 3s 65ms/step - loss: 0.6555 - accuracy: 0.6429 - val_loss: 0.6723 - val_accuracy: 0.6145
Epoch 3/50
53/54 [============================>.] - ETA: 0s - loss: 0.6227 - accuracy: 0.6736
Epoch 3: val_accuracy did not improve from 0.61449
54/54 [==============================] - 1s 24ms/step - loss: 0.6238 - accuracy: 0.6727 - val_loss: 0.7243 - val_accuracy: 0.6145
Epoch 4/50
53/54 [============================>.] - ETA: 0s - loss: 0.5910 - accuracy: 0.6813
Epoch 4: val_accuracy improved from 0.61449 to 0.63785, saving model to best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.5907 - accuracy: 0.6820 - val_loss: 0.6972 - val_accuracy: 0.6379
Epoch 5/50
52/54 [===========================>..] - ETA: 0s - loss: 0.6150 - accuracy: 0.6618
Epoch 5: val_accuracy improved from 0.63785 to 0.65421, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.6157 - accuracy: 0.6622 - val_loss: 0.6427 - val_accuracy: 0.6542
Epoch 6/50
52/54 [===========================>..] - ETA: 0s - loss: 0.5473 - accuracy: 0.7200
Epoch 6: val_accuracy improved from 0.65421 to 0.67523, saving model to best_model.h5
54/54 [==============================] - 2s 29ms/step - loss: 0.5468 - accuracy: 0.7205 - val_loss: 0.6319 - val_accuracy: 0.6752
Epoch 7/50
52/54 [===========================>..] - ETA: 0s - loss: 0.5197 - accuracy: 0.7412
Epoch 7: val_accuracy improved from 0.67523 to 0.68458, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.5226 - accuracy: 0.7363 - val_loss: 0.5572 - val_accuracy: 0.6846
Epoch 8/50
53/54 [============================>.] - ETA: 0s - loss: 0.5101 - accuracy: 0.7384
Epoch 8: val_accuracy improved from 0.68458 to 0.68925, saving model to best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.5118 - accuracy: 0.7375 - val_loss: 0.6184 - val_accuracy: 0.6893
Epoch 9/50
52/54 [===========================>..] - ETA: 0s - loss: 0.4747 - accuracy: 0.7679
Epoch 9: val_accuracy improved from 0.68925 to 0.78037, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.4743 - accuracy: 0.7695 - val_loss: 0.4770 - val_accuracy: 0.7804
Epoch 10/50
53/54 [============================>.] - ETA: 0s - loss: 0.4504 - accuracy: 0.7895
Epoch 10: val_accuracy did not improve from 0.78037
54/54 [==============================] - 1s 22ms/step - loss: 0.4528 - accuracy: 0.7870 - val_loss: 0.4698 - val_accuracy: 0.7640
Epoch 11/50
53/54 [============================>.] - ETA: 0s - loss: 0.4583 - accuracy: 0.7753
Epoch 11: val_accuracy did not improve from 0.78037
54/54 [==============================] - 1s 24ms/step - loss: 0.4571 - accuracy: 0.7760 - val_loss: 0.4528 - val_accuracy: 0.7734
Epoch 12/50
53/54 [============================>.] - ETA: 0s - loss: 0.4225 - accuracy: 0.8044
Epoch 12: val_accuracy improved from 0.78037 to 0.79206, saving model to best_model.h5
54/54 [==============================] - 2s 36ms/step - loss: 0.4219 - accuracy: 0.8057 - val_loss: 0.4540 - val_accuracy: 0.7921
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.4011 - accuracy: 0.8291
Epoch 13: val_accuracy improved from 0.79206 to 0.80140, saving model to best_model.h5
54/54 [==============================] - 3s 48ms/step - loss: 0.4011 - accuracy: 0.8291 - val_loss: 0.4250 - val_accuracy: 0.8014
Epoch 14/50
52/54 [===========================>..] - ETA: 0s - loss: 0.3779 - accuracy: 0.8339
Epoch 14: val_accuracy did not improve from 0.80140
54/54 [==============================] - 1s 23ms/step - loss: 0.3813 - accuracy: 0.8326 - val_loss: 0.4555 - val_accuracy: 0.7850
Epoch 15/50
52/54 [===========================>..] - ETA: 0s - loss: 0.3603 - accuracy: 0.8442
Epoch 15: val_accuracy improved from 0.80140 to 0.82944, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.3605 - accuracy: 0.8454 - val_loss: 0.3814 - val_accuracy: 0.8294
Epoch 16/50
53/54 [============================>.] - ETA: 0s - loss: 0.3405 - accuracy: 0.8561
Epoch 16: val_accuracy improved from 0.82944 to 0.85047, saving model to best_model.h5
54/54 [==============================] - 1s 28ms/step - loss: 0.3387 - accuracy: 0.8576 - val_loss: 0.3755 - val_accuracy: 0.8505
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.3223 - accuracy: 0.8658
Epoch 17: val_accuracy did not improve from 0.85047
54/54 [==============================] - 1s 22ms/step - loss: 0.3223 - accuracy: 0.8658 - val_loss: 0.4021 - val_accuracy: 0.8364
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.3203 - accuracy: 0.8611
Epoch 18: val_accuracy did not improve from 0.85047
54/54 [==============================] - 1s 24ms/step - loss: 0.3203 - accuracy: 0.8611 - val_loss: 0.3645 - val_accuracy: 0.8458
Epoch 19/50
52/54 [===========================>..] - ETA: 0s - loss: 0.3138 - accuracy: 0.8780
Epoch 19: val_accuracy did not improve from 0.85047
54/54 [==============================] - 1s 22ms/step - loss: 0.3111 - accuracy: 0.8792 - val_loss: 0.3717 - val_accuracy: 0.8505
Epoch 20/50
54/54 [==============================] - ETA: 0s - loss: 0.2977 - accuracy: 0.8810
Epoch 20: val_accuracy improved from 0.85047 to 0.86916, saving model to best_model.h5
54/54 [==============================] - 2s 29ms/step - loss: 0.2977 - accuracy: 0.8810 - val_loss: 0.3575 - val_accuracy: 0.8692
Epoch 21/50
53/54 [============================>.] - ETA: 0s - loss: 0.2802 - accuracy: 0.8960
Epoch 21: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 23ms/step - loss: 0.2775 - accuracy: 0.8979 - val_loss: 0.3989 - val_accuracy: 0.8505
Epoch 22/50
52/54 [===========================>..] - ETA: 0s - loss: 0.2712 - accuracy: 0.9012
Epoch 22: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 23ms/step - loss: 0.2691 - accuracy: 0.9020 - val_loss: 0.4104 - val_accuracy: 0.8248
Epoch 23/50
53/54 [============================>.] - ETA: 0s - loss: 0.2792 - accuracy: 0.8930
Epoch 23: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 22ms/step - loss: 0.2763 - accuracy: 0.8950 - val_loss: 0.3594 - val_accuracy: 0.8668
Epoch 24/50
52/54 [===========================>..] - ETA: 0s - loss: 0.2571 - accuracy: 0.9000
Epoch 24: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 24ms/step - loss: 0.2557 - accuracy: 0.9008 - val_loss: 0.3951 - val_accuracy: 0.8318
Epoch 25/50
54/54 [==============================] - ETA: 0s - loss: 0.2302 - accuracy: 0.9137
Epoch 25: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 28ms/step - loss: 0.2302 - accuracy: 0.9137 - val_loss: 0.3504 - val_accuracy: 0.8692
Epoch 26/50
53/54 [============================>.] - ETA: 0s - loss: 0.2428 - accuracy: 0.9132
Epoch 26: val_accuracy did not improve from 0.86916
54/54 [==============================] - 2s 32ms/step - loss: 0.2410 - accuracy: 0.9137 - val_loss: 0.4068 - val_accuracy: 0.8505
Epoch 27/50
53/54 [============================>.] - ETA: 0s - loss: 0.2375 - accuracy: 0.9078
Epoch 27: val_accuracy did not improve from 0.86916
54/54 [==============================] - 1s 24ms/step - loss: 0.2353 - accuracy: 0.9090 - val_loss: 0.3579 - val_accuracy: 0.8668
Epoch 28/50
53/54 [============================>.] - ETA: 0s - loss: 0.2171 - accuracy: 0.9257
Epoch 28: val_accuracy improved from 0.86916 to 0.88551, saving model to best_model.h5
54/54 [==============================] - 2s 38ms/step - loss: 0.2174 - accuracy: 0.9247 - val_loss: 0.3274 - val_accuracy: 0.8855
Epoch 29/50
53/54 [============================>.] - ETA: 0s - loss: 0.2106 - accuracy: 0.9233
Epoch 29: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 23ms/step - loss: 0.2109 - accuracy: 0.9230 - val_loss: 0.3738 - val_accuracy: 0.8715
Epoch 30/50
53/54 [============================>.] - ETA: 0s - loss: 0.2144 - accuracy: 0.9251
Epoch 30: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 22ms/step - loss: 0.2170 - accuracy: 0.9236 - val_loss: 0.3435 - val_accuracy: 0.8808
Epoch 31/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1972 - accuracy: 0.9376
Epoch 31: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 23ms/step - loss: 0.1988 - accuracy: 0.9352 - val_loss: 0.3614 - val_accuracy: 0.8738
Epoch 32/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1830 - accuracy: 0.9352
Epoch 32: val_accuracy did not improve from 0.88551
54/54 [==============================] - 1s 23ms/step - loss: 0.1833 - accuracy: 0.9341 - val_loss: 0.3529 - val_accuracy: 0.8808
Epoch 33/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1834 - accuracy: 0.9315
Epoch 33: val_accuracy improved from 0.88551 to 0.89019, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.1845 - accuracy: 0.9306 - val_loss: 0.3385 - val_accuracy: 0.8902
Epoch 34/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1749 - accuracy: 0.9370
Epoch 34: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1786 - accuracy: 0.9358 - val_loss: 0.3647 - val_accuracy: 0.8855
Epoch 35/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1767 - accuracy: 0.9358
Epoch 35: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1764 - accuracy: 0.9358 - val_loss: 0.3402 - val_accuracy: 0.8855
Epoch 36/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1593 - accuracy: 0.9442
Epoch 36: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1614 - accuracy: 0.9434 - val_loss: 0.3344 - val_accuracy: 0.8879
Epoch 37/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1565 - accuracy: 0.9370
Epoch 37: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 22ms/step - loss: 0.1590 - accuracy: 0.9370 - val_loss: 0.4124 - val_accuracy: 0.8785
Epoch 38/50
53/54 [============================>.] - ETA: 0s - loss: 0.1798 - accuracy: 0.9293
Epoch 38: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 22ms/step - loss: 0.1816 - accuracy: 0.9277 - val_loss: 0.3567 - val_accuracy: 0.8762
Epoch 39/50
53/54 [============================>.] - ETA: 0s - loss: 0.1399 - accuracy: 0.9590
Epoch 39: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1445 - accuracy: 0.9551 - val_loss: 0.3856 - val_accuracy: 0.8832
Epoch 40/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1514 - accuracy: 0.9479
Epoch 40: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1507 - accuracy: 0.9487 - val_loss: 0.3333 - val_accuracy: 0.8879
Epoch 41/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1339 - accuracy: 0.9564
Epoch 41: val_accuracy did not improve from 0.89019
54/54 [==============================] - 1s 23ms/step - loss: 0.1322 - accuracy: 0.9562 - val_loss: 0.3422 - val_accuracy: 0.8832
Epoch 42/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1304 - accuracy: 0.9539
Epoch 42: val_accuracy improved from 0.89019 to 0.89252, saving model to best_model.h5
54/54 [==============================] - 2s 30ms/step - loss: 0.1350 - accuracy: 0.9522 - val_loss: 0.3840 - val_accuracy: 0.8925
Epoch 43/50
54/54 [==============================] - ETA: 0s - loss: 0.1250 - accuracy: 0.9580
Epoch 43: val_accuracy did not improve from 0.89252
54/54 [==============================] - 2s 37ms/step - loss: 0.1250 - accuracy: 0.9580 - val_loss: 0.4118 - val_accuracy: 0.8832
Epoch 44/50
53/54 [============================>.] - ETA: 0s - loss: 0.1283 - accuracy: 0.9518
Epoch 44: val_accuracy did not improve from 0.89252
54/54 [==============================] - 2s 33ms/step - loss: 0.1293 - accuracy: 0.9504 - val_loss: 0.4486 - val_accuracy: 0.8668
Epoch 45/50
53/54 [============================>.] - ETA: 0s - loss: 0.1331 - accuracy: 0.9548
Epoch 45: val_accuracy improved from 0.89252 to 0.89486, saving model to best_model.h5
54/54 [==============================] - 2s 33ms/step - loss: 0.1337 - accuracy: 0.9545 - val_loss: 0.3383 - val_accuracy: 0.8949
Epoch 46/50
53/54 [============================>.] - ETA: 0s - loss: 0.1126 - accuracy: 0.9655
Epoch 46: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 22ms/step - loss: 0.1125 - accuracy: 0.9650 - val_loss: 0.3808 - val_accuracy: 0.8832
Epoch 47/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1270 - accuracy: 0.9576
Epoch 47: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 23ms/step - loss: 0.1263 - accuracy: 0.9574 - val_loss: 0.3838 - val_accuracy: 0.8808
Epoch 48/50
52/54 [===========================>..] - ETA: 0s - loss: 0.0988 - accuracy: 0.9642
Epoch 48: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 22ms/step - loss: 0.0987 - accuracy: 0.9638 - val_loss: 0.3463 - val_accuracy: 0.8925
Epoch 49/50
52/54 [===========================>..] - ETA: 0s - loss: 0.1000 - accuracy: 0.9697
Epoch 49: val_accuracy did not improve from 0.89486
54/54 [==============================] - 1s 23ms/step - loss: 0.0979 - accuracy: 0.9702 - val_loss: 0.3449 - val_accuracy: 0.8855
Epoch 50/50
53/54 [============================>.] - ETA: 0s - loss: 0.0835 - accuracy: 0.9703
Epoch 50: val_accuracy improved from 0.89486 to 0.89720, saving model to best_model.h5
54/54 [==============================] - 2s 31ms/step - loss: 0.0863 - accuracy: 0.9697 - val_loss: 0.3432 - val_accuracy: 0.8972

5. Results

acc = result.history['accuracy']
val_acc = result.history['val_accuracy']

loss = result.history['loss']
val_loss = result.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()


(figure: training/validation accuracy and loss curves over the 50 epochs)

6. Predicting on a single image

# Load the best checkpoint
model.load_weights('best_model.h5')

# Pick an image to predict
from PIL import Image
import numpy as np

img = Image.open("./data/Monkeypox/M01_02_11.jpg")  # choose the image you want to predict
plt.imshow(img)
image = tf.image.resize(img, [heights, widths])

img_array = tf.expand_dims(image, 0) 

predictions = model.predict(img_array)  # uses the model trained above
print("Predicted class:", classnames[np.argmax(predictions)])

Output:

1/1 [==============================] - 0s 30ms/step
Predicted class: Monkeypox
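Since the output layer has no activation, model.predict returns raw logits and np.argmax is applied to them directly. To also report a confidence, you could push the logits through a softmax first; a pure-Python sketch with hypothetical logits (not the real model output):

```python
import math

def softmax(zs):
    # numerically stable softmax over a list of logits
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

classnames = ['Monkeypox', 'Others']  # as printed earlier in the post
logits = [3.1, -0.4]                  # hypothetical raw model output

probs = softmax(logits)
pred = classnames[probs.index(max(probs))]  # same class argmax would pick

assert pred == 'Monkeypox'
assert abs(sum(probs) - 1.0) < 1e-12  # softmax yields a probability distribution
```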

(figure: the predicted test image, a Monkeypox sample)

7. An attempted improvement

🤔 Idea: add BatchNormalization layers, as in the code below:

num_classes = 2

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(heights, widths, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernels
    layers.BatchNormalization(),
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernels
    layers.BatchNormalization(),
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),  
    
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernels
    layers.BatchNormalization(),
    layers.Dropout(0.3),  
    
    layers.Flatten(),                      # flatten, bridging the conv and dense layers
    layers.Dense(128, activation='relu'),  # fully connected layer for further feature extraction
    layers.Dense(num_classes)              # output layer (raw logits)
])

model.summary()  # print the network structure

🤞 Result: training accuracy does climb faster, but validation accuracy does not follow, so overall there is no real improvement; the validation loss actually went up, leaving the result slightly worse.

Note: the result figure would not upload for some reason; you can run the code above yourself and check the outcome.

