TensorFlow Case 5: Potato recognition with an improved VGG16 model (accuracy up 0.6%, computation down 78.07%)

  • 🍨 This article is a learning-log post from the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊

Preface

  • This post first trains a plain VGG16 model and reaches 98.75% validation accuracy; after modifying the VGG16 network structure, accuracy reaches 99.69% while the computation drops by 78.07%.

1. API Notes

A brief introduction to VGG16

Strengths and weaknesses of VGG:

  • VGG strengths

The VGG structure is very clean: the entire network uses the same convolution kernel size (3×3) and the same max-pooling size (2×2).

  • VGG weaknesses

1) Training takes a long time and hyperparameter tuning is difficult. 2) It requires a lot of storage, which makes deployment hard: for example, the VGG-16 weight file is over 500 MB, impractical for embedded systems.

The optimization later in this post targets exactly these weaknesses.

The VGG architecture diagram (drawn in PPT):

[Figure: VGG16 architecture diagram]

API notes

🚄 Pipeline optimization

  • shuffle(): shuffles the data; for a detailed introduction see https://zhuanlan.zhihu.com/p/42417456
  • prefetch(): prefetches data to speed up execution; TensorFlow's prefetch lets the CPU preprocess the next batch while the GPU is busy computing, overlapping producer and consumer to improve training throughput; see case 1 of this column: https://yxzbk.blog.csdn.net/article/details/142862154
  • cache(): caches the dataset in memory to speed up execution; a minimal pipeline sketch follows this list
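Here is a minimal sketch of how the three calls chain together (the toy dataset below is a stand-in, not the case's image data):

import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

# A toy dataset standing in for image_dataset_from_directory output
ds = tf.data.Dataset.range(10)

ds = (ds.cache()              # keep elements in memory after the first pass
        .shuffle(1000)        # shuffle within a 1000-element buffer
        .prefetch(AUTOTUNE))  # prepare the next batch while the current one is consumed

for x in ds.take(3):
    print(x.numpy())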

💂 Pixel normalization

Map the pixel values into [0, 1]; the code is as follows:

# Normalization layer
normalization_layer = layers.experimental.preprocessing.Rescaling(1.0 / 255)

# Normalize the training and validation sets
train_ds = train_ds.map(lambda x, y : (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y : (normalization_layer(x), y))

💛 Loss function

The final fully connected layer in this post uses softmax, so the loss function is SparseCategoricalCrossentropy with from_logits=False.

Notes on SparseCategoricalCrossentropy:

The from_logits argument:

  • A boolean; the default is False.
  • When True, the function assumes the predictions passed in are raw logits that have not gone through an activation function. If the model's last layer does not use a softmax activation (i.e. it returns logits), set from_logits to True; see the sketch after this list.
  • When False, the function assumes the predictions are already probability distributions produced by softmax.
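A minimal sketch of the two configurations (the labels and scores below are made-up examples, not from this case):

import tensorflow as tf

y_true = tf.constant([1, 2])                  # integer class labels
logits = tf.constant([[0.1, 2.0, -1.0],
                      [0.3, 0.2, 3.0]])       # raw, unnormalized scores
probs = tf.nn.softmax(logits)                 # probability distributions

# Model ends in softmax -> pass probabilities with from_logits=False (default)
loss_probs = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)
# Model returns raw logits -> set from_logits=True
loss_logits = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

print(loss_probs(y_true, probs).numpy())      # the two losses match
print(loss_logits(y_true, logits).numpy())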

2. Case Study

1. Data processing

1. Import libraries

import tensorflow as tf 
from tensorflow.keras import models, layers, datasets
import matplotlib.pyplot as plt 
import numpy as np 

# Check for GPU support
gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]
    tf.config.experimental.set_memory_growth(gpu0, True)  # allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")          # restrict TF to the first GPU
    
gpus
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

2. Inspect the data directory and get the class names

Data layout: under data/, each class is stored in its own subfolder, as sketched below.
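A sketch of the expected layout (the folder names come from the class list printed below):

data/
├── Dark/
├── Green/
├── Light/
└── Medium/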

import os, pathlib

data_dir = './data/'
data_dir = pathlib.Path(data_dir)

# List all entries under data_dir (one folder per class)
classnames = os.listdir(data_dir)
classnames
['Dark', 'Green', 'Light', 'Medium']

3. Load the data and split it into datasets

# train : validation = 8 : 2

batch_size = 32 
img_width, img_height = 224, 224

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    validation_split = 0.2,
    batch_size=batch_size,
    image_size = (img_width, img_height),
    shuffle = True,
    subset='training',
    seed=42
)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    './data/',
    validation_split = 0.2,
    batch_size=batch_size,
    image_size = (img_width, img_height),
    shuffle = True,
    subset='validation',
    seed=42
)
Found 1200 files belonging to 4 classes.
Using 960 files for training.
Found 1200 files belonging to 4 classes.
Using 240 files for validation.
# Inspect the data format
for X, y in train_ds.take(1):
    print("[N, W, H, C]", X.shape)
    print("labels: ", y)
    break
[N, W, H, C] (32, 224, 224, 3)
labels:  tf.Tensor([0 0 2 3 1 1 1 3 0 1 2 2 2 1 0 2 0 2 1 0 0 1 2 1 3 2 2 2 1 0 2 3], shape=(32,), dtype=int32)
# Inspect the raw pixel range
imgs, labels = next(iter(train_ds))  # grab one batch
first = imgs[0]
print(first.shape)
print(np.min(first), np.max(first))
(224, 224, 3)
0.0 255.0

4. Display a batch of images

plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1):
    for i in range(20):
        plt.subplot(5, 10, i + 1)  # rows, cols, index
        
        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(classnames[labels[i]])
        
        plt.axis('off')
        
plt.show()


[Figure: 20 sample training images with their class labels]

5. Configure the dataset and normalize the data

Use cache(), shuffle() and prefetch() to accelerate the input pipeline (see the API notes in section 1 for details of each call).
# Speed up the pipeline
# The variable name looks complicated, but this code is standard boilerplate
AUTOTUNE = tf.data.experimental.AUTOTUNE

# Cache, shuffle and prefetch
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# Normalization layer
normalization_layer = layers.experimental.preprocessing.Rescaling(1.0 / 255)

# Normalize the training and validation sets
train_ds = train_ds.map(lambda x, y : (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y : (normalization_layer(x), y))
# Check the normalized data
image_batch, label_batch = next(iter(val_ds))
# Take the first element
first_image = image_batch[0]

# Inspect pixel range and shapes
print(np.min(first_image), np.max(first_image))   # min and max pixel values
print(image_batch.shape)
print(first_image.shape)
0.0 1.0
(32, 224, 224, 3)
(224, 224, 3)

2024-11-08 18:37:15.334784: W tensorflow/core/kernels/data/cache_dataset_ops.cc:856] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset  will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.

This warning simply means the cached dataset was only partially iterated (we pulled a single batch above to inspect it); it is harmless here.

2. Building the VGG16 network by hand

def VGG16(class_num, input_shape):
    inputs = layers.Input(input_shape)
    
    # 1st block
    x = layers.Conv2D(64, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(inputs)
    x = layers.Conv2D(64, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

    # 2nd block
    x = layers.Conv2D(128, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(128, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

    # 3rd block
    x = layers.Conv2D(256, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(256, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(256, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

    # 4th block
    x = layers.Conv2D(512, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(512, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(512, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

    # 5th block
    x = layers.Conv2D(512, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(512, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.Conv2D(512, kernel_size=(3, 3), activation='relu', strides=(1, 1), padding='same')(x)
    x = layers.MaxPooling2D((2, 2), strides=(2, 2))(x)
    
    # Fully connected layers (this is the part modified later)
    x = layers.Flatten()(x)
    x = layers.Dense(4096, activation='relu')(x)
    x = layers.Dense(4096, activation='relu')(x)
    # The final layer uses softmax activation
    out_shape = layers.Dense(class_num, activation='softmax')(x)
    
    # Build the model
    model = models.Model(inputs=inputs, outputs=out_shape)
    
    return model
    
model = VGG16(len(classnames), (img_width, img_height, 3))
model.summary()
    

Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 224, 224, 3)]     0         
                                                                 
 conv2d (Conv2D)             (None, 224, 224, 64)      1792      
                                                                 
 conv2d_1 (Conv2D)           (None, 224, 224, 64)      36928     
                                                                 
 max_pooling2d (MaxPooling2D  (None, 112, 112, 64)     0         
 )                                                               
                                                                 
 conv2d_2 (Conv2D)           (None, 112, 112, 128)     73856     
                                                                 
 conv2d_3 (Conv2D)           (None, 112, 112, 128)     147584    
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 56, 56, 128)      0         
 2D)                                                             
                                                                 
 conv2d_4 (Conv2D)           (None, 56, 56, 256)       295168    
                                                                 
 conv2d_5 (Conv2D)           (None, 56, 56, 256)       590080    
                                                                 
 conv2d_6 (Conv2D)           (None, 56, 56, 256)       590080    
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 28, 28, 256)      0         
 2D)                                                             
                                                                 
 conv2d_7 (Conv2D)           (None, 28, 28, 512)       1180160   
                                                                 
 conv2d_8 (Conv2D)           (None, 28, 28, 512)       2359808   
                                                                 
 conv2d_9 (Conv2D)           (None, 28, 28, 512)       2359808   
                                                                 
 max_pooling2d_3 (MaxPooling  (None, 14, 14, 512)      0         
 2D)                                                             
                                                                 
 conv2d_10 (Conv2D)          (None, 14, 14, 512)       2359808   
                                                                 
 conv2d_11 (Conv2D)          (None, 14, 14, 512)       2359808   
                                                                 
 conv2d_12 (Conv2D)          (None, 14, 14, 512)       2359808   
                                                                 
 max_pooling2d_4 (MaxPooling  (None, 7, 7, 512)        0         
 2D)                                                             
                                                                 
 flatten (Flatten)           (None, 25088)             0         
                                                                 
 dense (Dense)               (None, 4096)              102764544 
                                                                 
 dense_1 (Dense)             (None, 4096)              16781312  
                                                                 
 dense_2 (Dense)             (None, 4)                 16388     
                                                                 
=================================================================
Total params: 134,276,932
Trainable params: 134,276,932
Non-trainable params: 0
_________________________________________________________________
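As a sanity check on these numbers: a Conv2D layer has (kh × kw × in_channels + 1) × filters parameters, so the first convolution has (3 × 3 × 3 + 1) × 64 = 1792 parameters; a Dense layer has (inputs + 1) × outputs parameters, so the first Dense layer has (25088 + 1) × 4096 = 102,764,544. Both match the summary above.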

3. Training the model

1. Set the hyperparameters

learn_rate = 1e-4

# Learning-rate schedule
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    learn_rate,
    decay_steps=20,
    decay_rate=0.95,
    staircase=True
)

# Optimizer
# Note: the constant learn_rate is passed below, so lr_schedule is defined but unused;
# pass learning_rate=lr_schedule instead to actually apply the decay
opt = tf.keras.optimizers.Adam(learning_rate=learn_rate)

# Compile the model
model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['accuracy']
)
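With staircase=True, the schedule computes lr = learn_rate × decay_rate^floor(step / decay_steps). Keras schedule objects are callable, so a quick sketch to inspect the decay (using the lr_schedule defined above):

# Learning rate after 0, 20 and 40 optimizer steps: 1e-4, 9.5e-5, 9.025e-5
for step in (0, 20, 40):
    print(step, float(lr_schedule(step)))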

2. Train the model

from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# Number of epochs
epochs = 20

# Early stopping (note: patience equals epochs here, so it will never trigger in this run)
earlystopper = EarlyStopping(monitor='val_accuracy',
                             min_delta=0.001,
                             patience=20,
                             verbose=1)

# Save the best model's weights
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

history = model.fit(
    x=train_ds,
    validation_data=val_ds,
    epochs=epochs,
    verbose=1,
    callbacks=[earlystopper, checkpointer]
)

Epoch 1/20

2024-11-08 18:37:27.650111: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8101
2024-11-08 18:37:31.754452: I tensorflow/stream_executor/cuda/cuda_blas.cc:1786] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.

30/30 [==============================] - ETA: 0s - loss: 1.3401 - accuracy: 0.3094
Epoch 1: val_accuracy improved from -inf to 0.55833, saving model to best_model.h5
30/30 [==============================] - 17s 255ms/step - loss: 1.3401 - accuracy: 0.3094 - val_loss: 0.9073 - val_accuracy: 0.5583
Epoch 2/20
30/30 [==============================] - ETA: 0s - loss: 0.9208 - accuracy: 0.5406
Epoch 2: val_accuracy improved from 0.55833 to 0.63333, saving model to best_model.h5
30/30 [==============================] - 7s 223ms/step - loss: 0.9208 - accuracy: 0.5406 - val_loss: 0.6053 - val_accuracy: 0.6333
Epoch 3/20
30/30 [==============================] - ETA: 0s - loss: 0.6325 - accuracy: 0.6594
Epoch 3: val_accuracy did not improve from 0.63333
30/30 [==============================] - 4s 128ms/step - loss: 0.6325 - accuracy: 0.6594 - val_loss: 0.7538 - val_accuracy: 0.5542
Epoch 4/20
30/30 [==============================] - ETA: 0s - loss: 0.5219 - accuracy: 0.7115
Epoch 4: val_accuracy improved from 0.63333 to 0.82083, saving model to best_model.h5
30/30 [==============================] - 7s 246ms/step - loss: 0.5219 - accuracy: 0.7115 - val_loss: 0.4044 - val_accuracy: 0.8208
Epoch 5/20
30/30 [==============================] - ETA: 0s - loss: 0.3322 - accuracy: 0.8771
Epoch 5: val_accuracy improved from 0.82083 to 0.86667, saving model to best_model.h5
30/30 [==============================] - 7s 238ms/step - loss: 0.3322 - accuracy: 0.8771 - val_loss: 0.3286 - val_accuracy: 0.8667
Epoch 6/20
30/30 [==============================] - ETA: 0s - loss: 0.1433 - accuracy: 0.9573
Epoch 6: val_accuracy improved from 0.86667 to 0.95417, saving model to best_model.h5
30/30 [==============================] - 7s 230ms/step - loss: 0.1433 - accuracy: 0.9573 - val_loss: 0.1310 - val_accuracy: 0.9542
Epoch 7/20
30/30 [==============================] - ETA: 0s - loss: 0.0982 - accuracy: 0.9594
Epoch 7: val_accuracy improved from 0.95417 to 0.97917, saving model to best_model.h5
30/30 [==============================] - 7s 233ms/step - loss: 0.0982 - accuracy: 0.9594 - val_loss: 0.0739 - val_accuracy: 0.9792
Epoch 8/20
30/30 [==============================] - ETA: 0s - loss: 0.0630 - accuracy: 0.9802
Epoch 8: val_accuracy did not improve from 0.97917
30/30 [==============================] - 4s 127ms/step - loss: 0.0630 - accuracy: 0.9802 - val_loss: 0.2461 - val_accuracy: 0.9250
Epoch 9/20
30/30 [==============================] - ETA: 0s - loss: 0.1089 - accuracy: 0.9625
Epoch 9: val_accuracy improved from 0.97917 to 0.98333, saving model to best_model.h5
30/30 [==============================] - 6s 217ms/step - loss: 0.1089 - accuracy: 0.9625 - val_loss: 0.0717 - val_accuracy: 0.9833
Epoch 10/20
30/30 [==============================] - ETA: 0s - loss: 0.0392 - accuracy: 0.9885
Epoch 10: val_accuracy did not improve from 0.98333
30/30 [==============================] - 4s 126ms/step - loss: 0.0392 - accuracy: 0.9885 - val_loss: 0.0901 - val_accuracy: 0.9708
Epoch 11/20
30/30 [==============================] - ETA: 0s - loss: 0.0297 - accuracy: 0.9854
Epoch 11: val_accuracy improved from 0.98333 to 0.98750, saving model to best_model.h5
30/30 [==============================] - 7s 232ms/step - loss: 0.0297 - accuracy: 0.9854 - val_loss: 0.0629 - val_accuracy: 0.9875
Epoch 12/20
30/30 [==============================] - ETA: 0s - loss: 0.0331 - accuracy: 0.9885
Epoch 12: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 127ms/step - loss: 0.0331 - accuracy: 0.9885 - val_loss: 0.0384 - val_accuracy: 0.9875
Epoch 13/20
30/30 [==============================] - ETA: 0s - loss: 0.1043 - accuracy: 0.9708
Epoch 13: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 128ms/step - loss: 0.1043 - accuracy: 0.9708 - val_loss: 0.0445 - val_accuracy: 0.9833
Epoch 14/20
30/30 [==============================] - ETA: 0s - loss: 0.0352 - accuracy: 0.9833
Epoch 14: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 134ms/step - loss: 0.0352 - accuracy: 0.9833 - val_loss: 0.1387 - val_accuracy: 0.9500
Epoch 15/20
30/30 [==============================] - ETA: 0s - loss: 0.1128 - accuracy: 0.9594
Epoch 15: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 128ms/step - loss: 0.1128 - accuracy: 0.9594 - val_loss: 0.4397 - val_accuracy: 0.8125
Epoch 16/20
30/30 [==============================] - ETA: 0s - loss: 0.0949 - accuracy: 0.9646
Epoch 16: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 130ms/step - loss: 0.0949 - accuracy: 0.9646 - val_loss: 0.1068 - val_accuracy: 0.9500
Epoch 17/20
30/30 [==============================] - ETA: 0s - loss: 0.0618 - accuracy: 0.9781
Epoch 17: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 128ms/step - loss: 0.0618 - accuracy: 0.9781 - val_loss: 0.1663 - val_accuracy: 0.9292
Epoch 18/20
30/30 [==============================] - ETA: 0s - loss: 0.0351 - accuracy: 0.9854
Epoch 18: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 128ms/step - loss: 0.0351 - accuracy: 0.9854 - val_loss: 0.0687 - val_accuracy: 0.9792
Epoch 19/20
30/30 [==============================] - ETA: 0s - loss: 0.0609 - accuracy: 0.9781
Epoch 19: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 128ms/step - loss: 0.0609 - accuracy: 0.9781 - val_loss: 0.0963 - val_accuracy: 0.9708
Epoch 20/20
30/30 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9896
Epoch 20: val_accuracy did not improve from 0.98750
30/30 [==============================] - 4s 127ms/step - loss: 0.0263 - accuracy: 0.9896 - val_loss: 0.2104 - val_accuracy: 0.9458

  • Best result: val_accuracy peaked at 0.98750
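To evaluate the best checkpoint rather than the final-epoch weights, one option is to reload the saved file (a sketch; since save_weights_only=True, the file holds weights only):

model.load_weights('best_model.h5')        # restore the weights with the best val_accuracy
val_loss, val_acc = model.evaluate(val_ds)
print(val_acc)                             # expected around 0.9875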

4. Plotting the results

# Training and validation accuracy/loss histories
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()


[Figure: training and validation accuracy/loss curves]

3. Optimization

Optimize the fully connected layers by reducing the number of neurons:

# Before
x = layers.Flatten()(x)
x = layers.Dense(4096, activation='relu')(x)
x = layers.Dense(4096, activation='relu')(x)
# The final layer uses softmax activation
out_shape = layers.Dense(class_num, activation='softmax')(x)

# After
x = layers.Flatten()(x)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dense(512, activation='relu')(x)
# The final layer uses softmax activation
out_shape = layers.Dense(class_num, activation='softmax')(x)

Effect of the change: loss: 0.0166 - accuracy: 0.9969, an accuracy gain of about 0.6 percentage points, while the computation is reduced dramatically.

[Figure: training and validation curves after the modification]

Fully connected layer parameters before the change

  1. First Dense layer: input 25088, output 4096
    • Parameters: (25088 + 1) × 4096 = 102,764,544
  2. Second Dense layer: input 4096, output 4096
    • Parameters: (4096 + 1) × 4096 = 16,781,312
  3. Output layer: input 4096, output 4
    • Parameters: (4096 + 1) × 4 = 16,388

Total parameters: 102,764,544 + 16,781,312 + 16,388 = 119,562,244

Fully connected layer parameters after the change

  1. First Dense layer: input 25088, output 1024
    • Parameters: (25088 + 1) × 1024 = 25,691,136
  2. Second Dense layer: input 1024, output 512
    • Parameters: (1024 + 1) × 512 = 524,800
  3. Output layer: input 512, output 4
    • Parameters: (512 + 1) × 4 = 2,052

Total parameters: 25,691,136 + 524,800 + 2,052 = 26,217,988

Computing the percentage reduction

Parameters removed:
119,562,244 − 26,217,988 = 93,344,256

Percentage reduction:
93,344,256 / 119,562,244 × 100% ≈ 78.07%

So the modification cuts the computation (measured by parameter count) by about 78.07%.
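A quick sketch to verify this arithmetic (layer sizes taken from the summaries above):

def dense_params(n_in, n_out):
    # weights plus biases of a Dense layer
    return (n_in + 1) * n_out

before = dense_params(25088, 4096) + dense_params(4096, 4096) + dense_params(4096, 4)
after = dense_params(25088, 1024) + dense_params(1024, 512) + dense_params(512, 4)

print(before, after)                       # 119562244 26217988
print(f"{(before - after) / before:.2%}")  # 78.07%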
