Building ResNet50-V2 with PyTorch

2024/9/20 20:45:51
  • 🍨 This post is a study-log entry from the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊 | tutoring and custom projects available

ResNet-V2: improvements and differences from V1

(Figure: the (a) original and (b) proposed residual units, with their CIFAR-10 test error curves)
🧲 Improvements:

(a) "original" denotes the residual unit of the original ResNet; (b) "proposed" denotes the new residual unit.

The main difference:

In (a), convolution comes first, followed by BN and the activation, and a final ReLU is applied after the addition. In (b), BN and the activation come before the convolution, and the post-addition ReLU is moved inside the residual branch.

📌 Results: the authors compared the two structures on CIFAR-10 using a 1001-layer ResNet. As the figure shows, (b) proposed reaches a noticeably lower test error of 4.92%, versus 7.61% for (a) original.
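The two orderings can be contrasted in a minimal sketch (hypothetical single-width blocks for illustration only, not the bottleneck units built below):

```python
import torch
import torch.nn as nn

class PostActBlock(nn.Module):
    """(a) original: conv -> BN -> ReLU, with a ReLU after the addition."""
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, c, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(c)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn(self.conv(x)))
        return self.relu(x + out)      # activation sits on the identity path

class PreActBlock(nn.Module):
    """(b) proposed: BN -> ReLU -> conv, identity path left untouched."""
    def __init__(self, c):
        super().__init__()
        self.bn = nn.BatchNorm2d(c)
        self.relu = nn.ReLU()
        self.conv = nn.Conv2d(c, c, kernel_size=3, padding=1)

    def forward(self, x):
        out = self.conv(self.relu(self.bn(x)))
        return x + out                 # no activation after the addition

x = torch.randn(2, 8, 16, 16)
print(PostActBlock(8)(x).shape, PreActBlock(8)(x).shape)
```

Because (b) keeps the shortcut free of BN and ReLU, gradients flow through the identity path unchanged, which is what makes very deep (e.g. 1001-layer) networks trainable.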

Here we train it on the same dataset as before (previously used with ResNet50 v1).

Import the required packages

import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import os, PIL
import numpy as np
from torch.utils.data import DataLoader,Subset
from torchvision import transforms
from torchvision.datasets import ImageFolder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
plt.rcParams["font.sans-serif"] = ["SimHei"]
plt.rcParams["axes.unicode_minus"] = False

Walk the directory and count the images

path = "./data/bird_photos"
f = []
for root, dirs, files in os.walk(path):
    for name in files:
        f.append(os.path.join(root, name))
print("Total images:", len(f))

Total images: 565

Load the data

transform = transforms.Compose([
    transforms.Resize([224, 224]), # resize every image to 224x224 so batches can be stacked
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],[0.229, 0.224, 0.225]) # normalize with ImageNet statistics
])
data = ImageFolder(path, transform = transform)
print(data.class_to_idx)
print(data.classes)

{'Bananaquit': 0, 'Black Skimmer': 1, 'Black Throated Bushtiti': 2, 'Cockatoo': 3}
['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']

Visualize the data

def imageshow(data, idx, norm = None, label = False):
    plt.figure(dpi=100,figsize=(12,4))
    for i in range(15):
        plt.subplot(3, 5, i + 1)
        img = data[idx[i]][0].numpy().transpose((1, 2, 0))
        if norm is not None:
            mean = norm[0]
            std = norm[1]
            img = img * std + mean
        img = np.clip(img, a_min = 0, a_max=1)
        plt.imshow(img)
    
        if label:
            plt.title(data.classes[data[idx[i]][1]])
        plt.axis('off')
        plt.tight_layout(pad=0.5)
    plt.show()

norm = [[0.485, 0.456, 0.406],[0.229, 0.224, 0.225]]
np.random.seed(22)
demo_img_ids = np.random.randint(564,size = 15)
imageshow(data, demo_img_ids, norm = norm, label = True) 

(Figure: a 3x5 grid of sample bird images with their class labels)

Set up the datasets

torch.manual_seed(123)
train_ds, test_ds = torch.utils.data.random_split(data, [452,113])
train_dl = DataLoader(train_ds, batch_size = 8, shuffle=True, num_workers=2, prefetch_factor=2) # shuffle the training data; prefetch_factor pre-loads batches and requires num_workers > 0
test_dl = DataLoader(test_ds, batch_size = 8, shuffle=False, num_workers=2, prefetch_factor=2) # validate on the held-out split, not the training split

Check the data again

dataiter = iter(train_dl)
imgs, labels = next(dataiter) # .next() was removed from DataLoader iterators; use built-in next(), and read one batch only once
print("batch img shape: ", imgs.shape)
print("batch label shape:", labels.shape)

Build the model

import torch
import torch.nn as nn
class BottleBlock(nn.Module):
    def __init__(self, in_c, out_c, stride = 1, use_maxpool = False, K=False):
        super().__init__()
        # projection shortcut when the channel count changes (in_c -> out_c * 4)
        if in_c != out_c * 4:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_c, out_c * 4, kernel_size=1, stride=stride),
                nn.BatchNorm2d(out_c * 4)
                )
        else:
            self.downsample = nn.Identity()

        # in a downsampling block, halve the spatial size on both paths
        if use_maxpool:
            self.downsample = nn.MaxPool2d(kernel_size = 1, stride = 2)
            stride = 2

        # pre-activation ordering: BN -> ReLU -> conv at every step
        self.bn1 = nn.BatchNorm2d(in_c)
        self.ac = nn.ReLU(inplace=True)
        self.conv1 = nn.Conv2d(in_c, out_c, kernel_size=1)

        self.bn2 = nn.BatchNorm2d(out_c)
        self.conv2 = nn.Conv2d(out_c, out_c, kernel_size=3, padding=1, stride = stride)

        self.bn3 = nn.BatchNorm2d(out_c)
        self.conv3 = nn.Conv2d(out_c, out_c * 4, kernel_size=1)
        self.K = K  # kept for interface compatibility; not used in forward

    def forward(self, x):
        out = self.bn1(x)
        out = self.ac(out)
        out = self.conv1(out)

        out = self.bn2(out)
        out = self.ac(out)
        out = self.conv2(out)
        
        out = self.bn3(out)
        out = self.ac(out)
        out = self.conv3(out)
        x = self.downsample(x)
        
        return x + out
    
class resnet_50_v2(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7,stride = 2,padding=3)
        self.bn = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.pool1 = nn.MaxPool2d(kernel_size=3, stride=2 ,padding=1)
        
        self.layer1 = self.make_layer(BottleBlock, [64,64], 3, use_maxpool_layer = 3,K=True)
        self.layer2 = self.make_layer(BottleBlock, [256,128], 4, use_maxpool_layer=4,K=True)
        self.layer3 = self.make_layer(BottleBlock, [512,256], 6, use_maxpool_layer=3,K=True)
        self.layer4 = self.make_layer(BottleBlock, [1024,512], 3,K=True)
        
        self.bn2 = nn.BatchNorm2d(2048)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2048,4))
        
    # build one stage: a projection block followed by (n_layer - 1) more blocks
    def make_layer(self, module, filters, n_layer, use_maxpool_layer=0, K=False):
        filter1, filter2 = filters
        layers = nn.Sequential()
        layers.add_module('0', module(filter1, filter2,K=K))
        
        filter1 = filter2*4
        for i in range(1, n_layer):
            if i == use_maxpool_layer-1:
                layers.add_module(str(i), module(filter1, filter2,use_maxpool=True,K=False))
            else:
                layers.add_module(str(i), module(filter1, filter2, K=False))
        return layers

    def forward(self,x):
        
        out = self.conv1(x)
        out = self.bn(out)
        out = self.relu(out)
        
        out = self.pool1(out)
        
        out = self.layer1(out)
        out = self.layer2(out)
        out = self.layer3(out)
        out = self.layer4(out)      
        
        out = self.bn2(out)
        out = self.relu(out)
        out = self.avgpool(out)
        return self.fc(out)
model = resnet_50_v2()
model

Model structure

resnet_50_v2(
(conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
(bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): ReLU(inplace=True)
(pool1): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
(layer1): Sequential(
(0): BottleBlock(
(downsample): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
)
(1): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
)
(2): BottleBlock(
(downsample): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1))
)
)
(layer2): Sequential(
(0): BottleBlock(
(downsample): Sequential(
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
)
(1): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
)
(2): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
)
(3): BottleBlock(
(downsample): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(bn3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1))
)
)
(layer3): Sequential(
(0): BottleBlock(
(downsample): Sequential(
(0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
)
(1): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
)
(2): BottleBlock(
(downsample): MaxPool2d(kernel_size=1, stride=2, padding=0, dilation=1, ceil_mode=False)
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
)
(3): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
)
(4): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
)
(5): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1))
)
)
(layer4): Sequential(
(0): BottleBlock(
(downsample): Sequential(
(0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1))
(1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(bn1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
)
(1): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
)
(2): BottleBlock(
(downsample): Identity()
(bn1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(ac): ReLU(inplace=True)
(conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1))
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1))
)
)
(bn2): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
(fc): Sequential(
(0): Flatten(start_dim=1, end_dim=-1)
(1): Linear(in_features=2048, out_features=4, bias=True)
)
)
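As a quick sanity check on the structure above, the spatial sizes can be traced with the standard convolution output formula (a standalone sketch; the stem plus the three stride-2 blocks in layer1–layer3 shrink a 224×224 input to the 7×7 map that feeds the average pool):

```python
# floor((n + 2p - k) / s) + 1: output size of a conv/pool layer
def conv_out(n, k, s, p):
    return (n + 2 * p - k) // s + 1

n = 224
n = conv_out(n, 7, 2, 3)   # stem conv, stride 2 -> 112
n = conv_out(n, 3, 2, 1)   # max pool, stride 2  -> 56
for _ in range(3):         # one stride-2 bottleneck in each of layer1..layer3
    n = conv_out(n, 3, 2, 1)
print(n)  # 7
```

So the final feature map is 2048×7×7, which the adaptive average pool reduces to 2048×1×1 before the 4-way classifier.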

Training

import copy
def train_model(epoch, model, optim, loss_function, train_dl, test_dl=None, lr_scheduler = None):
    # history and the best accuracy must be initialized once, outside the epoch loop
    history = {"train_loss": [],
               "train_acc": [],
               "val_loss": [],
               "val_acc": [],
               "best_val_acc": (0, 0)}
    best_acc = 0.0
    best_model_wts = copy.deepcopy(model.state_dict())
    for k in range(epoch):
        model.train()
        train_acc, train_len = 0, 0
        for step, (x, y) in enumerate(train_dl):
            x, y = x.to(device), y.to(device)
            y_h = model(x)
            optim.zero_grad()
            loss = loss_function(y_h, y)
            loss.backward()
            optim.step()
            predict = torch.max(y_h, dim=1)[1]
            train_acc += (predict == y).sum().item()
            train_len += len(y)
            trainacc = train_acc/train_len*100
            history['train_loss'].append(loss.item())  # store a float, not the graph-attached tensor
            history['train_acc'].append(trainacc)

        val_loss = 0.0
        val_acc, val_len = 0, 0
        if test_dl is not None:
            model.eval()
            with torch.no_grad():
                for _, (x, y) in enumerate(test_dl):
                    x, y = x.to(device), y.to(device)
                    y_h = model(x)
                    loss = loss_function(y_h, y)
                    val_loss += loss.item()
                    predict = torch.max(y_h, dim=1)[1]
                    val_acc += (predict == y).sum().item()
                    val_len += len(y)
            acc = val_acc/val_len*100
            history['val_loss'].append(val_loss)
            history['val_acc'].append(acc)
        if (k+1) % 5 == 0:
            print(f"epoch: {k+1}/{epoch}\n   train_loss: {history['train_loss'][-1]},\n   train_acc: {trainacc},\n   val_acc: {acc},\n   val_loss: {history['val_loss'][-1]}")
        if lr_scheduler is not None:
            lr_scheduler.step()
        # track the best validation accuracy and snapshot the weights at that point
        if best_acc < acc:
            best_acc = acc
            history['best_val_acc'] = (k+1, acc)
            best_model_wts = copy.deepcopy(model.state_dict())
    print('=' * 80)
    print(f'Best val acc: {best_acc}')
    return model, history, best_model_wts
model = model.to(device)
loss_fn = nn.CrossEntropyLoss()
optims = torch.optim.Adam(model.parameters(), lr = 1e-2)
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optims, 50, last_epoch=-1)
model, history, best_model_wts = train_model(50, model, optims, loss_fn, train_dl, test_dl,lr_scheduler)

epoch: 5/50
train_loss: 1.3529443740844727,
train_acc: 60.84070796460177,
val_acc: 50.66371681415929,
val_loss: 67.96911036968231
epoch: 10/50
train_loss: 0.3679060935974121,
train_acc: 67.03539823008849,
val_acc: 74.5575221238938,
val_loss: 39.28975901007652
epoch: 15/50
train_loss: 1.636350393295288,
train_acc: 75.22123893805309,
val_acc: 80.53097345132744,
val_loss: 30.901596864685416
epoch: 20/50
train_loss: 0.5659410953521729,
train_acc: 77.21238938053098,
val_acc: 77.65486725663717,
val_loss: 30.538477830588818
epoch: 25/50
train_loss: 0.11142534017562866,
train_acc: 81.85840707964603,
val_acc: 88.93805309734513,
val_loss: 16.465759705752134
epoch: 30/50
train_loss: 0.14212661981582642,
train_acc: 86.72566371681415,
val_acc: 87.38938053097345,
val_loss: 18.4492209777236
epoch: 35/50
train_loss: 1.4149291515350342,
train_acc: 87.38938053097345,
val_acc: 97.34513274336283,
val_loss: 8.108955120667815
epoch: 40/50
train_loss: 0.651155412197113,
train_acc: 94.69026548672566,
val_acc: 97.78761061946902,
val_loss: 3.9437274279771373
epoch: 45/50
train_loss: 0.062520831823349,
train_acc: 96.01769911504425,
val_acc: 99.33628318584071,
val_loss: 2.148590993718244
epoch: 50/50
train_loss: 0.03985146805644035,
train_acc: 95.13274336283186,
val_acc: 99.77876106194691,
val_loss: 1.442910757730715
================================================================================
Best val acc: 99.77876106194691
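The best_model_wts returned above follows the usual deepcopy-a-state_dict checkpoint pattern; a standalone sketch of that pattern (with a hypothetical stand-in module rather than the full ResNet):

```python
import copy
import torch
import torch.nn as nn

net = nn.Linear(4, 2)                       # stand-in for the trained model
best_wts = copy.deepcopy(net.state_dict())  # snapshot at the best epoch

with torch.no_grad():                       # training keeps changing the weights...
    net.weight.add_(1.0)

net.load_state_dict(best_wts)               # ...so roll back to the snapshot
torch.save(net.state_dict(), "best_model.pth")  # and persist it for later use
```

The deepcopy matters: state_dict() returns references to the live tensors, so without it the "snapshot" would silently track later updates.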

On this dataset, the result is indeed somewhat better than the original ResNet.

