A Simple Implementation of Cross-Modal Retrieval (PyTorch)

Table of Contents

  • Notes
    • Notes
    • Previous Posts and Column
    • Code and Dataset Download
  • What Is Cross-Modal Retrieval
  • Code
    • Importing the Required Libraries
    • Building the Image and Text Subnetworks
    • Loading the Dataset
    • Computing mAP
    • Building the Loss Function
    • Total Loss Function
    • Training
    • Main Function and Parameter Settings
    • Results

Notes

Notes

This is code left over from my graduate studies: I excerpted pieces from several papers and did some simple assembly. The overall framework is simpler than in the original papers and easier to follow, so beginners can use it to learn.
With a few small modifications, it actually performs quite well.

Previous Posts and Column

👉 For previous posts, follow my column:
下巴同学的数据加油小站
From time to time I share learning projects on data mining, machine learning, risk-control modeling, deep learning, NLP, and more. Following along won't necessarily teach you what you want to learn, but you can learn what I want to learn and am currently learning 😀

Code and Dataset Download

An online-runnable copy has been uploaded to the Heywhale (和鲸) community; fork it to download the code and dataset: 简单实现跨模态检索

What Is Cross-Modal Retrieval

Cross-modal retrieval means taking a query sample from one modality and searching for relevant samples in another modality.
For example, given an image, retrieve text descriptions that mention the same objects or topics; or, given a piece of text, retrieve images that depict what it describes.
However, because each modality represents data differently, samples from different modalities cannot be compared for similarity directly.
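
The usual workaround, and the one the model below follows, is to map each modality into a shared embedding space and compare samples there. Here is a minimal sketch of the idea; the projection layers and all dimensions are made up purely for illustration:

import torch
import torch.nn.functional as F

# Hypothetical linear projections into a shared 1024-d space
img_proj = torch.nn.Linear(4096, 1024)   # image features -> shared space
txt_proj = torch.nn.Linear(300, 1024)    # text features  -> shared space

img_emb = img_proj(torch.randn(5, 4096))  # 5 dummy image feature vectors
txt_emb = txt_proj(torch.randn(8, 300))   # 8 dummy text feature vectors

# Once both modalities live in the same space, cosine similarity is well defined
sim = F.cosine_similarity(img_emb.unsqueeze(1), txt_emb.unsqueeze(0), dim=-1)
print(sim.shape)  # torch.Size([5, 8]): one score per image-text pair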

Code

Importing the Required Libraries

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data.dataset import Dataset
from scipy.io import loadmat, savemat
from torch.utils.data import DataLoader
import numpy as np
import scipy.spatial
import torchvision
import time
import copy

Building the Image and Text Subnetworks

# Image subnetwork: a single fully connected layer with ReLU
class ImageNN(nn.Module):
    def __init__(self, input_dim=4096, output_dim=1024):
        super(ImageNN, self).__init__()
        self.denseL1 = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        out = F.relu(self.denseL1(x))
        return out


# Text subnetwork: a single fully connected layer with ReLU
class TextNN(nn.Module):
    def __init__(self, input_dim=300, output_dim=1024):
        super(TextNN, self).__init__()
        self.denseL1 = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        out = F.relu(self.denseL1(x))
        return out


# Full model: each modality is projected into a shared space of size
# minus_one_dim, and one shared classifier scores both modalities
class IDCM_NN(nn.Module):
    def __init__(self, img_input_dim=4096, img_output_dim=2048,
                 text_input_dim=1024, text_output_dim=2048, minus_one_dim=1024, output_dim=20):
        super(IDCM_NN, self).__init__()
        self.img_net = ImageNN(img_input_dim, img_output_dim)
        self.text_net = TextNN(text_input_dim, text_output_dim)
        # Modality-specific projections into the shared space
        self.linearLayer = nn.Linear(img_output_dim, minus_one_dim)
        self.linearLayer1 = nn.Linear(text_output_dim, minus_one_dim)
        # Shared classifier: the same weights produce label predictions
        # from both modalities
        self.linearLayer2 = nn.Linear(minus_one_dim, output_dim)

    def forward(self, img, text):
        view1_feature = self.img_net(img)    # image branch
        view2_feature = self.text_net(text)  # text branch
        view1_feature = self.linearLayer(view1_feature)
        view2_feature = self.linearLayer1(view2_feature)

        view1_predict = self.linearLayer2(view1_feature)  # label logits from image
        view2_predict = self.linearLayer2(view2_feature)  # label logits from text
        return view1_feature, view2_feature, view1_predict, view2_predict
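
As a quick sanity check, a usage sketch on dummy tensors (the batch size and class count here are arbitrary):

model = IDCM_NN(img_input_dim=4096, text_input_dim=300, output_dim=10)
f1, f2, p1, p2 = model(torch.randn(4, 4096), torch.randn(4, 300))
print(f1.shape, f2.shape)  # torch.Size([4, 1024]) twice: shared-space features
print(p1.shape, p2.shape)  # torch.Size([4, 10]) twice: label predictions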

Loading the Dataset

# Wraps pre-extracted image features, text features, and labels
class CustomDataSet(Dataset):
    def __init__(self, images, texts, labels):
        self.images = images
        self.texts = texts
        self.labels = labels

    def __getitem__(self, index):
        img = self.images[index]
        text = self.texts[index]
        label = self.labels[index]
        return img, text, label

    def __len__(self):
        count = len(self.images)
        # All three arrays must describe the same set of samples
        assert len(self.images) == len(self.texts) == len(self.labels)
        return count


def ind2vec(ind, N=None):
    # Convert an (n, 1) array of class indices into an (n, N) one-hot matrix
    ind = np.asarray(ind)
    if N is None:
        N = ind.max() + 1
    return np.arange(N) == np.repeat(ind, N, axis=1)

def get_loader(path, batch_size):
    # Pre-extracted features and labels stored as .mat files
    img_train = loadmat(path + "train_img.mat")['train_img']
    img_test = loadmat(path + "test_img.mat")['test_img']
    text_train = loadmat(path + "train_txt.mat")['train_txt']
    text_test = loadmat(path + "test_txt.mat")['test_txt']
    label_train = loadmat(path + "train_img_lab.mat")['train_img_lab']
    label_test = loadmat(path + "test_img_lab.mat")['test_img_lab']

    # Class indices -> one-hot label matrices
    label_train = ind2vec(label_train).astype(int)
    label_test = ind2vec(label_test).astype(int)

    imgs = {'train': img_train, 'test': img_test}
    texts = {'train': text_train, 'test': text_test}
    labels = {'train': label_train, 'test': label_test}
    dataset = {x: CustomDataSet(images=imgs[x], texts=texts[x], labels=labels[x])
               for x in ['train', 'test']}

    # Note: neither loader shuffles here; the training loader could be
    # shuffled if desired
    shuffle = {'train': False, 'test': False}

    dataloader = {x: DataLoader(dataset[x], batch_size=batch_size,
                                shuffle=shuffle[x], num_workers=0) for x in ['train', 'test']}

    img_dim = img_train.shape[1]
    text_dim = text_train.shape[1]
    num_class = label_train.shape[1]

    # Bundle the raw arrays and dimensions for use in the main function
    input_data_par = {}
    input_data_par['img_test'] = img_test
    input_data_par['text_test'] = text_test
    input_data_par['label_test'] = label_test
    input_data_par['img_train'] = img_train
    input_data_par['text_train'] = text_train
    input_data_par['label_train'] = label_train
    input_data_par['img_dim'] = img_dim
    input_data_par['text_dim'] = text_dim
    input_data_par['num_class'] = num_class

    return dataloader, input_data_par
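
To make ind2vec concrete, a tiny example with made-up class indices:

demo = np.array([[0], [2], [1]])  # three samples, class indices as a column vector
print(ind2vec(demo).astype(int))
# [[1 0 0]
#  [0 0 1]
#  [0 1 0]]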

Computing mAP

Mean average precision (mAP): each query sample is compared against all candidate samples by cosine distance, the candidates are ranked, and the average precision over the relevant results is computed per query; mAP is the mean of these values over all queries, so it accounts for both ranking and precision.

def fx_calc_map_label(image, text, label, k=0, dist_method='COS'):
    # Pairwise distances between queries (image) and candidates (text)
    if dist_method == 'L2':
        dist = scipy.spatial.distance.cdist(image, text, 'euclidean')
    elif dist_method == 'COS':
        dist = scipy.spatial.distance.cdist(image, text, 'cosine')
    order = dist.argsort()  # candidates sorted by ascending distance
    numcases = dist.shape[0]
    if k == 0:
        k = numcases  # k = 0 means evaluate over the full ranking
    res = []
    for i in range(numcases):
        ranking = order[i]
        p = 0.0  # accumulated precision
        r = 0.0  # number of relevant results seen so far
        for j in range(k):
            if label[i] == label[ranking[j]]:
                r += 1
                p += (r / (j + 1))  # precision at this relevant position
        if r > 0:
            res += [p / r]  # average precision for query i
        else:
            res += [0]
    return np.mean(res)
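
A toy sanity check: when each query's matching candidate has an identical feature vector and a unique label, every query's average precision is 1, so mAP should come out to 1.0 (the arrays below are made up):

feats_a = np.array([[1.0, 0.0], [0.0, 1.0]])
feats_b = np.array([[1.0, 0.0], [0.0, 1.0]])
labs = np.array([0, 1])
print(fx_calc_map_label(feats_a, feats_b, labs))  # 1.0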

Building the Loss Function

Build the triplet loss: for an anchor a, a positive sample p, and a negative sample n, the loss is max(d(a, p) - d(a, n) + margin, 0), which pushes the anchor-negative distance to exceed the anchor-positive distance by at least the margin.

# Cosine distance: 1 - cosine similarity, clamped below at 0
def cos_distance(source, target):
    # source: (n, d), target: (m, d) -> pairwise distance matrix of shape (n, m)
    cos_sim = F.cosine_similarity(source.unsqueeze(1), target, dim=-1)
    distances = torch.clamp(1 - cos_sim, 0)
    return distances
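
A quick shape check on dummy tensors: cos_distance returns the full pairwise distance matrix between the two batches.

src = torch.randn(4, 8)
tgt = torch.randn(6, 8)
print(cos_distance(src, tgt).shape)  # torch.Size([4, 6])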

class TripletLoss(nn.Module):
    def __init__(self, reduction='mean'):
        super(TripletLoss, self).__init__()
        self.reduction = reduction

    def forward(self, source, s_labels, target=None, t_labels=None, margin=0):
        if target is None:
            target = source
        if t_labels is None:
            t_labels = s_labels

        pairwise_dist = cos_distance(source, target)  # (n, m)
        # Anchor-positive distance a: shape (n, m, 1)
        anchor_positive_dist = pairwise_dist.unsqueeze(2)
        # Anchor-negative distance b: shape (n, 1, m)
        anchor_negative_dist = pairwise_dist.unsqueeze(1)
        # a - b + margin for every (anchor, positive, negative) combination.
        # Note: s_labels / t_labels are accepted but not used here, so every
        # target sample acts as both positive and negative; a full batch-all
        # triplet loss would additionally mask triplets by label agreement.
        triplet_loss = anchor_positive_dist - anchor_negative_dist + margin
        # Zero out triplets that already satisfy the margin
        triplet_loss = triplet_loss.clamp(0)

        # Average only over triplets with a strictly positive loss
        valid_triplets = triplet_loss.gt(1e-16).float()
        num_positive_triplets = valid_triplets.sum()

        if self.reduction == 'mean':
            triplet_loss = triplet_loss.sum() / (num_positive_triplets + 1e-16)
        elif self.reduction == 'sum':
            triplet_loss = triplet_loss.sum()

        return triplet_loss
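
A usage sketch on dummy features (the one-hot labels are passed for interface compatibility, even though this simplified loss does not consult them):

src = torch.randn(4, 1024)
tgt = torch.randn(4, 1024)
labs = torch.eye(4)  # dummy one-hot labels
loss_fn = TripletLoss(reduction='mean')
print(loss_fn(src, labs, target=tgt, margin=0.1))  # scalar loss tensor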

Total Loss Function

The loss function is built from three angles:
1. Cross-modal heterogeneity: reduce the representation gap between modalities as much as possible, handled by floss;
2. Inter-modal discriminativeness: push apart or pull together cross-modal samples that carry different or identical labels, handled by tri_loss;
3. Intra-modal discriminativeness: keep samples with different labels within a modality at a distance so they are not confused, handled by lloss.

def calc_loss(view1_feature, view2_feature, view1_predict, view2_predict, labels_1, labels_2, alpha, beta, gamma):
    # Triplet losses in both retrieval directions (image->text and text->image)
    tri_loss = TripletLoss(reduction='mean')
    tri_i2t = tri_loss(view1_feature, labels_1.float(), target=view2_feature, margin=0.1)
    tri_t2i = tri_loss(view2_feature, labels_2.float(), target=view1_feature, margin=0.1)
    cos_tri = tri_i2t + tri_t2i
    # Distance between paired image and text features: the mean Euclidean
    # norm of the row-wise feature differences
    floss = ((view1_feature - view2_feature)**2).sum(1).sqrt().mean()
    # Label-prediction loss: mean Euclidean distance between the predictions
    # and the one-hot label matrices, summed over both modalities
    lloss = ((view1_predict - labels_1.float())**2).sum(1).sqrt().mean() + ((view2_predict - labels_2.float())**2).sum(1).sqrt().mean()
    # Weighted sum with hyperparameters alpha, beta, gamma
    im_loss = alpha * cos_tri + beta * floss + gamma * lloss

    return im_loss
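
Putting the pieces together on dummy tensors, with the same hyperparameter values used in the main function below:

f1, f2 = torch.randn(4, 1024), torch.randn(4, 1024)
p1, p2 = torch.randn(4, 10), torch.randn(4, 10)
y = torch.eye(4, 10)  # dummy one-hot labels, one class per sample
print(calc_loss(f1, f2, p1, p2, y, y, alpha=1e-3, beta=1, gamma=10))  # scalar loss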

Training

def train_model(model, data_loaders, optimizer, alpha, beta, gamma, num_epochs):
    since = time.time()
    test_img_acc_history = []
    test_txt_acc_history = []
    epoch_loss_history = []

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch + 1, num_epochs))
        print('-' * 20)

        # Each epoch has a training and a test phase
        for phase in ['train', 'test']:
            if phase == 'train':
                model.train()
            else:
                model.eval()

            running_loss = 0.0
            # Classification-accuracy counters (accumulated but not reported below)
            running_corrects_img = 0
            running_corrects_txt = 0
            for imgs, txts, labels in data_loaders[phase]:
                if torch.isnan(imgs).any() or torch.isnan(txts).any():
                    print("Data contains NaN.")
                optimizer.zero_grad()

                # Track gradients only during the training phase
                with torch.set_grad_enabled(phase == 'train'):
                    if torch.cuda.is_available():
                        imgs = imgs.cuda()
                        txts = txts.cuda()
                        labels = labels.cuda()

                    view1_feature, view2_feature, view1_predict, view2_predict = model(imgs, txts)

                    loss = calc_loss(view1_feature, view2_feature, view1_predict,
                                     view2_predict, labels, labels, alpha, beta, gamma)

                    img_preds = view1_predict
                    txt_preds = view2_predict

                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                running_loss += loss.item()
                running_corrects_img += torch.sum(torch.argmax(img_preds, dim=1) == torch.argmax(labels, dim=1))
                running_corrects_txt += torch.sum(torch.argmax(txt_preds, dim=1) == torch.argmax(labels, dim=1))

            epoch_loss = running_loss / len(data_loaders[phase].dataset)

            # Retrieval evaluation always runs on the test set, which is why
            # the train and test rows in the log show identical mAP values
            t_imgs, t_txts, t_labels = [], [], []
            with torch.no_grad():
                for imgs, txts, labels in data_loaders['test']:
                    if torch.cuda.is_available():
                        imgs = imgs.cuda()
                        txts = txts.cuda()
                        labels = labels.cuda()
                    t_view1_feature, t_view2_feature, _, _ = model(imgs, txts)
                    t_imgs.append(t_view1_feature.cpu().numpy())
                    t_txts.append(t_view2_feature.cpu().numpy())
                    t_labels.append(labels.cpu().numpy())
            t_imgs = np.concatenate(t_imgs)
            t_txts = np.concatenate(t_txts)
            t_labels = np.concatenate(t_labels).argmax(1)  # one-hot -> class indices
            img2text = fx_calc_map_label(t_imgs, t_txts, t_labels)
            txt2img = fx_calc_map_label(t_txts, t_imgs, t_labels)

            print('{} Loss: {:.4f} Img2Txt: {:.4f}  Txt2Img: {:.4f}'.format(phase, epoch_loss, img2text, txt2img))

            # Keep the weights with the best average test mAP
            if phase == 'test' and (img2text + txt2img) / 2. > best_acc:
                best_acc = (img2text + txt2img) / 2.
                best_model_wts = copy.deepcopy(model.state_dict())

            if phase == 'test':
                test_img_acc_history.append(img2text)
                test_txt_acc_history.append(txt2img)
                epoch_loss_history.append(epoch_loss)

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best average ACC: {:4f}'.format(best_acc))

    model.load_state_dict(best_model_wts)
    return model, test_img_acc_history, test_txt_acc_history, epoch_loss_history

Main Function and Parameter Settings

if __name__ == '__main__':
    dataset = 'pascal'
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    DATA_DIR = 'data/' + dataset + '/'
    MAX_EPOCH = 130
    # Loss weights: alpha for the triplet loss, beta for the feature-distance
    # loss, gamma for the label-prediction loss
    alpha = 1e-3
    beta = 1
    gamma = 10
    weight_decay = 0
    batch_size = 100
    lr = 1e-4
    betas = (0.5, 0.999)  # Adam momentum parameters
    print('Loading data')

    data_loader, input_data_par = get_loader(DATA_DIR, batch_size)

    model_ft = IDCM_NN(img_input_dim=input_data_par['img_dim'], text_input_dim=input_data_par['text_dim'], output_dim=input_data_par['num_class']).to(device)
    params_to_update = list(model_ft.parameters())
    optimizer = optim.Adam(params_to_update, lr=lr, betas=betas, weight_decay=weight_decay)
    print('Start training')
    model_ft, img_acc_hist, txt_acc_hist, loss_hist = train_model(model_ft, data_loader, optimizer, alpha, beta, gamma, MAX_EPOCH)
    print('Training finished')
    print('Evaluating on the test set')
    view1_feature, view2_feature, view1_predict, view2_predict = model_ft(torch.tensor(input_data_par['img_test']).to(device), torch.tensor(input_data_par['text_test']).to(device))
    label = torch.argmax(torch.tensor(input_data_par['label_test']), dim=1)
    view1_feature = view1_feature.detach().cpu().numpy()
    view2_feature = view2_feature.detach().cpu().numpy()
    view1_predict = view1_predict.detach().cpu().numpy()
    view2_predict = view2_predict.detach().cpu().numpy()
    img_to_txt = fx_calc_map_label(view1_feature, view2_feature, label)
    print('...Image to Text MAP = {}'.format(img_to_txt))

    txt_to_img = fx_calc_map_label(view2_feature, view1_feature, label)
    print('...Text to Image MAP = {}'.format(txt_to_img))

    print('...Average MAP = {}'.format(((img_to_txt + txt_to_img) / 2.)))

Results

Loading data
Start training
Epoch 1/130
--------------------
train Loss: 0.2704 Img2Txt: 0.2171  Txt2Img: 0.2132
test Loss: 0.2217 Img2Txt: 0.2171  Txt2Img: 0.2132
Epoch 2/130
--------------------
train Loss: 0.2229 Img2Txt: 0.4099  Txt2Img: 0.4798
test Loss: 0.1951 Img2Txt: 0.4099  Txt2Img: 0.4798
Epoch 3/130
--------------------
train Loss: 0.1990 Img2Txt: 0.5426  Txt2Img: 0.5804
test Loss: 0.1780 Img2Txt: 0.5426  Txt2Img: 0.5804
Epoch 4/130
--------------------
train Loss: 0.1784 Img2Txt: 0.5932  Txt2Img: 0.6240
test Loss: 0.1654 Img2Txt: 0.5932  Txt2Img: 0.6240
Epoch 5/130
--------------------
train Loss: 0.1610 Img2Txt: 0.6113  Txt2Img: 0.6585
test Loss: 0.1577 Img2Txt: 0.6113  Txt2Img: 0.6585
Epoch 6/130
--------------------
train Loss: 0.1489 Img2Txt: 0.6362  Txt2Img: 0.6808
test Loss: 0.1550 Img2Txt: 0.6362  Txt2Img: 0.6808
Epoch 7/130
--------------------
train Loss: 0.1426 Img2Txt: 0.6357  Txt2Img: 0.6748
test Loss: 0.1537 Img2Txt: 0.6357  Txt2Img: 0.6748
Epoch 8/130
--------------------
train Loss: 0.1369 Img2Txt: 0.6615  Txt2Img: 0.7022
test Loss: 0.1532 Img2Txt: 0.6615  Txt2Img: 0.7022
Epoch 9/130
--------------------
train Loss: 0.1342 Img2Txt: 0.6626  Txt2Img: 0.7007
test Loss: 0.1536 Img2Txt: 0.6626  Txt2Img: 0.7007
Epoch 10/130
--------------------
train Loss: 0.1314 Img2Txt: 0.6698  Txt2Img: 0.6956
test Loss: 0.1516 Img2Txt: 0.6698  Txt2Img: 0.6956
Epoch 11/130
--------------------
train Loss: 0.1272 Img2Txt: 0.6803  Txt2Img: 0.7025
test Loss: 0.1508 Img2Txt: 0.6803  Txt2Img: 0.7025
Epoch 12/130
--------------------
train Loss: 0.1222 Img2Txt: 0.6670  Txt2Img: 0.7078
test Loss: 0.1518 Img2Txt: 0.6670  Txt2Img: 0.7078
Epoch 13/130
--------------------
train Loss: 0.1232 Img2Txt: 0.6635  Txt2Img: 0.7079
test Loss: 0.1471 Img2Txt: 0.6635  Txt2Img: 0.7079
Epoch 14/130
--------------------
train Loss: 0.1175 Img2Txt: 0.6911  Txt2Img: 0.7148
test Loss: 0.1468 Img2Txt: 0.6911  Txt2Img: 0.7148
Epoch 15/130
--------------------
train Loss: 0.1161 Img2Txt: 0.6827  Txt2Img: 0.7136
test Loss: 0.1455 Img2Txt: 0.6827  Txt2Img: 0.7136
Epoch 16/130
--------------------
... (intermediate epochs omitted) ...
--------------------
train Loss: 0.0548 Img2Txt: 0.6535  Txt2Img: 0.6922
test Loss: 0.1257 Img2Txt: 0.6535  Txt2Img: 0.6922
Epoch 125/130
--------------------
train Loss: 0.0554 Img2Txt: 0.6636  Txt2Img: 0.6888
test Loss: 0.1250 Img2Txt: 0.6636  Txt2Img: 0.6888
Epoch 126/130
--------------------
train Loss: 0.0531 Img2Txt: 0.6429  Txt2Img: 0.6794
test Loss: 0.1263 Img2Txt: 0.6429  Txt2Img: 0.6794
Epoch 127/130
--------------------
train Loss: 0.0565 Img2Txt: 0.6624  Txt2Img: 0.6901
test Loss: 0.1250 Img2Txt: 0.6624  Txt2Img: 0.6901
Epoch 128/130
--------------------
train Loss: 0.0502 Img2Txt: 0.6387  Txt2Img: 0.6858
test Loss: 0.1259 Img2Txt: 0.6387  Txt2Img: 0.6858
Epoch 129/130
--------------------
train Loss: 0.0574 Img2Txt: 0.6618  Txt2Img: 0.6885
test Loss: 0.1253 Img2Txt: 0.6618  Txt2Img: 0.6885
Epoch 130/130
--------------------
train Loss: 0.0500 Img2Txt: 0.6357  Txt2Img: 0.6810
test Loss: 0.1261 Img2Txt: 0.6357  Txt2Img: 0.6810
Training complete in 0m 53s
Best average ACC: 0.711302
Training finished
Evaluating on the test set
...Image to Text MAP = 0.690430024096876
...Text to Image MAP = 0.7321743013743085
...Average MAP = 0.7113021627355922
