CV08 Stitching Deep Learning Modules Together (3): Loading Pretrained Weights

1.1 Introduction

After we modify a network model (adding or removing a module, or changing a layer), directly loading the original pretrained weights is guaranteed to raise an error, because the structure of the original weights no longer matches the modified model.

So how do we initialize the model by loading only the weights of the parts we did not change?

1.2 How the Problem Arises

Take ResNet34 as an example: I add one module to the original model, using SEAttention as the example:

Then, running the training script, we see the following error:

The weights for fc.0 and fc.2 are missing, because the original model's pretrained checkpoint does not contain these two parts.

1.3 Solution One

Where the problem lies:

Let us first go back to the training script:

The pretrained weights are loaded with a function called "load_state_dict".

Press Ctrl+P to inspect this function's parameters:

Note the last parameter, "strict", whose default value is True. With the default, any mismatch between the keys of the pretrained weights and the keys of the model raises an error. So we need to change strict to False here.

In other words, keys that cannot be matched are simply ignored; think of it as a tolerant loading mode.

Running again now: training proceeds normally.
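To make the effect of strict=False concrete, here is a minimal self-contained sketch. The two toy modules are hypothetical stand-ins for the original ResNet34 checkpoint and the SE-augmented model; the real classes from this article would behave the same way.

```python
import torch.nn as nn

# Toy stand-ins (assumption): Base mimics the original checkpoint's layout,
# WithSE mimics the modified model that adds an fc-based attention block.
class Base(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)

class WithSE(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.fc = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 8))

pretrained = Base().state_dict()   # plays the role of torch.load("...pth")
net = WithSE()

# strict=False skips mismatched keys instead of raising a RuntimeError;
# the return value reports exactly what was skipped.
result = net.load_state_dict(pretrained, strict=False)
print(result.missing_keys)     # the fc.* keys the checkpoint lacks
print(result.unexpected_keys)  # nothing extra in the checkpoint here
```

The return value of load_state_dict is handy here: it lists the missing and unexpected keys explicitly, so you can verify that only the new SE layers went uninitialized.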

Now let us open the original pretrained checkpoint and inspect the dictionary keys it contains.

In the model file:

1. Inspect the keys of the pretrained weights:

# 1. Inspect the keys of the pretrained weights:
import torch

pretrained_weights_path = "./resnet34-pre.pth"
state_dict = torch.load(pretrained_weights_path)

# Get the keys of the pretrained weights
pretrained_keys = state_dict.keys()
print("Keys of the pretrained weights:")
for key in pretrained_keys:
    print(key)

The output is as follows:

Keys of the pretrained weights:
conv1.weight
bn1.running_mean
bn1.running_var
bn1.weight
bn1.bias
layer1.0.conv1.weight
layer1.0.bn1.running_mean
layer1.0.bn1.running_var
layer1.0.bn1.weight
layer1.0.bn1.bias
layer1.0.conv2.weight
layer1.0.bn2.running_mean
layer1.0.bn2.running_var
layer1.0.bn2.weight
layer1.0.bn2.bias
layer1.1.conv1.weight
layer1.1.bn1.running_mean
layer1.1.bn1.running_var
layer1.1.bn1.weight
layer1.1.bn1.bias
layer1.1.conv2.weight
layer1.1.bn2.running_mean
layer1.1.bn2.running_var
layer1.1.bn2.weight
layer1.1.bn2.bias
layer1.2.conv1.weight
layer1.2.bn1.running_mean
layer1.2.bn1.running_var
layer1.2.bn1.weight
layer1.2.bn1.bias
layer1.2.conv2.weight
layer1.2.bn2.running_mean
layer1.2.bn2.running_var
layer1.2.bn2.weight
layer1.2.bn2.bias
layer2.0.conv1.weight
layer2.0.bn1.running_mean
layer2.0.bn1.running_var
layer2.0.bn1.weight
layer2.0.bn1.bias
layer2.0.conv2.weight
layer2.0.bn2.running_mean
layer2.0.bn2.running_var
layer2.0.bn2.weight
layer2.0.bn2.bias
layer2.0.downsample.0.weight
layer2.0.downsample.1.running_mean
layer2.0.downsample.1.running_var
layer2.0.downsample.1.weight
layer2.0.downsample.1.bias
layer2.1.conv1.weight
layer2.1.bn1.running_mean
layer2.1.bn1.running_var
layer2.1.bn1.weight
layer2.1.bn1.bias
layer2.1.conv2.weight
layer2.1.bn2.running_mean
layer2.1.bn2.running_var
layer2.1.bn2.weight
layer2.1.bn2.bias
layer2.2.conv1.weight
layer2.2.bn1.running_mean
layer2.2.bn1.running_var
layer2.2.bn1.weight
layer2.2.bn1.bias
layer2.2.conv2.weight
layer2.2.bn2.running_mean
layer2.2.bn2.running_var
layer2.2.bn2.weight
layer2.2.bn2.bias
layer2.3.conv1.weight
layer2.3.bn1.running_mean
layer2.3.bn1.running_var
layer2.3.bn1.weight
layer2.3.bn1.bias
layer2.3.conv2.weight
layer2.3.bn2.running_mean
layer2.3.bn2.running_var
layer2.3.bn2.weight
layer2.3.bn2.bias
layer3.0.conv1.weight
layer3.0.bn1.running_mean
layer3.0.bn1.running_var
layer3.0.bn1.weight
layer3.0.bn1.bias
layer3.0.conv2.weight
layer3.0.bn2.running_mean
layer3.0.bn2.running_var
layer3.0.bn2.weight
layer3.0.bn2.bias
layer3.0.downsample.0.weight
layer3.0.downsample.1.running_mean
layer3.0.downsample.1.running_var
layer3.0.downsample.1.weight
layer3.0.downsample.1.bias
layer3.1.conv1.weight
layer3.1.bn1.running_mean
layer3.1.bn1.running_var
layer3.1.bn1.weight
layer3.1.bn1.bias
layer3.1.conv2.weight
layer3.1.bn2.running_mean
layer3.1.bn2.running_var
layer3.1.bn2.weight
layer3.1.bn2.bias
layer3.2.conv1.weight
layer3.2.bn1.running_mean
layer3.2.bn1.running_var
layer3.2.bn1.weight
layer3.2.bn1.bias
layer3.2.conv2.weight
layer3.2.bn2.running_mean
layer3.2.bn2.running_var
layer3.2.bn2.weight
layer3.2.bn2.bias
layer3.3.conv1.weight
layer3.3.bn1.running_mean
layer3.3.bn1.running_var
layer3.3.bn1.weight
layer3.3.bn1.bias
layer3.3.conv2.weight
layer3.3.bn2.running_mean
layer3.3.bn2.running_var
layer3.3.bn2.weight
layer3.3.bn2.bias
layer3.4.conv1.weight
layer3.4.bn1.running_mean
layer3.4.bn1.running_var
layer3.4.bn1.weight
layer3.4.bn1.bias
layer3.4.conv2.weight
layer3.4.bn2.running_mean
layer3.4.bn2.running_var
layer3.4.bn2.weight
layer3.4.bn2.bias
layer3.5.conv1.weight
layer3.5.bn1.running_mean
layer3.5.bn1.running_var
layer3.5.bn1.weight
layer3.5.bn1.bias
layer3.5.conv2.weight
layer3.5.bn2.running_mean
layer3.5.bn2.running_var
layer3.5.bn2.weight
layer3.5.bn2.bias
layer4.0.conv1.weight
layer4.0.bn1.running_mean
layer4.0.bn1.running_var
layer4.0.bn1.weight
layer4.0.bn1.bias
layer4.0.conv2.weight
layer4.0.bn2.running_mean
layer4.0.bn2.running_var
layer4.0.bn2.weight
layer4.0.bn2.bias
layer4.0.downsample.0.weight
layer4.0.downsample.1.running_mean
layer4.0.downsample.1.running_var
layer4.0.downsample.1.weight
layer4.0.downsample.1.bias
layer4.1.conv1.weight
layer4.1.bn1.running_mean
layer4.1.bn1.running_var
layer4.1.bn1.weight
layer4.1.bn1.bias
layer4.1.conv2.weight
layer4.1.bn2.running_mean
layer4.1.bn2.running_var
layer4.1.bn2.weight
layer4.1.bn2.bias
layer4.2.conv1.weight
layer4.2.bn1.running_mean
layer4.2.bn1.running_var
layer4.2.bn1.weight
layer4.2.bn1.bias
layer4.2.conv2.weight
layer4.2.bn2.running_mean
layer4.2.bn2.running_var
layer4.2.bn2.weight
layer4.2.bn2.bias
fc.weight
fc.bias

2. Inspect the keys of your own network model:

# 2. Inspect the keys of your own network model:
net = resnet34()  # the (modified) model defined in the model file
model_keys = net.state_dict().keys()
print("\nKeys of the model weights:")
for key in model_keys:
    print(key)

Keys of the model weights:
conv1.weight
bn1.weight
bn1.bias
bn1.running_mean
bn1.running_var
bn1.num_batches_tracked
layer1.0.conv1.weight
layer1.0.bn1.weight
layer1.0.bn1.bias
layer1.0.bn1.running_mean
layer1.0.bn1.running_var
layer1.0.bn1.num_batches_tracked
layer1.0.conv2.weight
layer1.0.bn2.weight
layer1.0.bn2.bias
layer1.0.bn2.running_mean
layer1.0.bn2.running_var
layer1.0.bn2.num_batches_tracked
layer1.1.conv1.weight
layer1.1.bn1.weight
layer1.1.bn1.bias
layer1.1.bn1.running_mean
layer1.1.bn1.running_var
layer1.1.bn1.num_batches_tracked
layer1.1.conv2.weight
layer1.1.bn2.weight
layer1.1.bn2.bias
layer1.1.bn2.running_mean
layer1.1.bn2.running_var
layer1.1.bn2.num_batches_tracked
layer1.2.conv1.weight
layer1.2.bn1.weight
layer1.2.bn1.bias
layer1.2.bn1.running_mean
layer1.2.bn1.running_var
layer1.2.bn1.num_batches_tracked
layer1.2.conv2.weight
layer1.2.bn2.weight
layer1.2.bn2.bias
layer1.2.bn2.running_mean
layer1.2.bn2.running_var
layer1.2.bn2.num_batches_tracked
layer2.0.conv1.weight
layer2.0.bn1.weight
layer2.0.bn1.bias
layer2.0.bn1.running_mean
layer2.0.bn1.running_var
layer2.0.bn1.num_batches_tracked
layer2.0.conv2.weight
layer2.0.bn2.weight
layer2.0.bn2.bias
layer2.0.bn2.running_mean
layer2.0.bn2.running_var
layer2.0.bn2.num_batches_tracked
layer2.0.downsample.0.weight
layer2.0.downsample.1.weight
layer2.0.downsample.1.bias
layer2.0.downsample.1.running_mean
layer2.0.downsample.1.running_var
layer2.0.downsample.1.num_batches_tracked
layer2.1.conv1.weight
layer2.1.bn1.weight
layer2.1.bn1.bias
layer2.1.bn1.running_mean
layer2.1.bn1.running_var
layer2.1.bn1.num_batches_tracked
layer2.1.conv2.weight
layer2.1.bn2.weight
layer2.1.bn2.bias
layer2.1.bn2.running_mean
layer2.1.bn2.running_var
layer2.1.bn2.num_batches_tracked
layer2.2.conv1.weight
layer2.2.bn1.weight
layer2.2.bn1.bias
layer2.2.bn1.running_mean
layer2.2.bn1.running_var
layer2.2.bn1.num_batches_tracked
layer2.2.conv2.weight
layer2.2.bn2.weight
layer2.2.bn2.bias
layer2.2.bn2.running_mean
layer2.2.bn2.running_var
layer2.2.bn2.num_batches_tracked
layer2.3.conv1.weight
layer2.3.bn1.weight
layer2.3.bn1.bias
layer2.3.bn1.running_mean
layer2.3.bn1.running_var
layer2.3.bn1.num_batches_tracked
layer2.3.conv2.weight
layer2.3.bn2.weight
layer2.3.bn2.bias
layer2.3.bn2.running_mean
layer2.3.bn2.running_var
layer2.3.bn2.num_batches_tracked
layer3.0.conv1.weight
layer3.0.bn1.weight
layer3.0.bn1.bias
layer3.0.bn1.running_mean
layer3.0.bn1.running_var
layer3.0.bn1.num_batches_tracked
layer3.0.conv2.weight
layer3.0.bn2.weight
layer3.0.bn2.bias
layer3.0.bn2.running_mean
layer3.0.bn2.running_var
layer3.0.bn2.num_batches_tracked
layer3.0.downsample.0.weight
layer3.0.downsample.1.weight
layer3.0.downsample.1.bias
layer3.0.downsample.1.running_mean
layer3.0.downsample.1.running_var
layer3.0.downsample.1.num_batches_tracked
layer3.1.conv1.weight
layer3.1.bn1.weight
layer3.1.bn1.bias
layer3.1.bn1.running_mean
layer3.1.bn1.running_var
layer3.1.bn1.num_batches_tracked
layer3.1.conv2.weight
layer3.1.bn2.weight
layer3.1.bn2.bias
layer3.1.bn2.running_mean
layer3.1.bn2.running_var
layer3.1.bn2.num_batches_tracked
layer3.2.conv1.weight
layer3.2.bn1.weight
layer3.2.bn1.bias
layer3.2.bn1.running_mean
layer3.2.bn1.running_var
layer3.2.bn1.num_batches_tracked
layer3.2.conv2.weight
layer3.2.bn2.weight
layer3.2.bn2.bias
layer3.2.bn2.running_mean
layer3.2.bn2.running_var
layer3.2.bn2.num_batches_tracked
layer3.3.conv1.weight
layer3.3.bn1.weight
layer3.3.bn1.bias
layer3.3.bn1.running_mean
layer3.3.bn1.running_var
layer3.3.bn1.num_batches_tracked
layer3.3.conv2.weight
layer3.3.bn2.weight
layer3.3.bn2.bias
layer3.3.bn2.running_mean
layer3.3.bn2.running_var
layer3.3.bn2.num_batches_tracked
layer3.4.conv1.weight
layer3.4.bn1.weight
layer3.4.bn1.bias
layer3.4.bn1.running_mean
layer3.4.bn1.running_var
layer3.4.bn1.num_batches_tracked
layer3.4.conv2.weight
layer3.4.bn2.weight
layer3.4.bn2.bias
layer3.4.bn2.running_mean
layer3.4.bn2.running_var
layer3.4.bn2.num_batches_tracked
layer3.5.conv1.weight
layer3.5.bn1.weight
layer3.5.bn1.bias
layer3.5.bn1.running_mean
layer3.5.bn1.running_var
layer3.5.bn1.num_batches_tracked
layer3.5.conv2.weight
layer3.5.bn2.weight
layer3.5.bn2.bias
layer3.5.bn2.running_mean
layer3.5.bn2.running_var
layer3.5.bn2.num_batches_tracked
layer4.0.conv1.weight
layer4.0.bn1.weight
layer4.0.bn1.bias
layer4.0.bn1.running_mean
layer4.0.bn1.running_var
layer4.0.bn1.num_batches_tracked
layer4.0.conv2.weight
layer4.0.bn2.weight
layer4.0.bn2.bias
layer4.0.bn2.running_mean
layer4.0.bn2.running_var
layer4.0.bn2.num_batches_tracked
layer4.0.downsample.0.weight
layer4.0.downsample.1.weight
layer4.0.downsample.1.bias
layer4.0.downsample.1.running_mean
layer4.0.downsample.1.running_var
layer4.0.downsample.1.num_batches_tracked
layer4.1.conv1.weight
layer4.1.bn1.weight
layer4.1.bn1.bias
layer4.1.bn1.running_mean
layer4.1.bn1.running_var
layer4.1.bn1.num_batches_tracked
layer4.1.conv2.weight
layer4.1.bn2.weight
layer4.1.bn2.bias
layer4.1.bn2.running_mean
layer4.1.bn2.running_var
layer4.1.bn2.num_batches_tracked
layer4.2.conv1.weight
layer4.2.bn1.weight
layer4.2.bn1.bias
layer4.2.bn1.running_mean
layer4.2.bn1.running_var
layer4.2.bn1.num_batches_tracked
layer4.2.conv2.weight
layer4.2.bn2.weight
layer4.2.bn2.bias
layer4.2.bn2.running_mean
layer4.2.bn2.running_var
layer4.2.bn2.num_batches_tracked
fc.weight
fc.bias
se.fc.0.weight
se.fc.2.weight
 

3. Find the weights that are missing from, or unexpected by, the model

Missing

[for the case where the pretrained checkpoint has fewer keys than the model]

# 3. Find the weights missing from the model
missing_keys = model_keys - pretrained_keys
print("\nKeys missing from the model:")
for key in missing_keys:
    print(key)

The output is as follows:

Keys missing from the model:
layer1.1.bn1.num_batches_tracked
layer2.1.bn1.num_batches_tracked
layer1.0.bn2.num_batches_tracked
layer4.0.downsample.1.num_batches_tracked
layer4.0.bn2.num_batches_tracked
layer3.5.bn1.num_batches_tracked
layer2.0.bn2.num_batches_tracked
layer3.2.bn2.num_batches_tracked
layer3.1.bn2.num_batches_tracked
layer2.2.bn2.num_batches_tracked
layer3.0.bn1.num_batches_tracked
layer4.1.bn2.num_batches_tracked
layer1.1.bn2.num_batches_tracked
se.fc.2.weight
layer4.2.bn1.num_batches_tracked
layer3.3.bn1.num_batches_tracked
layer3.4.bn2.num_batches_tracked
layer3.2.bn1.num_batches_tracked
layer2.3.bn1.num_batches_tracked
layer2.1.bn2.num_batches_tracked
layer3.0.downsample.1.num_batches_tracked
layer4.2.bn2.num_batches_tracked
layer2.2.bn1.num_batches_tracked
bn1.num_batches_tracked
se.fc.0.weight
layer1.2.bn2.num_batches_tracked
layer2.0.downsample.1.num_batches_tracked
layer3.1.bn1.num_batches_tracked
layer2.0.bn1.num_batches_tracked
layer3.5.bn2.num_batches_tracked
layer1.2.bn1.num_batches_tracked
layer4.0.bn1.num_batches_tracked
layer3.3.bn2.num_batches_tracked
layer1.0.bn1.num_batches_tracked
layer3.4.bn1.num_batches_tracked
layer3.0.bn2.num_batches_tracked
layer2.3.bn2.num_batches_tracked
layer4.1.bn1.num_batches_tracked

Unexpected

[for the case where the pretrained checkpoint has more keys than the model]

# Find the unexpected keys in the pretrained checkpoint
unexpected_keys = pretrained_keys - model_keys
print("\nUnexpected keys in the pretrained weights:")
for key in unexpected_keys:
    print(key)

1.4 Solution Two: Write a Filtering Check

First find the code in the training script that loads the initial weights:

net = resnet34()
# load pretrain weights
# download url: https://download.pytorch.org/models/resnet34-333f7ec4.pth
model_weight_path = "./resnet34-pre.pth"
assert os.path.exists(model_weight_path), "file {} does not exist.".format(model_weight_path)
net.load_state_dict(torch.load(model_weight_path, map_location='cpu'), strict=False)
# for param in net.parameters():
#     param.requires_grad = False

Then delete it and write a new version:

# Template
model_weight_path = "./resnet34-pre.pth"  # path to the pretrained weights
ckpt = torch.load(model_weight_path)      # load the pretrained weights
net = resnet34()                          # instantiate the model
model_dict = net.state_dict()             # get our model's parameters

# Keep a pretrained entry only if the modified network also has a module
# with the same key, and its shape matches
pretrained_dict = {k: v for k, v in ckpt.items()
                   if k in model_dict and v.shape == model_dict[k].shape}

# Update model_dict with the matching pretrained entries
model_dict.update(pretrained_dict)

# Load the state_dict we actually need
net.load_state_dict(model_dict, strict=True)

This code can be used as a template.

As we can see, it runs just as well:
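As a sanity check on the template, here is a hedged, self-contained sketch. The toy Old/New modules are assumptions standing in for the real checkpoint and resnet34, chosen so that one key matches, one key's shape mismatches, and one key is newly added:

```python
import torch
import torch.nn as nn

# Toy stand-ins (assumption), not the real resnet34 / .pth file:
class Old(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.fc = nn.Linear(8, 10)   # old head: 10 classes

class New(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.fc = nn.Linear(8, 5)    # new head: 5 classes -> shape mismatch
        self.se = nn.Linear(8, 8)    # newly added module -> key missing in ckpt

ckpt = Old().state_dict()            # plays the role of torch.load(...)
net = New()
model_dict = net.state_dict()

# Same filter as the template: the key must exist AND the shapes must agree
pretrained_dict = {k: v for k, v in ckpt.items()
                   if k in model_dict and v.shape == model_dict[k].shape}

model_dict.update(pretrained_dict)
net.load_state_dict(model_dict, strict=True)

print(sorted(pretrained_dict))  # only the conv1.* entries survive the filter
```

Note that the shape check matters: fc exists in both models here, but with different output sizes, so copying it blindly would crash; the filter drops it and only conv1 is transferred.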

1.5 Key Mismatches from Multi-GPU Training

This looks simple enough, but sometimes another problem shows up. For example:

Some students load a checkpoint that was pretrained on multiple GPUs onto a single-GPU machine and get an error:

This happens because a model trained with DataParallel (PyTorch's data-parallel training) stores every key with a "module." prefix. We need to strip the "module." prefix from the DataParallel checkpoint, as follows:

Clearly, the pretrained weights and the network structure are identical; every key merely has "module." prepended, and everything after the prefix matches exactly. If we simply ignored the mismatches as in Solution One, not a single key would match, so effectively nothing would be loaded. [The root cause is that the model keys, i.e. the layer names, differ: for a plain ResNet model the layer name should be conv1.weight, not module.conv1.weight.]

So how do we solve it?

Here net is your instantiated model.

checkpoint refers to the "ckpt" mentioned earlier, i.e. the loaded pretrained weights (assumed here to wrap the weights under a 'state_dict' entry).

net.load_state_dict({k.replace('module.', ''): v for k, v in checkpoint['state_dict'].items()})
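A self-contained sketch of the prefix stripping; plain strings stand in for the real tensors, and the checkpoint layout (a top-level 'state_dict' entry) is the one assumed above:

```python
from collections import OrderedDict

# Hypothetical checkpoint contents (assumption, for illustration only):
checkpoint = {"state_dict": OrderedDict([
    ("module.conv1.weight", "w"),
    ("module.bn1.bias", "b"),
])}

# Strip only the leading 'module.' (count=1), so a layer whose name happens
# to contain 'module.' deeper inside is not mangled.
cleaned = OrderedDict(
    (k.replace("module.", "", 1), v)
    for k, v in checkpoint["state_dict"].items()
)
print(list(cleaned))  # ['conv1.weight', 'bn1.bias']
```

Passing count=1 to str.replace is a small safety improvement over the one-liner in the article, which would also rewrite any later occurrence of 'module.' in a key.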

Appendix:

In PyTorch, the pretrained weights loaded with torch.load() are indeed a dictionary, known as the state dict (state_dict). The state dict holds key-value pairs for all of the model's learnable parameters (such as weights and biases): the key is usually the parameter's name, and the value is the corresponding Tensor holding the actual values.

When you load pretrained weights from a file, the code usually looks like this:

import torch

pretrained_weights_path = 'path_to_pretrained_model.pth'
state_dict = torch.load(pretrained_weights_path)

Here, state_dict is the dictionary containing the model weights. You can then load these weights into your model with model.load_state_dict(state_dict), provided your model's structure matches that of the pretrained model, or you have handled any mismatches appropriately.
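The following round-trip (a toy nn.Linear standing in for the ResNet34 file, written to a temporary path) shows that what torch.load() returns is exactly such a dictionary of parameter names to tensors:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Save a toy model's state_dict, then load it back (toy stand-in, not the
# real resnet34-pre.pth).
net = nn.Linear(4, 2)
path = os.path.join(tempfile.mkdtemp(), "toy.pth")
torch.save(net.state_dict(), path)

state_dict = torch.load(path, map_location="cpu")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
# weight (2, 4)
# bias (2,)
```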

Source: http://www.coloradmin.cn/o/1923605.html
