Substation Fire Detection Project (TF2)


Contents

1. Project Background

2. Dataset (Substation Fire Detection Image Dataset)

3. Detection Model (SE-Improved YOLOv4-tiny)

4. Model Training and Testing

1. Project Background

      Our daily lives depend on electricity, and substations are a key link in the transmission and distribution system: an accident at a substation, such as a fire or explosion, can seriously damage the entire grid. Although utilities take extensive fire-prevention measures, substation fires and explosions still occur from time to time. A substation fire is an electrical fire, and the defining feature of this type of fire is that once it breaks out it spreads extremely fast, capable of destroying the equipment of an entire power system in moments. Such incidents are largely preventable, however, through regular fire-safety inspections of electrical equipment. For substations in particular, which house large amounts of electrical equipment, routine patrol inspections and related safety training are indispensable.

Figure: fire at a substation in Iraq

2. Dataset (Substation Fire Detection Image Dataset)

Figure: example images from the substation fire detection dataset

         This project collected more than 3,600 substation fire detection images, all at 768×768 resolution. Two target classes were annotated with labelImg: 1) fire; 2) smoke. The data were randomly split into training and test sets at a 9:1 ratio. The label preprocessing and dataset-splitting script is as follows:

import os
import random
import xml.etree.ElementTree as ET

import numpy as np

from utils.utils import get_classes


# 0: generate both the ImageSets txt files and the final train/val annotation files
annotation_mode     = 0
# path to the class-name list (one class per line)
classes_path        = 'model_data/defect.txt'
# (train + val) / total, then train / (train + val)
trainval_percent    = 0.9
train_percent       = 0.9

VOCdevkit_path  = 'VOCdevkit'

VOCdevkit_sets  = [('2007', 'train'), ('2007', 'val')]
classes, _      = get_classes(classes_path)

# per-split image counts and per-class object counts (for the summary table)
photo_nums  = np.zeros(len(VOCdevkit_sets))
nums        = np.zeros(len(classes))
def convert_annotation(year, image_id, list_file):
    """Append the boxes of one VOC XML annotation to list_file as ' x1,y1,x2,y2,cls_id'."""
    with open(os.path.join(VOCdevkit_path, 'VOC%s/Annotations/%s.xml'%(year, image_id)), encoding='utf-8') as in_file:
        tree = ET.parse(in_file)
    root = tree.getroot()

    for obj in root.iter('object'):
        difficult = 0
        if obj.find('difficult') is not None:
            difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (int(float(xmlbox.find('xmin').text)), int(float(xmlbox.find('ymin').text)),
             int(float(xmlbox.find('xmax').text)), int(float(xmlbox.find('ymax').text)))
        list_file.write(" " + ",".join(str(a) for a in b) + ',' + str(cls_id))

        nums[cls_id] += 1
        
if __name__ == "__main__":
    random.seed(0)
    if " " in os.path.abspath(VOCdevkit_path):
        raise ValueError("The dataset folder path and image file names must not contain spaces")

    if annotation_mode == 0 or annotation_mode == 1:
        print("Generate txt in ImageSets.")
        xmlfilepath     = os.path.join(VOCdevkit_path, 'VOC2007/Annotations')
        saveBasePath    = os.path.join(VOCdevkit_path, 'VOC2007/ImageSets/Main')
        temp_xml        = os.listdir(xmlfilepath)
        total_xml       = []
        for xml in temp_xml:
            if xml.endswith(".xml"):
                total_xml.append(xml)

        num      = len(total_xml)
        indices  = range(num)                    # avoid shadowing the built-in `list`
        tv       = int(num * trainval_percent)
        tr       = int(tv * train_percent)
        trainval = random.sample(indices, tv)
        train    = random.sample(trainval, tr)

        print("train and val size", tv)
        print("train size", tr)
        ftrainval = open(os.path.join(saveBasePath, 'trainval.txt'), 'w')
        ftest     = open(os.path.join(saveBasePath, 'test.txt'), 'w')
        ftrain    = open(os.path.join(saveBasePath, 'train.txt'), 'w')
        fval      = open(os.path.join(saveBasePath, 'val.txt'), 'w')

        for i in indices:
            name = total_xml[i][:-4] + '\n'      # strip the .xml extension
            if i in trainval:
                ftrainval.write(name)
                if i in train:
                    ftrain.write(name)
                else:
                    fval.write(name)
            else:
                ftest.write(name)

        ftrainval.close()
        ftrain.close()
        fval.close()
        ftest.close()
        print("Generate txt in ImageSets done.")

    if annotation_mode == 0 or annotation_mode == 2:
        print("Generate 2007_train.txt and 2007_val.txt for train.")
        type_index = 0
        for year, image_set in VOCdevkit_sets:
            with open(os.path.join(VOCdevkit_path, 'VOC%s/ImageSets/Main/%s.txt'%(year, image_set)), encoding='utf-8') as f:
                image_ids = f.read().strip().split()
            with open('%s_%s.txt'%(year, image_set), 'w', encoding='utf-8') as list_file:
                for image_id in image_ids:
                    list_file.write('%s/VOC%s/JPEGImages/%s.jpg'%(os.path.abspath(VOCdevkit_path), year, image_id))
                    convert_annotation(year, image_id, list_file)
                    list_file.write('\n')
            photo_nums[type_index] = len(image_ids)
            type_index += 1
        print("Generate 2007_train.txt and 2007_val.txt for train done.")
        
        def printTable(List1, List2):
            for i in range(len(List1[0])):
                print("|", end=' ')
                for j in range(len(List1)):
                    print(List1[j][i].rjust(int(List2[j])), end=' ')
                    print("|", end=' ')
                print()

        str_nums = [str(int(x)) for x in nums]
        tableData = [
            classes, str_nums
        ]
        colWidths = [0] * len(tableData)
        for i in range(len(tableData)):
            for j in range(len(tableData[i])):
                if len(tableData[i][j]) > colWidths[i]:
                    colWidths[i] = len(tableData[i][j])
        printTable(tableData, colWidths)
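For reference, each line of the generated 2007_train.txt / 2007_val.txt follows the pattern `<image_path> x1,y1,x2,y2,cls_id ...`, with boxes separated by spaces. A minimal sketch of parsing such a line (the path and box values below are made-up examples, not from the actual dataset):

```python
# Parse one annotation line produced by the script above.
# Format: <image_path> <x1,y1,x2,y2,cls_id> ... (space-separated boxes)
def parse_annotation_line(line):
    parts = line.strip().split()
    image_path = parts[0]
    boxes = []
    for box_str in parts[1:]:
        x1, y1, x2, y2, cls_id = map(int, box_str.split(','))
        boxes.append((x1, y1, x2, y2, cls_id))
    return image_path, boxes

# Hypothetical example line, for illustration only:
line = "VOCdevkit/VOC2007/JPEGImages/0001.jpg 10,20,100,200,0 30,40,60,80,1"
path, boxes = parse_annotation_line(line)
```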

 

3. Detection Model (SE-Improved YOLOv4-tiny)

      The main components of the SE-improved YOLOv4-tiny are summarized below (Keras `model.summary()` output):

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            [(None, 416, 416, 3) 0
__________________________________________________________________________________________________
zero_padding2d (ZeroPadding2D)  (None, 417, 417, 3)  0           input_1[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 208, 208, 32) 864         zero_padding2d[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 208, 208, 32) 128         conv2d[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU)         (None, 208, 208, 32) 0           batch_normalization[0][0]
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, 209, 209, 32) 0           leaky_re_lu[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 104, 104, 64) 18432       zero_padding2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 104, 104, 64) 256         conv2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU)       (None, 104, 104, 64) 0           batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 104, 104, 64) 36864       leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 104, 104, 64) 256         conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU)       (None, 104, 104, 64) 0           batch_normalization_2[0][0]
__________________________________________________________________________________________________
lambda (Lambda)                 (None, 104, 104, 32) 0           leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 104, 104, 32) 9216        lambda[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 104, 104, 32) 128         conv2d_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU)       (None, 104, 104, 32) 0           batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 104, 104, 32) 9216        leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 104, 104, 32) 128         conv2d_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU)       (None, 104, 104, 32) 0           batch_normalization_4[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 104, 104, 64) 0           leaky_re_lu_4[0][0]
                                                                 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 104, 104, 64) 4096        concatenate[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 104, 104, 64) 256         conv2d_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU)       (None, 104, 104, 64) 0           batch_normalization_5[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 104, 104, 128 0           leaky_re_lu_2[0][0]
                                                                 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D)    (None, 52, 52, 128)  0           concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 52, 52, 128)  147456      max_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 52, 52, 128)  512         conv2d_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU)       (None, 52, 52, 128)  0           batch_normalization_6[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 52, 52, 64)   0           leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 52, 52, 64)   36864       lambda_1[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 52, 52, 64)   256         conv2d_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU)       (None, 52, 52, 64)   0           batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 52, 52, 64)   36864       leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 52, 52, 64)   256         conv2d_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU)       (None, 52, 52, 64)   0           batch_normalization_8[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 52, 52, 128)  0           leaky_re_lu_8[0][0]
                                                                 leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 52, 52, 128)  16384       concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 52, 52, 128)  512         conv2d_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU)       (None, 52, 52, 128)  0           batch_normalization_9[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 52, 52, 256)  0           leaky_re_lu_6[0][0]
                                                                 leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 26, 26, 256)  0           concatenate_3[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 26, 26, 256)  589824      max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 26, 26, 256)  1024        conv2d_10[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU)      (None, 26, 26, 256)  0           batch_normalization_10[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda)               (None, 26, 26, 128)  0           leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 26, 26, 128)  147456      lambda_2[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 26, 26, 128)  512         conv2d_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU)      (None, 26, 26, 128)  0           batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 26, 26, 128)  147456      leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 26, 26, 128)  512         conv2d_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU)      (None, 26, 26, 128)  0           batch_normalization_12[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate)     (None, 26, 26, 256)  0           leaky_re_lu_12[0][0]
                                                                 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 26, 26, 256)  65536       concatenate_4[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 26, 26, 256)  1024        conv2d_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU)      (None, 26, 26, 256)  0           batch_normalization_13[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 26, 26, 512)  0           leaky_re_lu_10[0][0]
                                                                 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D)  (None, 13, 13, 512)  0           concatenate_5[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 13, 13, 512)  2359296     max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 13, 13, 512)  2048        conv2d_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU)      (None, 13, 13, 512)  0           batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 13, 13, 256)  131072      leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 13, 13, 256)  1024        conv2d_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU)      (None, 13, 13, 256)  0           batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 13, 13, 128)  32768       leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 13, 13, 128)  512         conv2d_18[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU)      (None, 13, 13, 128)  0           batch_normalization_17[0][0]
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D)    (None, 26, 26, 128)  0           leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
concatenate_6 (Concatenate)     (None, 26, 26, 384)  0           up_sampling2d[0][0]
                                                                 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 13, 13, 512)  1179648     leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 26, 26, 256)  884736      concatenate_6[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 13, 13, 512)  2048        conv2d_16[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 26, 26, 256)  1024        conv2d_19[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU)      (None, 13, 13, 512)  0           batch_normalization_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU)      (None, 26, 26, 256)  0           batch_normalization_18[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 13, 13, 255)  130815      leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 26, 26, 255)  65535       leaky_re_lu_18[0][0]
==================================================================================================
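As a sanity check on the summary above, the parameter counts follow the standard formulas: a k×k convolution with C_in input and C_out output channels has k·k·C_in·C_out weights (plus C_out biases where a bias is used), and batch normalization carries 4 parameters per channel (gamma, beta, moving mean, moving variance). Verifying a few rows:

```python
# Verify a few parameter counts from the model summary.
def conv_params(k, c_in, c_out, bias=False):
    """Parameters of a k x k Conv2D layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def bn_params(channels):
    """BatchNormalization: gamma, beta, moving mean, moving variance."""
    return 4 * channels

assert conv_params(3, 3, 32) == 864            # conv2d
assert bn_params(32) == 128                    # batch_normalization
assert conv_params(1, 512, 255, bias=True) == 130815   # conv2d_17 (13x13 head)
assert conv_params(1, 256, 255, bias=True) == 65535    # conv2d_20 (26x26 head)
```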

      The TF2 implementation of the SE attention module is as follows:

from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Activation, Dense,
                                     GlobalAveragePooling2D, Reshape,
                                     multiply)


def se_block(input_feature, ratio=16, name=""):
    channel = K.int_shape(input_feature)[-1]

    # Squeeze: global average pooling over H and W -> (1, 1, C)
    se_feature = GlobalAveragePooling2D()(input_feature)
    se_feature = Reshape((1, 1, channel))(se_feature)

    # Excitation: bottleneck FC (C -> C/ratio -> C) with sigmoid gating
    se_feature = Dense(channel // ratio,
                       activation='relu',
                       kernel_initializer='he_normal',
                       use_bias=False,
                       name="se_block_one_" + str(name))(se_feature)
    se_feature = Dense(channel,
                       kernel_initializer='he_normal',
                       use_bias=False,
                       name="se_block_two_" + str(name))(se_feature)
    se_feature = Activation('sigmoid')(se_feature)

    # Scale: reweight each input channel by its learned attention weight
    se_feature = multiply([input_feature, se_feature])
    return se_feature
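The effect of the SE block can be illustrated numerically without TensorFlow: global-average-pool each channel, pass the result through the two fully-connected layers, then rescale the input channel-wise. A minimal NumPy sketch of the same squeeze-and-excitation computation (random weights, for illustration only):

```python
import numpy as np

def se_block_numpy(x, w1, w2):
    """Squeeze-and-excitation on an (H, W, C) feature map.
    w1: (C, C//ratio) reduction weights; w2: (C//ratio, C) expansion weights."""
    # Squeeze: global average pooling -> (C,)
    s = x.mean(axis=(0, 1))
    # Excitation: FC -> ReLU -> FC -> sigmoid
    z = np.maximum(s @ w1, 0.0)
    a = 1.0 / (1.0 + np.exp(-(z @ w2)))   # channel attention weights in (0, 1)
    # Scale: reweight each channel of the input
    return x * a

rng = np.random.default_rng(0)
x  = rng.standard_normal((4, 4, 16))
w1 = rng.standard_normal((16, 4))   # ratio = 4 for this toy example
w2 = rng.standard_normal((4, 16))
y  = se_block_numpy(x, w1, w2)
```

Since the attention weights lie strictly in (0, 1), the output preserves the input shape and can only attenuate, never amplify, each channel.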

     The TF2 implementation of CSPDarknet53-tiny is as follows:

from functools import wraps

import tensorflow as tf
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import (BatchNormalization, Concatenate,
                                     Conv2D, Lambda, LeakyReLU,
                                     MaxPooling2D, ZeroPadding2D)
from tensorflow.keras.regularizers import l2
from utils.utils import compose


def route_group(input_layer, groups, group_id):
    # Split the channel axis into `groups` equal parts and keep part `group_id`
    convs = tf.split(input_layer, num_or_size_splits=groups, axis=-1)
    return convs[group_id]

@wraps(Conv2D)
def DarknetConv2D(*args, **kwargs):
    # Conv2D with Darknet defaults: normal init, L2 weight decay,
    # and 'valid' padding only for the stride-2 downsampling convolutions
    darknet_conv_kwargs = {'kernel_initializer': RandomNormal(stddev=0.02),
                           'kernel_regularizer': l2(kwargs.get('weight_decay', 5e-4))}
    darknet_conv_kwargs['padding'] = 'valid' if kwargs.get('strides') == (2, 2) else 'same'
    kwargs.pop('weight_decay', None)   # not a valid Conv2D argument
    darknet_conv_kwargs.update(kwargs)
    return Conv2D(*args, **darknet_conv_kwargs)

def DarknetConv2D_BN_Leaky(*args, **kwargs):
    # Conv2D -> BatchNorm -> LeakyReLU, the basic Darknet building block
    no_bias_kwargs = {'use_bias': False}
    no_bias_kwargs.update(kwargs)
    return compose(
        DarknetConv2D(*args, **no_bias_kwargs),
        BatchNormalization(),
        LeakyReLU(alpha=0.1))

def resblock_body(x, num_filters, weight_decay=5e-4):
    # CSP residual block: 3x3 conv, split the channels, run two 3x3 convs on
    # one half, concatenate, 1x1 transition, then merge with the route and downsample
    x = DarknetConv2D_BN_Leaky(num_filters, (3,3), weight_decay=weight_decay)(x)
    route = x
    x = Lambda(route_group, arguments={'groups':2, 'group_id':1})(x)
    x = DarknetConv2D_BN_Leaky(int(num_filters/2), (3,3), weight_decay=weight_decay)(x)
    route_1 = x
    x = DarknetConv2D_BN_Leaky(int(num_filters/2), (3,3), weight_decay=weight_decay)(x)
    x = Concatenate()([x, route_1])
    x = DarknetConv2D_BN_Leaky(num_filters, (1,1), weight_decay=weight_decay)(x)
    feat = x
    x = Concatenate()([route, x])
    x = MaxPooling2D(pool_size=[2,2])(x)
    return x, feat
    
def darknet_body(x, weight_decay=5e-4):
    # Stem: two stride-2 3x3 convs (416 -> 208 -> 104)
    x = ZeroPadding2D(((1,0),(1,0)))(x)
    x = DarknetConv2D_BN_Leaky(32, (3,3), strides=(2,2), weight_decay=weight_decay)(x)
    x = ZeroPadding2D(((1,0),(1,0)))(x)
    x = DarknetConv2D_BN_Leaky(64, (3,3), strides=(2,2), weight_decay=weight_decay)(x)
    # Three CSP blocks; the third also yields the 26x26 feature map (feat1)
    x, _ = resblock_body(x, num_filters=64, weight_decay=weight_decay)
    x, _ = resblock_body(x, num_filters=128, weight_decay=weight_decay)
    x, feat1 = resblock_body(x, num_filters=256, weight_decay=weight_decay)
    x = DarknetConv2D_BN_Leaky(512, (3,3), weight_decay=weight_decay)(x)
    # feat2 is the 13x13 feature map fed to the detection head
    feat2 = x
    return feat1, feat2
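The `route_group` channel split is the core of the CSP ("cross stage partial") design: only half the channels go through the residual convolutions, while the other half are routed around them and re-joined by concatenation. The same split, sketched in NumPy for illustration:

```python
import numpy as np

def route_group_np(x, groups, group_id):
    """Split the channel (last) axis into `groups` equal parts and keep part
    `group_id`, mirroring tf.split(..., axis=-1) in route_group above."""
    return np.split(x, groups, axis=-1)[group_id]

x = np.arange(2 * 2 * 4).reshape(2, 2, 4)        # (H, W, C) with C = 4
half = route_group_np(x, groups=2, group_id=1)   # keep the second half of the channels
```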

4. Model Training and Testing

     The model was trained on an RTX 3060 GPU under Windows 10. Frozen-backbone training was used to speed up convergence, with a batch size of 32 and 300 training epochs. The trained model reached mAP = 78.79% at 168.18 FPS.
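For context, mAP is the mean of the per-class average precision, where a detection counts as a true positive when its IoU (intersection over union) with a ground-truth box exceeds a threshold, typically 0.5. A minimal IoU sketch with boxes given as (x1, y1, x2, y2), illustrative values only:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Identical boxes give IoU 1.0; disjoint boxes give 0.0.
```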

 

