MATLAB: Analyzing Hyperspectral Curves and Time-Series Data with a 1D-CNN (One-Dimensional Convolutional Neural Network)


Introduction to the 1D-CNN

A 1D-CNN (one-dimensional convolutional neural network) is a convolutional neural network designed for one-dimensional sequence data. The architecture typically consists of alternating convolutional and pooling layers, with fully connected layers at the end that map the extracted features to the output.

The main components and characteristics of a 1D-CNN are:

  1. Input layer: receives the one-dimensional sequence as the model input.
  2. Convolutional layers: slide a set of trainable kernels over the input to extract features. Convolution efficiently captures local information and thus picks up local patterns in the sequence.
  3. Activation functions: apply a nonlinear transformation to the convolutional output, increasing the model's expressive power.
  4. Pooling layers: downsample the convolutional output, cutting computation while improving robustness and generalization.
  5. Fully connected layers: map the pooled features to the model output, typically for classification or regression.

When using a 1D-CNN you normally have to choose several hyperparameters, such as the kernel size, the number of convolutional layers, the pooling scheme, and the activation function.
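As a minimal sketch of how these components fit together in MATLAB's Deep Learning Toolbox (an illustration only, not one of the three networks introduced below; the input length 1293 and the filter counts are placeholder choices), a 1-D sequence can be treated as a 1 x N x 1 image:

```matlab
% Minimal 1D-CNN for regression; the sequence is shaped as a 1 x 1293 x 1 "image".
layers = [
    imageInputLayer([1 1293 1])                      % 1. input layer
    convolution2dLayer([1 5],16,"Padding","same")    % 2. conv: 16 kernels of width 5
    reluLayer                                        % 3. activation
    maxPooling2dLayer([1 2],"Stride",[1 2])          % 4. pooling: halve the length
    fullyConnectedLayer(32)                          % 5. fully connected feature mapping
    fullyConnectedLayer(1)                           %    single regression output
    regressionLayer];
```

The same 1 x N x 1 convention is used throughout this post, which is why the networks are built from `convolution2dLayer` calls with a kernel height of 1.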

Comparison with traditional machine learning

First, a 1D-CNN is a deep learning model that uses convolutional layers to automatically extract features from one-dimensional sequence data (audio, text, and so on). This differs from traditional machine learning, where features usually have to be designed and selected by hand. Automatic feature extraction reduces the manual feature-engineering workload and can discover more complex feature representations. Second, a 1D-CNN captures local relationships in sequence data particularly well: the convolution slides a fixed-size window over the input to extract local features, which is why 1D-CNNs excel at speech recognition, natural language processing, and time-series prediction. Traditional models such as support vector machines (SVMs) or decision trees generally lack this ability to model local structure.

Note, however, that at small data scales, say only 100-odd input variables, a 1D-CNN has no real advantage over traditional machine learning models; its performance is generally on par with them. Because convolution both extracts and compresses target features, the longer the input (the more variables), the more the 1D-CNN pulls ahead, which is why it performs notably well in time-series regression, hyperspectral analysis, stock prediction, and audio analysis. Also, 1D-CNN regression and classification demand a fairly large sample size: the convolutional structure itself is sensitive to noise, and with few samples severe overfitting is very likely. Roughly 800+ samples is recommended for good results.

Three custom 1D-CNN architectures

A VGG-style 1D-CNN (VNet)

VNet, designed on a VGG backbone, follows the optimized structure of Chen Qing et al.: a kernel size of 4, six convolutional layers, and a dropout layer with rate 0.3 after each average-pooling layer to prevent overfitting. It has about 342K parameters.

MATLAB code:

function layers=creatCNN2D_VGG(inputsize)
filter=16;
layers = [
    imageInputLayer([inputsize],"Name","imageinput")
    % block 1: two width-4 convolutions, then downsample the sequence by 2
    convolution2dLayer([1 4],filter,"Name","conv","Padding","same")
    convolution2dLayer([1 4],filter,"Name","conv_1","Padding","same")
    maxPooling2dLayer([1 2],"Name","maxpool","Padding","same","Stride",[1 2])
    % block 2: double the channel count, then downsample by 2 again
    convolution2dLayer([1 4],filter*2,"Name","conv_2","Padding","same")
    convolution2dLayer([1 4],filter*2,"Name","conv_3","Padding","same")
    maxPooling2dLayer([1 2],"Name","maxpool_1","Padding","same","Stride",[1 2])
    % head: fully connected layers mapping features to a single regression output
    fullyConnectedLayer(filter*8,"Name","fc")
    fullyConnectedLayer(1,"Name","fc_1")
    regressionLayer("Name","regressionoutput")];

An EfficientNet-style 1D-CNN (ENet)

ENet uses the Swish activation function and adds skip connections and SE (squeeze-and-excitation) attention. This not only enables a deeper convolutional stack but also lets the network weigh features along the channel dimension, giving it an edge when the data scale is large. About 170.4K parameters.

Code:

function lgraph=creatCNN2D_EffiPlus2(inputsize)

filter=8;
lgraph = layerGraph();

tempLayers = [
    imageInputLayer(inputsize,"Name","imageinput")
    convolution2dLayer([1 3],filter,"Name","conv_11","Padding","same","Stride",[1 2])%'DilationFactor',[1,2]
    batchNormalizationLayer("Name","batchnorm_8")
    swishLayer("Name","swish_1_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter,"Name","conv_1_1","Padding","same","Stride",[1 1])%%
    batchNormalizationLayer("Name","batchnorm_1_1")
    swishLayer("Name","swish_1_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_1_1")
    convolution2dLayer([1 1],2,"Name","conv_2_1_1","Padding","same")
    swishLayer("Name","swish_2_1_1")
    convolution2dLayer([1 1],filter,"Name","conv_3_1_1","Padding","same")
    sigmoidLayer("Name","sigmoid_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_3")
    convolution2dLayer([1 3],filter*2,"Name","conv","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm")
    swishLayer("Name","swish_1_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_1","Padding","same","Stride",[1 1])%%
    batchNormalizationLayer("Name","batchnorm_1")
    swishLayer("Name","swish_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_1")
    convolution2dLayer([1 1],4,"Name","conv_2_1","Padding","same")
    swishLayer("Name","swish_2_1")
    convolution2dLayer([1 1],filter*2,"Name","conv_3_1","Padding","same")
    sigmoidLayer("Name","sigmoid_1")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication")
    convolution2dLayer([1 3],filter*4,"Name","conv_9","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_6")
    swishLayer("Name","swish_1_4")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_10","Padding","same","Stride",[1 1])%%
    batchNormalizationLayer("Name","batchnorm_7")
    swishLayer("Name","swish_1_3")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool_2")
    convolution2dLayer([1 1],8,"Name","conv_2_2","Padding","same")
    swishLayer("Name","swish_2_2")
    convolution2dLayer([1 1],filter*4,"Name","conv_3_2","Padding","same")
    sigmoidLayer("Name","sigmoid_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_2")
    convolution2dLayer([1 3],filter*8,"Name","conv_5","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 1],filter*8,"Name","conv_6","Padding","same")
    batchNormalizationLayer("Name","batchnorm_3")
    swishLayer("Name","swish")
    convolution2dLayer([1 3],filter*8,"Name","conv_7","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")
    swishLayer("Name","swish_1_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    globalAveragePooling2dLayer("Name","gapool")
    convolution2dLayer([1 1],12,"Name","conv_2","Padding","same")
    swishLayer("Name","swish_2")
    convolution2dLayer([1 1],filter*8,"Name","conv_3","Padding","same")
    sigmoidLayer("Name","sigmoid")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    multiplicationLayer(2,"Name","multiplication_1")
    convolution2dLayer([1 3],filter*8,"Name","conv_8","Padding","same")
    batchNormalizationLayer("Name","batchnorm_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition")
    convolution2dLayer([1 3],1,"Name","conv_4","Padding","same")
    swishLayer("Name","swish_3")
    averagePooling2dLayer([1 3],"Name","avgpool2d","Padding","same")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);

lgraph = connectLayers(lgraph,"swish_1_1_1","conv_1_1");
lgraph = connectLayers(lgraph,"swish_1_1_1","gapool_1_1");
lgraph = connectLayers(lgraph,"swish_1_5","multiplication_3/in1");
lgraph = connectLayers(lgraph,"sigmoid_1_1","multiplication_3/in2");
lgraph = connectLayers(lgraph,"swish_1_1","conv_1");
lgraph = connectLayers(lgraph,"swish_1_1","gapool_1");
lgraph = connectLayers(lgraph,"swish_1","multiplication/in1");
lgraph = connectLayers(lgraph,"sigmoid_1","multiplication/in2");
lgraph = connectLayers(lgraph,"swish_1_4","conv_10");
lgraph = connectLayers(lgraph,"swish_1_4","gapool_2");
lgraph = connectLayers(lgraph,"swish_1_3","multiplication_2/in1");
lgraph = connectLayers(lgraph,"sigmoid_2","multiplication_2/in2");
lgraph = connectLayers(lgraph,"batchnorm_2","conv_6");
lgraph = connectLayers(lgraph,"batchnorm_2","addition/in2");
lgraph = connectLayers(lgraph,"swish_1_2","gapool");
lgraph = connectLayers(lgraph,"swish_1_2","multiplication_1/in1");
lgraph = connectLayers(lgraph,"sigmoid","multiplication_1/in2");
lgraph = connectLayers(lgraph,"batchnorm_5","addition/in1");

A ResNet-style 1D-CNN (RNet)

RNet is built from three residual blocks. Its structure is leaner than ENet's and its model capacity smaller; in my experience its performance is the most well-rounded. About 33.7K parameters.

function lgraph=creatCNN2D_ResNet(inputsize)
lgraph = layerGraph();
filter=16;

tempLayers = [
    imageInputLayer([inputsize],"Name","imageinput")
    convolution2dLayer([1 3],filter,"Name","conv","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm")
    reluLayer("Name","relu")
    maxPooling2dLayer([1 3],"Name","maxpool","Padding",'same',"Stride",[1 2])];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter,"Name","conv_1","Padding","same")
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")
    convolution2dLayer([1 3],filter,"Name","conv_2","Padding","same")
    batchNormalizationLayer("Name","batchnorm_2")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition")
    reluLayer("Name","relu_3")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_3","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_3")
    reluLayer("Name","relu_2")
    convolution2dLayer([1 3],filter*2,"Name","conv_4","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*2,"Name","conv_8","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_8")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition_1")
    reluLayer("Name","relu_5")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_5","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_5")
    reluLayer("Name","relu_4")
    convolution2dLayer([1 3],filter*4,"Name","conv_6","Padding","same")
    batchNormalizationLayer("Name","batchnorm_6")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    convolution2dLayer([1 3],filter*4,"Name","conv_7","Padding","same","Stride",[1 2])
    batchNormalizationLayer("Name","batchnorm_7")];
lgraph = addLayers(lgraph,tempLayers);

tempLayers = [
    additionLayer(2,"Name","addition_2")
    reluLayer("Name","res3a_relu")
    globalMaxPooling2dLayer("Name","gmpool")
    fullyConnectedLayer(1,"Name","fc")
    regressionLayer("Name","regressionoutput")];
lgraph = addLayers(lgraph,tempLayers);


lgraph = connectLayers(lgraph,"maxpool","conv_1");
lgraph = connectLayers(lgraph,"maxpool","addition/in2");
lgraph = connectLayers(lgraph,"batchnorm_2","addition/in1");
lgraph = connectLayers(lgraph,"relu_3","conv_3");
lgraph = connectLayers(lgraph,"relu_3","conv_8");
lgraph = connectLayers(lgraph,"batchnorm_4","addition_1/in1");
lgraph = connectLayers(lgraph,"batchnorm_8","addition_1/in2");
lgraph = connectLayers(lgraph,"relu_5","conv_5");
lgraph = connectLayers(lgraph,"relu_5","conv_7");
lgraph = connectLayers(lgraph,"batchnorm_6","addition_2/in1");
lgraph = connectLayers(lgraph,"batchnorm_7","addition_2/in2");

Structural diagrams of ENet and RNet

Training code and a worked example

Training code

Using RNet, we train on samples of length 1293 for a regression task. The code is as follows:

clear all

load("TestData2.mat");

%data split (optional Kennard-Stone split; the ks function is given below)
%[AT,AP]=ks(Alltrain,588);
num_div=1;  %number of target columns at the end of each row

%load the already-split data directly


[numsample,sampleSize]=size(AT);
for i=1:numsample
    XTrain(:,:,1,i)=AT(i,1:end-num_div);
    YTrain(i,1)=AT(i,end);
end
[numtest,~]=size(AP);
for i=1:numtest
    XTest(:,:,1,i)=AP(i,1:end-num_div);
    YTest(i,1)=AP(i,end);
end


%Ytrain=inputData(:,end);
figure
histogram(YTrain)
axis tight
ylabel('Counts')
xlabel('TDS')

options = trainingOptions('adam', ...
    'MaxEpochs',150, ...
    'MiniBatchSize',64, ...
    'InitialLearnRate',0.008, ...
    'GradientThreshold',1, ...
    'Verbose',false,...
    'Plots','training-progress',...
    'ValidationData',{XTest,YTest});

layerN=creatCNN2D_ResNet([1,1293,1]); %create the network; change the function name to use another architecture

[Net, traininfo] = trainNetwork(XTrain,YTrain,layerN,options);


YPredicted = predict(Net,XTest);
predictionError = YTest- YPredicted;
squares = predictionError.^2;
rmse = sqrt(mean(squares))
[R P] = corrcoef(YTest,YPredicted)
scatter(YPredicted,YTest,'+')
xlabel("Predicted Value")
ylabel("True Value")
R2=R(1,2)^2;
hold on
plot([0 2000], [-0 2000],'r--')

The training data input looks like this (the last column holds the target value):

The training process looks like this:

Training data

    Source data: TestData2.mat

Link: https://pan.baidu.com/s/1B1o2xB4aUFOFLzZbwT-7aw?pwd=1xe5
Extraction code: 1xe5
 

Training tips

    In my experience, VNet has the simplest structure but the weakest overall performance. For data of length 800-3000, the lower-capacity RNet tends to outperform ENet; for one-dimensional data longer than 3000, ENet does better.

    On hyperparameter design: first, a mini-batch size below 64 tends to give better final results, and one-dimensional CNNs train quickly anyway. Second, unlike images, one-dimensional data usually has high numeric precision (images are typically uint8 or uint16), so the learning rate should not be too high, or performance degrades. The learning rate that worked best for me is 0.008; anything between 0.015 and 0.0005 is fine, while above 0.05 results start to fall off.

Other helper functions

Kennard-Stone (KS) data splitting

The Kennard-Stone (KS) method is a common way to split a dataset, particularly in fields such as chemometrics. Its core idea is to select training samples that are uniformly distributed, by spatial distance, over the sample space.

function [XSelected,XRest,vSelectedRowIndex]=ks(X,Num) %Num is typically about two thirds of the sample count
%  ks selects the samples XSelected which are uniformly distributed in the experimental data X's space
%  Input
%         X: the matrix of the sample spectra
%         Num: the number of sample spectra you want to select
%  Output
%         XSelected: the sample spectra selected from X
%         XRest: the sample spectra remaining in X after selection
%         vSelectedRowIndex: the row indices of the selected samples in the X matrix
%  Programmer: zhimin zhang @ central south university on oct 28, 2007


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
% start of the kennard-stone step one 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 

[nRow,nCol]=size(X); % obtain the size of the X matrix 
mDistance=zeros(nRow,nRow); %dim a matrix for the distance storage 
vAllofSample=1:nRow; 

for i=1:nRow-1 
     
    vRowX=X(i,:); % obtain a row of X 
     
    for j=i+1:nRow 
         
        vRowX1=X(j,:); % obtain another row of X         
        mDistance(i,j)=norm(vRowX-vRowX1); % calc the Euclidean distance 
         
         
    end 
     
end 


[vMax,vIndexOfmDistance]=max(mDistance); 

[nMax,nIndexofvMax]=max(vMax); 


%vIndexOfmDistance(1,nIndexofvMax) 
%nIndexofvMax 
vSelectedSample(1)=nIndexofvMax; 
vSelectedSample(2)=vIndexOfmDistance(nIndexofvMax); 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
% end of the kennard-stone step one 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 





%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
% start of the kennard-stone step two 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 

for i=3:Num 
    vNotSelectedSample=setdiff(vAllofSample,vSelectedSample); 
    vMinDistance=zeros(1,nRow-i + 1); 
     
     
    for j=1:(nRow-i+1) 
        nIndexofNotSelected=vNotSelectedSample(j); 
        vDistanceNew = zeros(1,i-1); 
         
        for k=1:(i-1) 
            nIndexofSelected=vSelectedSample(k); 
            if(nIndexofSelected<=nIndexofNotSelected) 
                vDistanceNew(k)=mDistance(nIndexofSelected,nIndexofNotSelected); 
            else 
                vDistanceNew(k)=mDistance(nIndexofNotSelected,nIndexofSelected);     
            end                        
        end 
         
        vMinDistance(j)=min(vDistanceNew); 
    end 
     
    [nUseless,nIndexofvMinDistance]=max(vMinDistance); 
    vSelectedSample(i)=vNotSelectedSample(nIndexofvMinDistance); 
end 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
%%%%% end of the kennard-stone step two 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 






%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
%%%%% start of export the result 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
vSelectedRowIndex=vSelectedSample; 

for i=1:length(vSelectedSample) 
    
    XSelected(i,:)=X(vSelectedSample(i),:); 
end 

vNotSelectedSample=setdiff(vAllofSample,vSelectedSample); 
for i=1:length(vNotSelectedSample) 
    
    XRest(i,:)=X(vNotSelectedSample(i),:); 
end 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
%%%%% end of export the result 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 
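A usage sketch for `ks`, with placeholder data (the matrix sizes here are hypothetical; in the training script above, `ks(Alltrain,588)` picked 588 training rows from the full dataset):

```matlab
% Split 900 hypothetical samples into ~2/3 training and 1/3 test by KS.
X = rand(900,1294);                 % placeholder: spectra plus a final target column
nTrain = round(2/3*size(X,1));      % number of samples to select for training
[AT,AP,idxTrain] = ks(X,nTrain);    % AT: selected rows; AP: the rest; idxTrain: row indices
```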

References

卷积神经网络的紫外-可见光谱水质分类方法 (wanfangdata.com.cn)

光谱技术结合水分校正与样本增广的棉田土壤盐分精准反演 (tcsae.org)
