Converting ONNX to an engine file with C++ on Windows 11, for TensorRT-accelerated inference

2024/11/24 11:38:40

Preparation: install and configure CUDA, cuDNN, VS2019, and TensorRT.

You can refer to my earlier post (it covers the setup of CUDA 11.2, cuDNN for CUDA 11.2, VS2019, and TensorRT):

WIN11 + CUDA 11.2 + VS2019 + TensorRT 8.6 + YOLOv3/4/5 model acceleration — Vertira's blog, CSDN

Building on that setup, we download OpenCV 4 (for later use) and then create the onnx2TensorRT project.

1. Create a console application in VS2019

If this is a fresh installation without the C++ workload (if you already have it, skip this and go straight to step 2):

Open the Visual Studio Installer and click the "Modify" button.

This brings you to the workload-selection screen.

Install the "Desktop development with C++" workload; many workloads are listed, but C++ is the only one we need here.

Once it is checked, a "Modify" button appears in the lower right; click it to start the installation.

Then wait for it to finish (it takes a while).

Then:

2. Create and run the Windows console program

File → New → Project.

Choose "Console App",

then Next.

In the dialog that appears, set the project name (onnx2TensorRT) and path, then confirm to create it.

Open the project folder and the project itself; Visual Studio generates a .cpp source file that prints "hello world". Clear that file out.

Then paste in the code below. (Because the include and library paths are not configured yet, the editor will show many red squiggles — don't worry, we configure CUDA and TensorRT next. No image display is used here, so I have not configured OpenCV for now.)

#include <iostream>
#include <fstream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

// Instantiate the logger interface. Capture all warning messages, but ignore informational messages
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        // suppress info-level messages
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} logger;


void ONNX2TensorRT(const char* ONNX_file, std::string save_engine)
{
    // 1. Create an instance of the builder
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);

    // 2. Create a network definition (explicit batch is required by the ONNX parser)
    uint32_t flag = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    nvinfer1::INetworkDefinition* network = builder->createNetworkV2(flag);

    // 3. Create an ONNX parser to populate the network
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, logger);

    // 4. Read the model file and report any errors
    parser->parseFromFile(ONNX_file, static_cast<int32_t>(nvinfer1::ILogger::Severity::kWARNING));
    for (int32_t i = 0; i < parser->getNbErrors(); ++i)
    {
        std::cout << parser->getError(i)->desc() << std::endl;
    }

    // 5. Create a build configuration that specifies how TensorRT should optimize the model
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

    // 6. Set properties that control how TensorRT optimizes the network
    // Set the workspace memory pool size (16 MiB here)
    config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 16 * (1 << 20));
    // Enable reduced precision; comment this out to build in FP32
    if (builder->platformHasFastFp16())
    {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }

    // 7. With the configuration specified, build the serialized engine
    nvinfer1::IHostMemory* serializedModel = builder->buildSerializedNetwork(*network, *config);

    // 8. Save the TensorRT model
    std::ofstream p(save_engine, std::ios::binary);
    p.write(reinterpret_cast<const char*>(serializedModel->data()), serializedModel->size());

    // 9. The serialized engine contains the necessary copies of the weights, so the parser,
    //    network definition, builder configuration and builder can now be safely deleted
    delete parser;
    delete network;
    delete config;
    delete builder;

    // 10. The engine has been saved to disk, so the buffer it was serialized into can be deleted
    delete serializedModel;
}


void exportONNX(const char* ONNX_file, std::string save_engine)
{
    std::ifstream file(ONNX_file, std::ios::binary);
    if (!file.good())
    {
        std::cout << "Load ONNX file failed! No file found at: " << ONNX_file << std::endl;
        return;
    }

    std::cout << "Load ONNX file from: " << ONNX_file << std::endl;
    std::cout << "Starting export ..." << std::endl;

    ONNX2TensorRT(ONNX_file, save_engine);

    std::cout << "Export success, saved as: " << save_engine << std::endl;
}


int main(int argc, char** argv)
{
    // Input and output paths
    const char* ONNX_file   = "../weights/test.onnx";
    std::string save_engine = "../weights/test.engine";

    exportONNX(ONNX_file, save_engine);

    return 0;
}
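Once the engine file exists, a later inference program has to read it back from disk and deserialize it — the inverse of step 8 above. The helper below is a minimal sketch of the file-reading half (not from the original post) using only the standard library; `loadEngineFile` is a hypothetical name, and the TensorRT calls that would consume the buffer are shown only in comments because they need the headers we configure next.

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical helper: read a serialized .engine file into a byte buffer.
// In a real inference program the buffer would then be handed to TensorRT,
// e.g. (sketch, assuming TensorRT 8.x):
//   nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
//   nvinfer1::ICudaEngine* engine =
//       runtime->deserializeCudaEngine(buf.data(), buf.size());
std::vector<char> loadEngineFile(const std::string& path)
{
    // Open at the end so tellg() immediately gives the file size
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file.good())
    {
        std::cout << "Load engine file failed: " << path << std::endl;
        return {};
    }
    std::streamsize size = file.tellg();   // current position == file size
    file.seekg(0, std::ios::beg);
    std::vector<char> buf(static_cast<size_t>(size));
    file.read(buf.data(), size);
    return buf;
}
```

The engine format is not portable: a serialized engine should be deserialized with the same TensorRT version and GPU it was built with.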

3. Configure project properties — VC++ Directories

Now open the project's property pages and set the following.

Include Directories: the CUDA include folder and the TensorRT include folders:

$(CUDA_PATH)\include                            
D:\software\TensorRT-8.4.1.5_CUDA11.6_Cudnn8.4.1\include
D:\software\TensorRT-8.4.1.5_CUDA11.6_Cudnn8.4.1\samples\common


Library Directories: the CUDA lib folders and the TensorRT lib folder:

$(CUDA_PATH)\lib
$(CUDA_PATH)\lib\x64
D:\software\TensorRT-8.4.1.5_CUDA11.6_Cudnn8.4.1\lib

Note: $(CUDA_PATH) can also be replaced with an absolute path.

4. Configure properties — Linker

Linker → Input → Additional Dependencies:

nvinfer.lib
nvinfer_plugin.lib
nvonnxparser.lib
nvparsers.lib
cudnn.lib
cublas.lib
cudart.lib

Then click OK to confirm.

5. Create the model folder

Create a weights folder and put your ONNX file into it.

Note: the location of the weights folder must match the relative paths defined at the bottom of the program (in main).

Then run in Debug mode; if you need a Release build, run in Release mode instead.
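The "must match" note above comes down to how relative paths resolve: `../weights/test.onnx` is resolved against the process's current working directory, not against the folder containing the .exe. When Visual Studio launches the program, the working directory is normally the project directory, so `../weights` points one level above the project folder. The helper below (a standard C++17 sketch, not part of the original program; `resolveAgainstCwd` is a hypothetical name) shows where a relative path actually lands:

```cpp
#include <filesystem>
#include <string>

// Hypothetical helper: resolve a relative path the same way the program's
// ifstream/ofstream will - against the process's current working directory,
// NOT against the directory containing the .exe.
std::filesystem::path resolveAgainstCwd(const std::string& relative)
{
    // std::filesystem::absolute prepends the current working directory
    return std::filesystem::absolute(std::filesystem::path(relative));
}
```

If you launch the .exe directly from x64\Debug instead, the same relative path resolves somewhere else entirely; a quick `std::filesystem::exists("../weights/test.onnx")` check before exporting gives a much clearer failure than a parser error.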

The run produces:

Load ONNX file from: ../weights/model.onnx
Starting export ...
onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
TensorRT was linked against cuDNN 8.4.1 but loaded cuDNN 8.2.1
Weights [name=Conv_3 + Relu_4.weight] had the following issues when converted to FP16:
 - Subnormal FP16 values detected.
 - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Weights [name=Gemm_8 + Relu_9.weight] had the following issues when converted to FP16:
 - Subnormal FP16 values detected.
 - Values less than smallest positive FP16 Subnormal value detected. Converting to FP16 minimum subnormalized value.
If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Weights [name=Gemm_8 + Relu_9.bias] had the following issues when converted to FP16:
 - Subnormal FP16 values detected.
If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
Weights [name=Gemm_10.weight] had the following issues when converted to FP16:
 - Subnormal FP16 values detected.
If this is not the desired behavior, please modify the weights or retrain with regularization to reduce the magnitude of the weights.
[... the same subnormal-FP16 warnings for these weights repeat many times while TensorRT evaluates tactics; repeats omitted ...]
The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
Export success, saved as: ../weights/model.engine

D:\VS_project\onnx2TensorRT\x64\Debug\onnx2TensorRT.exe (process 17936) exited with code 0.
To automatically close the console when debugging stops, enable Tools -> Options -> Debugging -> "Automatically close the console when debugging stops".
Press any key to close this window . . .

As the final lines show, the program ran successfully with no errors: "Export success, saved as: ../weights/model.engine" was printed and the process exited with code 0. The getMaxBatchSize() warning is informational only — the network was built with the kEXPLICIT_BATCH flag, so that function always returns 1 and is simply not needed here.

6. Note: if the following runtime warning appears

Problem — missing zlibwapi.dll

If zlibwapi.dll is reported as missing, it needs to be downloaded:

Link: https://pan.baidu.com/s/12sVdiDH-NOOZNI9QqJoZuA?pwd=a0n0
Extraction code: a0n0

After downloading, unzip it and put zlibwapi.dll into …\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin (my path), then rerun the program. If you never saw the error, ignore this note.

7. Generate the Release engine file

Switch to the Release configuration and repeat the property configuration (the VC++ directories and linker input are per-configuration settings, so they must be set again for Release), then build and run.
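As a side note (not from the original post): the TensorRT distribution also ships a command-line tool, trtexec, in its bin folder, which can do the same ONNX-to-engine conversion without writing any code. A sketch, assuming TensorRT 8.x and illustrative paths:

```shell
# Build an FP16 engine from an ONNX model with trtexec (TensorRT 8.x).
# Paths are illustrative - adjust them to your model locations.
trtexec --onnx=weights/test.onnx --saveEngine=weights/test.engine --fp16
```

This is a convenient cross-check: if trtexec converts the model successfully, a failure in the C++ program points at the project configuration rather than the model.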

Done.


Reference:

Win10 C++: converting ONNX to TensorRT — 田小草呀's blog, CSDN

