How to Use trtexec Proficiently


Contents

  • How to Use trtexec Proficiently
    • Preface
    • 1. Parameter Reference
      • 1.1 Model Options
      • 1.2 Build Options
      • 1.3 Inference Options
      • 1.4 Reporting Options
      • 1.5 System Options
      • 1.6 Full Parameter List

How to Use trtexec Proficiently

Preface

These are notes from 杜老师's course on the trtexec tool (link). They are personal study notes, recorded for my own reference.

trtexec is a command-line application that ships with the TensorRT installation package. It greatly simplifies model compilation, precision configuration, performance tuning, and other everyday tasks in TensorRT development.
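For instance, the most common workflow is a single command that parses an ONNX model, builds an engine, saves it, and reports timing. A minimal sketch (the file names model.onnx and model.engine are placeholders for your own paths):

```shell
# Build an engine from an ONNX model and save the serialized result.
trtexec --onnx=model.onnx --saveEngine=model.engine

# Later runs can skip the build and benchmark the saved engine directly.
trtexec --loadEngine=model.engine
```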

The course outline is shown in the mind map below

[figure: mind map of the course outline]

1. Parameter Reference

An overview of the most important trtexec parameters

1.1 Model Options

  • --onnx=<file> ONNX model
    • Path to the ONNX model

1.2 Build Options

  • --minShapes=spec Build with dynamic shapes using a profile with the min shapes provided
  • --optShapes=spec Build with dynamic shapes using a profile with the opt shapes provided
  • --maxShapes=spec Build with dynamic shapes using a profile with the max shapes provided
    • The three options above specify the optimization profile for dynamic shapes
    • Example input shapes spec: input0:1x3x256x256,input1:1x3x128x128
  • --inputIOFormats=spec Type and format of each of the input tensors (default = all inputs in fp32:chw) Note: If this option is specified, please set comma-separated types and formats for all inputs.
    • Specifies the type and format of the inputs: FP32, FP16, INT8
  • --outputIOFormats=spec Type and format of each of the output tensors (default = all outputs in fp32:chw)
    • Specifies the type and format of the outputs: FP32, FP16, INT8
    • IO Formats:
      • spec ::= IOfmt[","spec]
      • IOfmt ::= type:fmt
      • type ::= "fp32"|"fp16"|"int32"|"int8"
      • fmt ::= ("chw"|"chw2"|"chw4"|"hwc8"|"chw16"|"chw32"|"dhwc8"| "cdhw32"|"hwc"|"dla_linear"|"dla_hwc4")["+"fmt]
    • Example: --inputIOFormats=fp32:chw,fp32:chw --outputIOFormats=fp16:chw,fp16:chw
  • --memPoolSize=poolspec Specify the size constraints of the designated memory pool(s) in MiB.
    • Replaces the older --workspace option; when a model needs shared memory, it is requested from the workspace pool
    • Example: --memPoolSize=workspace:1024.5,dlaSRAM:256
  • --profilingVerbosity=mode Specify profiling verbosity. mode ::= layer_names_only|detailed|none (default = layer_names_only)
    • Controls how detailed the printed information is
    • Example: --profilingVerbosity=detailed
  • --fp16 Enable fp16 precision, in addition to fp32 (default = disabled)
    • Enables FP16 precision
  • --int8 Enable int8 precision, in addition to fp32 (default = disabled)
    • Enables INT8 quantized precision
  • --calib=<file> Read INT8 calibration cache file
    • The INT8 calibration table, which stores the scale value of each tensor; not commonly used
    • Without a calibration table or Q/DQ nodes, the dynamic range is set to 4; this significantly hurts accuracy, so it is mainly useful for speed testing
  • --best Enable all precisions to achieve the best performance (default = disabled)
    • Uses all three precisions (FP32 + FP16 + INT8) together and picks whichever is fastest
  • --saveEngine=<file> Save the serialized engine
    • Saves the serialized engine file
  • --loadEngine=<file> Load a serialized engine
    • Loads a serialized engine file
  • --tacticSources=tactics Specify the tactics to be used by adding (+) or removing (-) tactics from the default tactic sources (default = all available tactics). Note: Currently only cuDNN, cuBLAS, cuBLAS-LT, and edge mask convolutions are listed as optional tactics.
    • Specifies where the build-time optimization tactics come from; fairly obscure and rarely used
    • Tactic Sources:
      • tactics ::= [","tactic]
      • tactic ::= (+|-)lib
      • lib ::= "CUBLAS"|"CUBLAS_LT"|"CUDNN"|"EDGE_MASK_CONVOLUTIONS"
    • For example, to disable cudnn and enable cublas: --tacticSources=-CUDNN,+CUBLAS
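Putting the build options above together, a sketch of building an FP16 engine with a dynamic-shape profile might look like the following (the input name input0 and all file names are assumptions for illustration):

```shell
# Build an FP16 engine whose batch dimension may vary from 1 to 16,
# with 8 as the shape the builder optimizes for.
trtexec --onnx=model.onnx \
        --minShapes=input0:1x3x224x224 \
        --optShapes=input0:8x3x224x224 \
        --maxShapes=input0:16x3x224x224 \
        --fp16 \
        --memPoolSize=workspace:1024 \
        --saveEngine=model_fp16.engine
```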

1.3 Inference Options

  • --shapes=spec Set input shapes for dynamic shapes inference inputs.
    • Sets the input shapes for dynamic-shape inference
  • --loadInputs=spec Load input values from files (default = generate random inputs). Input names can be wrapped with single quotes (ex: 'Input:0')
    • Useful when debugging a model to check whether the inference result matches PyTorch
    • The input binaries can simply be exported with numpy
    • For example: --loadInputs=input0:input0.binary,input1:input1.binary
  • --iterations=N Run at least N inference iterations (default = 10)
    • Runs at least N inference iterations
  • --warmUp=N Run for N milliseconds to warmup before measuring performance (default = 200)
    • Warms up for N milliseconds before performance measurement
  • --duration=N Run performance measurements for at least N seconds wallclock time (default = 3)
    • Runs for at least N seconds
  • --sleepTime=N Delay inference start with a gap of N milliseconds between launch and compute (default = 0)
    • Delays inference start by N milliseconds
  • --idleTime=N Sleep N milliseconds between two continuous iterations(default = 0)
    • Sleeps N milliseconds between two consecutive iterations
  • --streams=N Instantiate N engines to use concurrently (default = 1)
    • Launches N engine instances; useful for testing whether multi-stream execution improves throughput
  • --separateProfileRun Do not attach the profiler in the benchmark run; if profiling is enabled, a second profile run will be executed (default = disabled)
    • Separates the profiling run from the benchmark run
  • --buildOnly Exit after the engine has been built and skip inference perf measurement (default = disabled)
    • Builds the engine only, without running inference
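As an illustration of --loadInputs (all names here are hypothetical), the input binary can be dumped from numpy with tofile() and then fed to trtexec when checking parity against PyTorch:

```shell
# Export a fixed input tensor as raw binary (assumes python3 + numpy are installed).
python3 -c "import numpy as np; np.random.rand(1,3,224,224).astype(np.float32).tofile('input0.binary')"

# Run inference on that fixed input instead of random data,
# and print the outputs for comparison.
trtexec --loadEngine=model.engine \
        --shapes=input0:1x3x224x224 \
        --loadInputs=input0:input0.binary \
        --dumpOutput
```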

1.4 Reporting Options

  • --verbose Use verbose logging (default = false)
    • Uses verbose log output
  • --dumpOutput Print the output tensor(s) of the last inference iteration (default = disabled)
    • Prints the inference results directly
  • --dumpProfile Print profile information per layer (default = disabled)
    • Prints per-layer profile information
  • --dumpLayerInfo Print layer information of the engine to console (default = disabled)
    • Prints the engine's per-layer information
  • --exportOutput=<file> Write the output tensors to a json file (default = disabled)
    • Saves the output information to a file
  • --exportProfile=<file> Write the profile information per layer in a json file (default = disabled)
    • Saves the profile information to a file
  • --exportLayerInfo=<file> Write the layer information of the engine in a json file (default = disabled)
    • Saves the engine's layer information to a file
    • Example: --exportLayerInfo=layer.json --profilingVerbosity=detailed
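Combining the reporting options above, a typical per-layer profiling run (file names assumed) could be:

```shell
# Profile each layer and save the results as JSON for later analysis.
# --separateProfileRun keeps the profiler out of the timing pass.
trtexec --loadEngine=model.engine \
        --dumpProfile \
        --separateProfileRun \
        --exportProfile=profile.json \
        --exportLayerInfo=layer.json \
        --profilingVerbosity=detailed
```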

1.5 System Options

  • --device=N Select cuda device N (default = 0)
    • Selects the CUDA device
  • --useDLACore=N Select DLA core N for layers that support DLA (default = none)
    • Rarely used (only relevant on hardware with DLA cores, e.g. Jetson)
  • --allowGPUFallback When DLA is enabled, allow GPU fallback for unsupported layers (default = disabled)
    • When DLA is enabled, allows unsupported layers to fall back to the GPU
  • --plugins Plugin library (.so) to load (can be specified multiple times)
    • Loads plugins so that custom operators can be compiled into the engine
    • Example: --plugins=xxx.so --plugins=aaa.so --plugins=www.so
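For a model with custom operators, a sketch of loading a plugin library (the plugin and model names are placeholders):

```shell
# Pass --plugins both when building and when later loading the engine:
# the serialized engine references the plugin ops but does not contain them.
trtexec --onnx=model_with_custom_op.onnx \
        --plugins=libmy_plugin.so \
        --saveEngine=model.engine
```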

1.6 Full Parameter List

The full set of trtexec parameters is shown below

&&&& RUNNING TensorRT.trtexec [TensorRT v8401] # trtexec
=== Model Options ===
  --uff=<file>                UFF model
  --onnx=<file>               ONNX model
  --model=<file>              Caffe model (default = no model, random weights used)
  --deploy=<file>             Caffe prototxt file
  --output=<name>[,<name>]*   Output names (it can be specified multiple times); at least one output is required for UFF and Caffe
  --uffInput=<name>,X,Y,Z     Input blob name and its dimensions (X,Y,Z=C,H,W), it can be specified multiple times; at least one is required for UFF models
  --uffNHWC                   Set if inputs are in the NHWC layout instead of NCHW (use X,Y,Z=H,W,C order in --uffInput)

=== Build Options ===
  --maxBatch                  Set max batch size and build an implicit batch engine (default = same size as --batch)
                              This option should not be used when the input model is ONNX or when dynamic shapes are provided.
  --minShapes=spec            Build with dynamic shapes using a profile with the min shapes provided
  --optShapes=spec            Build with dynamic shapes using a profile with the opt shapes provided
  --maxShapes=spec            Build with dynamic shapes using a profile with the max shapes provided
  --minShapesCalib=spec       Calibrate with dynamic shapes using a profile with the min shapes provided
  --optShapesCalib=spec       Calibrate with dynamic shapes using a profile with the opt shapes provided
  --maxShapesCalib=spec       Calibrate with dynamic shapes using a profile with the max shapes provided
                              Note: All three of min, opt and max shapes must be supplied.
                                    However, if only opt shapes is supplied then it will be expanded so
                                    that min shapes and max shapes are set to the same values as opt shapes.
                                    Input names can be wrapped with escaped single quotes (ex: \'Input:0\').
                              Example input shapes spec: input0:1x3x256x256,input1:1x3x128x128
                              Each input shape is supplied as a key-value pair where key is the input name and
                              value is the dimensions (including the batch dimension) to be used for that input.
                              Each key-value pair has the key and value separated using a colon (:).
                              Multiple input shapes can be provided via comma-separated key-value pairs.
  --inputIOFormats=spec       Type and format of each of the input tensors (default = all inputs in fp32:chw)
                              See --outputIOFormats help for the grammar of type and format list.
                              Note: If this option is specified, please set comma-separated types and formats for all
                                    inputs following the same order as network inputs ID (even if only one input
                                    needs specifying IO format) or set the type and format once for broadcasting.
  --outputIOFormats=spec      Type and format of each of the output tensors (default = all outputs in fp32:chw)
                              Note: If this option is specified, please set comma-separated types and formats for all
                                    outputs following the same order as network outputs ID (even if only one output
                                    needs specifying IO format) or set the type and format once for broadcasting.
                              IO Formats: spec  ::= IOfmt[","spec]
                                          IOfmt ::= type:fmt
                                          type  ::= "fp32"|"fp16"|"int32"|"int8"
                                          fmt   ::= ("chw"|"chw2"|"chw4"|"hwc8"|"chw16"|"chw32"|"dhwc8"|
                                                     "cdhw32"|"hwc"|"dla_linear"|"dla_hwc4")["+"fmt]
  --workspace=N               Set workspace size in MiB.
  --memPoolSize=poolspec      Specify the size constraints of the designated memory pool(s) in MiB.
                              Note: Also accepts decimal sizes, e.g. 0.25MiB. Will be rounded down to the nearest integer bytes.
                              Pool constraint: poolspec ::= poolfmt[","poolspec]
                                               poolfmt ::= pool:sizeInMiB
                                               pool ::= "workspace"|"dlaSRAM"|"dlaLocalDRAM"|"dlaGlobalDRAM"
  --profilingVerbosity=mode   Specify profiling verbosity. mode ::= layer_names_only|detailed|none (default = layer_names_only)
  --minTiming=M               Set the minimum number of iterations used in kernel selection (default = 1)
  --avgTiming=M               Set the number of times averaged in each iteration for kernel selection (default = 8)
  --refit                     Mark the engine as refittable. This will allow the inspection of refittable layers 
                              and weights within the engine.
  --sparsity=spec             Control sparsity (default = disabled). 
                              Sparsity: spec ::= "disable", "enable", "force"
                              Note: Description about each of these options is as below
                                    disable = do not enable sparse tactics in the builder (this is the default)
                                    enable  = enable sparse tactics in the builder (but these tactics will only be
                                              considered if the weights have the right sparsity pattern)
                                    force   = enable sparse tactics in the builder and force-overwrite the weights to have
                                              a sparsity pattern (even if you loaded a model yourself)
  --noTF32                    Disable tf32 precision (default is to enable tf32, in addition to fp32)
  --fp16                      Enable fp16 precision, in addition to fp32 (default = disabled)
  --int8                      Enable int8 precision, in addition to fp32 (default = disabled)
  --best                      Enable all precisions to achieve the best performance (default = disabled)
  --directIO                  Avoid reformatting at network boundaries. (default = disabled)
  --precisionConstraints=spec Control precision constraint setting. (default = none)
                                  Precision Constaints: spec ::= "none" | "obey" | "prefer"
                                  none = no constraints
                                  prefer = meet precision constraints set by --layerPrecisions/--layerOutputTypes if possible
                                  obey = meet precision constraints set by --layerPrecisions/--layerOutputTypes or fail
                                         otherwise
  --layerPrecisions=spec      Control per-layer precision constraints. Effective only when precisionConstraints is set to
                              "obey" or "prefer". (default = none)
                              The specs are read left-to-right, and later ones override earlier ones. "*" can be used as a
                              layerName to specify the default precision for all the unspecified layers.
                              Per-layer precision spec ::= layerPrecision[","spec]
                                                  layerPrecision ::= layerName":"precision
                                                  precision ::= "fp32"|"fp16"|"int32"|"int8"
  --layerOutputTypes=spec     Control per-layer output type constraints. Effective only when precisionConstraints is set to
                              "obey" or "prefer". (default = none)
                              The specs are read left-to-right, and later ones override earlier ones. "*" can be used as a
                              layerName to specify the default precision for all the unspecified layers. If a layer has more than
                              one output, then multiple types separated by "+" can be provided for this layer.
                              Per-layer output type spec ::= layerOutputTypes[","spec]
                                                    layerOutputTypes ::= layerName":"type
                                                    type ::= "fp32"|"fp16"|"int32"|"int8"["+"type]
  --calib=<file>              Read INT8 calibration cache file
  --safe                      Enable build safety certified engine
  --consistency               Perform consistency checking on safety certified engine
  --restricted                Enable safety scope checking with kSAFETY_SCOPE build flag
  --saveEngine=<file>         Save the serialized engine
  --loadEngine=<file>         Load a serialized engine
  --tacticSources=tactics     Specify the tactics to be used by adding (+) or removing (-) tactics from the default 
                              tactic sources (default = all available tactics).
                              Note: Currently only cuDNN, cuBLAS, cuBLAS-LT, and edge mask convolutions are listed as optional
                                    tactics.
                              Tactic Sources: tactics ::= [","tactic]
                                              tactic  ::= (+|-)lib
                                              lib     ::= "CUBLAS"|"CUBLAS_LT"|"CUDNN"|"EDGE_MASK_CONVOLUTIONS"
                              For example, to disable cudnn and enable cublas: --tacticSources=-CUDNN,+CUBLAS
  --noBuilderCache            Disable timing cache in builder (default is to enable timing cache)
  --timingCacheFile=<file>    Save/load the serialized global timing cache

=== Inference Options ===
  --batch=N                   Set batch size for implicit batch engines (default = 1)
                              This option should not be used when the engine is built from an ONNX model or when dynamic
                              shapes are provided when the engine is built.
  --shapes=spec               Set input shapes for dynamic shapes inference inputs.
                              Note: Input names can be wrapped with escaped single quotes (ex: \'Input:0\').
                              Example input shapes spec: input0:1x3x256x256, input1:1x3x128x128
                              Each input shape is supplied as a key-value pair where key is the input name and
                              value is the dimensions (including the batch dimension) to be used for that input.
                              Each key-value pair has the key and value separated using a colon (:).
                              Multiple input shapes can be provided via comma-separated key-value pairs.
  --loadInputs=spec           Load input values from files (default = generate random inputs). Input names can be wrapped with single quotes (ex: 'Input:0')
                              Input values spec ::= Ival[","spec]
                                           Ival ::= name":"file
  --iterations=N              Run at least N inference iterations (default = 10)
  --warmUp=N                  Run for N milliseconds to warmup before measuring performance (default = 200)
  --duration=N                Run performance measurements for at least N seconds wallclock time (default = 3)
  --sleepTime=N               Delay inference start with a gap of N milliseconds between launch and compute (default = 0)
  --idleTime=N                Sleep N milliseconds between two continuous iterations(default = 0)
  --streams=N                 Instantiate N engines to use concurrently (default = 1)
  --exposeDMA                 Serialize DMA transfers to and from device (default = disabled).
  --noDataTransfers           Disable DMA transfers to and from device (default = enabled).
  --useManagedMemory          Use managed memory instead of separate host and device allocations (default = disabled).
  --useSpinWait               Actively synchronize on GPU events. This option may decrease synchronization time but increase CPU usage and power (default = disabled)
  --threads                   Enable multithreading to drive engines with independent threads or speed up refitting (default = disabled) 
  --useCudaGraph              Use CUDA graph to capture engine execution and then launch inference (default = disabled).
                              This flag may be ignored if the graph capture fails.
  --timeDeserialize           Time the amount of time it takes to deserialize the network and exit.
  --timeRefit                 Time the amount of time it takes to refit the engine before inference.
  --separateProfileRun        Do not attach the profiler in the benchmark run; if profiling is enabled, a second profile run will be executed (default = disabled)
  --buildOnly                 Exit after the engine has been built and skip inference perf measurement (default = disabled)

=== Build and Inference Batch Options ===
                              When using implicit batch, the max batch size of the engine, if not given, 
                              is set to the inference batch size;
                              when using explicit batch, if shapes are specified only for inference, they 
                              will be used also as min/opt/max in the build profile; if shapes are 
                              specified only for the build, the opt shapes will be used also for inference;
                              if both are specified, they must be compatible; and if explicit batch is 
                              enabled but neither is specified, the model must provide complete static
                              dimensions, including batch size, for all inputs
                              Using ONNX models automatically forces explicit batch.

=== Reporting Options ===
  --verbose                   Use verbose logging (default = false)
  --avgRuns=N                 Report performance measurements averaged over N consecutive iterations (default = 10)
  --percentile=P              Report performance for the P percentage (0<=P<=100, 0 representing max perf, and 100 representing min perf; (default = 99%)
  --dumpRefit                 Print the refittable layers and weights from a refittable engine
  --dumpOutput                Print the output tensor(s) of the last inference iteration (default = disabled)
  --dumpProfile               Print profile information per layer (default = disabled)
  --dumpLayerInfo             Print layer information of the engine to console (default = disabled)
  --exportTimes=<file>        Write the timing results in a json file (default = disabled)
  --exportOutput=<file>       Write the output tensors to a json file (default = disabled)
  --exportProfile=<file>      Write the profile information per layer in a json file (default = disabled)
  --exportLayerInfo=<file>    Write the layer information of the engine in a json file (default = disabled)

=== System Options ===
  --device=N                  Select cuda device N (default = 0)
  --useDLACore=N              Select DLA core N for layers that support DLA (default = none)
  --allowGPUFallback          When DLA is enabled, allow GPU fallback for unsupported layers (default = disabled)
  --plugins                   Plugin library (.so) to load (can be specified multiple times)

=== Help ===
  --help, -h                  Print this message
&&&& PASSED TensorRT.trtexec [TensorRT v8401] # trtexec
