General Instance Distillation for Object Detection


Paper: General Instance Distillation for Object Detection

Paper link: https://arxiv.org/pdf/2103.02340.pdf

Abstract

In recent years, knowledge distillation has been proved to be an effective solution for model compression. This approach can make lightweight student models acquire the knowledge extracted from cumbersome teacher models. However, previous distillation methods for detection have weak generalization across different detection frameworks and rely heavily on ground truth (GT), ignoring the valuable relation information between instances. Thus, we propose a novel distillation method for detection tasks based on discriminative instances, without considering the positive or negative distinguished by GT, which is called general instance distillation (GID). Our approach contains a general instance selection module (GISM) to make full use of feature-based, relation-based and response-based knowledge for distillation. Extensive results demonstrate that the student model achieves significant AP improvement and even outperforms the teacher in various detection frameworks. Specifically, RetinaNet with ResNet-50 achieves 39.1% mAP with GID on the COCO dataset, which surpasses the 36.2% baseline by 2.9% and is even better than the ResNet-101 based teacher model with 38.1% AP.


1 Introduction

In recent years, the accuracy of object detection has made great progress due to the blossoming of deep convolutional neural networks (CNNs). Deep learning network structures, including a variety of one-stage detection models [19, 23, 24, 25, 17] and two-stage detection models [26, 16, 8, 2], have replaced traditional object detection and become the mainstream methods in this field. Furthermore, anchor-free frameworks [13, 5, 32] have also achieved better performance with more simplified approaches. However, these high-precision deep learning based models are usually cumbersome, while a lightweight yet high-performance model is demanded in practical applications. Therefore, how to find a better trade-off between accuracy and efficiency has become a crucial problem.


Knowledge Distillation (KD), proposed by Hinton et al. [10], is a promising solution for the above problem. Knowledge distillation transfers the knowledge of a large model to a small model, thereby improving the performance of the small model and achieving the purpose of model compression. At present, the typical forms of knowledge can be divided into three categories [7]: response-based knowledge [10, 22], feature-based knowledge [27, 35, 9] and relation-based knowledge [22, 20, 31, 33, 15]. However, most distillation methods are mainly designed for multi-class classification problems. Directly migrating a classification-specific distillation method to a detection model is less effective, because of the extremely unbalanced ratio of positive and negative instances in the detection task. Some distillation frameworks designed for detection tasks cope with this problem and achieve impressive results, e.g. Li et al. [14] address the problem by distilling the positive and negative instances in a certain proportion sampled by RPN, and Wang et al. [34] further propose to only distill the near-ground-truth area. Nevertheless, the ratio between positive and negative instances for distillation needs to be meticulously designed, and distilling only the GT-related area may ignore potentially informative areas in the background. Moreover, current detection distillation methods cannot work well in multiple detection frameworks simultaneously, e.g. two-stage and anchor-free methods. Therefore, we hope to design a general distillation method for various detection frameworks that uses as much knowledge as possible effectively, without concern for the positive-negative distinction.

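To make the response-based form concrete, below is a minimal sketch of Hinton-style soft-target distillation in PyTorch; the temperature `T` and the loss weighting are illustrative choices, not values prescribed by this paper.

```python
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits: torch.Tensor,
                     teacher_logits: torch.Tensor,
                     T: float = 4.0) -> torch.Tensor:
    """Response-based knowledge: KL divergence between the
    temperature-softened teacher and student class distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable to the hard-label loss.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)
```

In detection, applying such a loss naively over all predictions is dominated by the vast number of easy negatives, which is exactly the imbalance described above.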

Towards this goal, we propose a distillation method based on discriminative instances, utilizing response-based knowledge, feature-based knowledge as well as relation-based knowledge, as shown in Fig 1. There are several advantages: (i) We can model the relational knowledge between instances in one image for distillation. Hu et al. [11] demonstrate the effectiveness of relational information on detection tasks. However, relation-based knowledge distillation in object detection has not been explored yet. (ii) We avoid manually setting the proportion of the positive and negative areas or selecting only the GT-related areas for distillation. Though GT-related areas are mostly informative, extremely hard and simple instances may be useless, and even some informative patches from the background can be useful for the student to learn the generalization of the teacher. Besides, we find that the automatic selection of discriminative instances between the student and teacher for distillation can make knowledge transfer more effective. Those discriminative instances are called general instances (GIs), since our method neither cares about the proportion between positive and negative instances, nor relies on GT labels. (iii) Our method generalizes robustly across various detection frameworks. GIs are calculated from the outputs of the student and teacher models, without relying on modules from a specific detector or key characteristics, such as anchors, of a particular detection framework.


Figure 1. Overall pipeline of General Instance Distillation (GID). General instances (GIs) are adaptively selected from the outputs of the teacher and student models. Then feature-based, relation-based and response-based knowledge distillation is performed based on the selected GIs.

In summary, this paper makes the following contributions:

  • We define general instances (GIs) as the distillation target, which can effectively improve the distillation effect for detection models.
  • Based on GIs, we first introduce relation-based knowledge for distillation on detection tasks and integrate it with response-based and feature-based knowledge, which makes the student surpass the teacher.
  • We verify the effectiveness of our method on the MSCOCO [18] and PASCAL VOC [6] datasets, including one-stage, two-stage and anchor-free methods, achieving state-of-the-art performance.

2 Related Work

2.1 Object Detection

The current mainstream object detection algorithms are roughly divided into two-stage and one-stage detectors. Two-stage methods [16, 8, 2], represented by Faster R-CNN [26], maintain the highest accuracy in the detection field. These methods utilize a region proposal network (RPN) and a refinement procedure of classification and localization to obtain better performance. However, high demands for lower latency have brought one-stage detectors [19, 23] under the spotlight, which perform classification and localization of targets directly on the feature map.


In recent years, another criterion divides detection algorithms into anchor-based and anchor-free methods. Anchor-based detectors such as [24, 17, 19] solve object detection tasks with the help of anchor boxes, which can be viewed as pre-defined sliding windows or proposals. Nevertheless, all anchor-based methods need to be meticulously designed and compute a large number of anchor boxes, which takes much computation. To avoid tuning hyper-parameters and calculations related to anchor boxes, anchor-free methods [23, 13, 5, 32] predict several key points of the target, such as the center and distances to the boundaries, reaching better performance at lower cost.


2.2 Knowledge Distillation

Knowledge distillation is a kind of model compression and acceleration approach which can effectively improve the performance of small models under the guidance of teacher models. In knowledge distillation, knowledge takes many forms, e.g. the soft targets of the output layer [10], the intermediate feature map [27], the distribution of the intermediate feature [12], the activation status of each neuron [9], the mutual information of intermediate features [1], the transformation of the intermediate feature [35] and the instance relationship [22, 20, 31, 33]. This knowledge for distillation can be classified into the following categories [7]: response-based [10], feature-based [27, 12, 9, 1, 35], and relation-based [22, 20, 31, 33].


Recently, there have been some works applying knowledge distillation to object detection tasks. Unlike classification tasks, the distillation losses in detection tasks encounter an extreme imbalance between positive and negative instances. Chen et al. [3] first deal with this problem by underweighting the background distillation loss in the classification head while still imitating the full feature map in the backbone. Li et al. [14] design a distillation framework for two-stage detectors, applying the L2 distillation loss to the features sampled by the RPN of the student model, which consist of randomly sampled negative and positive proposals, discriminated by ground truth (GT) labels, in a certain proportion. Wang et al. [34] propose a fine-grained feature imitation for anchor-based detectors, distilling the regions near objects, which are calculated from the intersection between GT boxes and anchors generated by the detector. That is to say, the background areas will hardly be distilled even though they may contain several information-rich areas. Similar to Wang et al. [34], Sun et al. [30] distill only the GT-related region, both on the feature map and on the detector head.

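As a rough illustration of these GT-dependent strategies (a simplified stand-in, not the exact algorithm of [14, 34, 30]), the sketch below builds a binary feature-map mask from GT boxes and restricts an L2 imitation loss to that area; `stride` denotes the downsampling factor of the feature map.

```python
import torch

def gt_region_mask(gt_boxes: torch.Tensor, feat_h: int, feat_w: int,
                   stride: int) -> torch.Tensor:
    """Binary (feat_h, feat_w) mask covering ground-truth box regions."""
    mask = torch.zeros(feat_h, feat_w)
    for x1, y1, x2, y2 in gt_boxes.tolist():  # boxes in image coordinates
        mask[int(y1 // stride):int(y2 // stride) + 1,
             int(x1 // stride):int(x2 // stride) + 1] = 1.0
    return mask

def masked_imitation_loss(feat_s: torch.Tensor, feat_t: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """L2 feature imitation restricted to the masked (near-GT) area.
    feat_s / feat_t: (C, H, W) student / teacher feature maps."""
    sq_diff = (feat_s - feat_t).pow(2).sum(dim=0)  # (H, W)
    return (sq_diff * mask).sum() / mask.sum().clamp(min=1.0)
```

The limitation the authors point out is visible here: wherever `mask` is zero, i.e. the entire background, no knowledge is transferred.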

In summary, previous distillation frameworks for detection tasks all manually set the ratio between distilled positive and negative instances, distinguished by GT labels, to cope with the disproportion of foreground and background areas in detection tasks. Thus, the main differences between our method and previous works can be summarized as follows: (i) Our method does not rely on GT labels, nor does it care about the proportion between positive and negative instances selected for distillation. It is the information gap between student and teacher that guides the model to choose the discriminative patches for imitation. (ii) None of the previous methods take advantage of relation-based knowledge for distillation. However, it is widely acknowledged that the relations between objects contain tremendous information, even within one single image. Thus, based on our selected discriminative patches, we extract the relation-based knowledge among them for distillation, achieving further performance gain.

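For point (ii), a minimal sketch of relation-based distillation in the spirit of RKD [22] is given below: it matches the normalized pairwise-distance matrices of instance embeddings from the two models. The mean-distance normalization and smooth-L1 matching are common choices, stated here as assumptions rather than this paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def _normalized_pdist(embeddings: torch.Tensor) -> torch.Tensor:
    """(N, D) instance embeddings -> (N, N) pairwise distances,
    normalized by their mean for scale invariance (assumes N >= 2)."""
    dist = torch.cdist(embeddings, embeddings, p=2)
    return dist / dist[dist > 0].mean().clamp(min=1e-6)

def relation_distillation_loss(emb_student: torch.Tensor,
                               emb_teacher: torch.Tensor) -> torch.Tensor:
    """Match how instances relate to each other, not just the instances."""
    return F.smooth_l1_loss(_normalized_pdist(emb_student),
                            _normalized_pdist(emb_teacher))
```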

3 General Instance Distillation

Previous work [34] proposed that the feature regions near objects carry considerable information which is useful for knowledge distillation. However, we find that not only the feature regions near objects but also discriminative patches, even from the background area, have meaningful knowledge. Based on this finding, we design the general instance selection module (GISM), as shown in Fig 2. The module utilizes the predictions from both the teacher and student models to select the key instances for distillation.


Figure 2. Illustration of the General Instance Selection Module (GISM). To obtain the most informative locations, we calculate the L1 distance between the classification scores of the student and the teacher as the GI score, and keep the regressed box with the higher score as the GI box. To avoid computing the loss repeatedly on the same region, we use the non-maximum suppression (NMS) algorithm to remove duplicates.

Furthermore, to make better use of the information provided by the teacher, we extract and take advantage of feature-based, relation-based and response-based knowledge for distillation, as shown in Fig 3. The experimental results show that our distillation framework generalizes to current state-of-the-art detection models.


Figure 3. Details of our method: (a) The selected GIs are used to crop the features from the student and teacher backbones through ROIAlign. Then feature-based and relation-based knowledge is extracted for distillation. (b) The selected GIs first generate masks through GI assignment. Then the masked classification and regression heads are distilled to utilize response-based knowledge.
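A simplified sketch of the Fig 3(a) feature branch is given below, assuming single-image feature maps of shape (1, C, H, W) and GI boxes in image coordinates; any adaptation layer and loss weighting the full method may use are omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align

def gi_feature_loss(feat_s: torch.Tensor, feat_t: torch.Tensor,
                    gi_boxes: torch.Tensor, stride: int,
                    out_size: int = 7) -> torch.Tensor:
    """Crop each GI region from the student and teacher feature maps
    with ROIAlign, then apply an L2 imitation loss on the aligned crops."""
    # Prepend the batch index (0: single image) expected by roi_align.
    rois = torch.cat([torch.zeros(len(gi_boxes), 1), gi_boxes], dim=1)
    crops_s = roi_align(feat_s, rois, out_size, spatial_scale=1.0 / stride)
    with torch.no_grad():  # the teacher provides targets, not gradients
        crops_t = roi_align(feat_t, rois, out_size, spatial_scale=1.0 / stride)
    return F.mse_loss(crops_s, crops_t)
```

The pooled crops, flattened per instance, could also serve as the embeddings for the relation loss sketched in Section 2.2.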

3.1 General Instance Selection Module

In a detection model, the predictions indicate the attention patches, which are commonly meaningful areas. The difference in such patches between the teacher and student models is also closely related to their performance gap. In order to quantify the difference for each instance and then select the discriminative instances for distillation, we propose two indicators: the GI score and the GI box. Both are dynamically calculated during each training step. To save computation resources during training, we simply calculate the L1 distance of the classification scores as the GI score and choose the box with the higher score as the GI box. Fig 2 illustrates the procedure of generating GIs; the GI score and GI box of each predicted instance r are defined as follows:


$$P_{GI}^{r}=\max_{0<c\le C}\left|P_{t}^{rc}-P_{s}^{rc}\right|,$$

$$B_{GI}^{r}=\begin{cases}B_{t}^{r}, & \max\limits_{0<c\le C}P_{t}^{rc}>\max\limits_{0<c\le C}P_{s}^{rc}\\ B_{s}^{r}, & \max\limits_{0<c\le C}P_{t}^{rc}\le\max\limits_{0<c\le C}P_{s}^{rc}\end{cases},$$

$$GI=\mathrm{NMS}\left(P_{GI},B_{GI}\right),$$

where $P_{t}^{rc}$ and $P_{s}^{rc}$ denote the teacher's and student's classification score of the $c$-th class for the $r$-th predicted instance, $C$ is the number of classes, and $B_{t}^{r}$, $B_{s}^{r}$ are the corresponding regressed boxes.
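Putting the three definitions together, a minimal GISM sketch might look as follows; `iou_thresh` and `top_k` are illustrative hyper-parameters (not the paper's tuned values), and `torchvision.ops.nms` stands in for the NMS step.

```python
import torch
from torchvision.ops import nms

def select_general_instances(p_s: torch.Tensor, p_t: torch.Tensor,
                             b_s: torch.Tensor, b_t: torch.Tensor,
                             iou_thresh: float = 0.3, top_k: int = 10):
    """p_s / p_t: (R, C) student / teacher classification scores;
    b_s / b_t: (R, 4) regressed boxes. Returns selected GI boxes and scores."""
    # GI score: largest per-class L1 gap between teacher and student.
    gi_score = (p_t - p_s).abs().max(dim=1).values
    # GI box: take the box of whichever model is more confident.
    teacher_wins = p_t.max(dim=1).values > p_s.max(dim=1).values
    gi_box = torch.where(teacher_wins.unsqueeze(1), b_t, b_s)
    # NMS removes near-duplicate instances before the distillation loss.
    keep = nms(gi_box, gi_score, iou_thresh)[:top_k]
    return gi_box[keep], gi_score[keep]
```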

To be continued...
