YOLOv5 6.2 Inference on the RK1126


Implementing YOLOv5 6.2 inference on the Rockchip RK1126.


Convert to ONNX

  1. python export.py --weights ./weights/yolov5s.pt --img 640 --batch 1 --include onnx --simplify
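As a quick sanity check on the exported model: for a 640x640 input and COCO's 80 classes, the three YOLOv5 detection heads (strides 8/16/32) should have the shapes computed below. This is a minimal sketch of the expected layout, not part of the export step itself:

```python
# Each YOLOv5 head outputs 3 anchors * (5 + num_classes) channels
# on a grid of (input_size / stride) cells.
num_classes = 80
channels = 3 * (5 + num_classes)  # 255
head_shapes = [(1, channels, 640 // s, 640 // s) for s in (8, 16, 32)]
print(head_shapes)  # [(1, 255, 80, 80), (1, 255, 40, 40), (1, 255, 20, 20)]
```

If the exported ONNX heads do not match these shapes, the output node names chosen later for `load_onnx` will be wrong.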

Set up the RKNN environment

  1. There are plenty of installation guides online; see for reference: https://github.com/rockchip-linux/rknpu

Convert to RKNN and verify

  1. yolov562_to_rknn_3_4.py (the output node names differ across the s/m/l/x variants; inspect your ONNX model with netron to find them)

    # -*- coding: utf-8 -*-
    """
    Created on Wed Oct 12 18:24:38 2022
    
    @author: bobod
    """
    
    
    import os
    import numpy as np
    import cv2
    from rknn.api import RKNN
    
    
    ONNX_MODEL = './weights/yolov5s_v6.2.onnx'
    RKNN_MODEL = './weights/yolov5s_v6.2.rknn'
    IMG_PATH = './000000102411.jpg'
    DATASET = './dataset.txt'
    
    QUANTIZE_ON = True
    
    BOX_THRESH = 0.5
    NMS_THRESH = 0.6
    IMG_SIZE = (640, 640) # (width, height), such as (1280, 736)
    
    SHAPES = ((0.0, 0.0), (0.0, 0.0))  #1 (ratio, pad) from letterbox; set in __main__, consumed by scale_coords
    SHAPE = (0, 0)  # original image (height, width); set in __main__
    
    CLASSES = ("person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train", "truck", "boat", "traffic light",
               "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant",
               "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
               "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife",
               "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "sofa",
               "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave",
               "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush")
    
    def sigmoid(x):
        return 1 / (1 + np.exp(-x))
    
    def xywh2xyxy(x):
        # Convert [x, y, w, h] to [x1, y1, x2, y2]
        y = np.copy(x)
        y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
        y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
        y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
        y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
        return y
    
    def process(input, mask, anchors):
    
        anchors = [anchors[i] for i in mask]
        grid_h, grid_w = map(int, input.shape[0:2])
    
        box_confidence = sigmoid(input[..., 4])
        box_confidence = np.expand_dims(box_confidence, axis=-1)
    
        box_class_probs = sigmoid(input[..., 5:])
    
        box_xy = sigmoid(input[..., :2])*2 - 0.5
    
        col = np.tile(np.arange(0, grid_w), grid_h).reshape(-1, grid_w)
        row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_w)
        col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
        row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
        grid = np.concatenate((col, row), axis=-1)
        box_xy += grid
        box_xy *= (int(IMG_SIZE[1]/grid_h), int(IMG_SIZE[0]/grid_w))
    
        box_wh = pow(sigmoid(input[..., 2:4])*2, 2)
        box_wh = box_wh * anchors
    
        box = np.concatenate((box_xy, box_wh), axis=-1)
    
        return box, box_confidence, box_class_probs
    
    def filter_boxes(boxes, box_confidences, box_class_probs):
        """Filter boxes with box threshold. It's a bit different with origin yolov5 post process!
        # Arguments
            boxes: ndarray, boxes of objects.
            box_confidences: ndarray, confidences of objects.
            box_class_probs: ndarray, class_probs of objects.
        # Returns
            boxes: ndarray, filtered boxes.
            classes: ndarray, classes for boxes.
            scores: ndarray, scores for boxes.
        """
        boxes = boxes.reshape(-1, 4)
        box_confidences = box_confidences.reshape(-1)
        box_class_probs = box_class_probs.reshape(-1, box_class_probs.shape[-1])
    
        _box_pos = np.where(box_confidences >= BOX_THRESH)
        boxes = boxes[_box_pos]
        box_confidences = box_confidences[_box_pos]
        box_class_probs = box_class_probs[_box_pos]
    
        class_max_score = np.max(box_class_probs, axis=-1)
        classes = np.argmax(box_class_probs, axis=-1)
        _class_pos = np.where(class_max_score* box_confidences >= BOX_THRESH)
    
        boxes = boxes[_class_pos]
        classes = classes[_class_pos]
        scores = (class_max_score* box_confidences)[_class_pos]
    
        return boxes, classes, scores
    
    def nms_boxes(boxes, scores):
        """Suppress non-maximal boxes.
        # Arguments
            boxes: ndarray, boxes of objects.
            scores: ndarray, scores of objects.
        # Returns
            keep: ndarray, index of effective boxes.
        """
        x = boxes[:, 0]
        y = boxes[:, 1]
        w = boxes[:, 2] - boxes[:, 0]
        h = boxes[:, 3] - boxes[:, 1]
    
        areas = w * h
        order = scores.argsort()[::-1]
    
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
    
            xx1 = np.maximum(x[i], x[order[1:]])
            yy1 = np.maximum(y[i], y[order[1:]])
            xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
            yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])
    
            w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
            h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
            inter = w1 * h1
    
            ovr = inter / (areas[i] + areas[order[1:]] - inter)
            inds = np.where(ovr <= NMS_THRESH)[0]
            order = order[inds + 1]
        keep = np.array(keep)
        return keep
    
    
    def yolov5_post_process(input_data):
        masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
        anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
                  [59, 119], [116, 90], [156, 198], [373, 326]]
    
        boxes, classes, scores = [], [], []
        for input,mask in zip(input_data, masks):
            b, c, s = process(input, mask, anchors)
            b, c, s = filter_boxes(b, c, s)
            boxes.append(b)
            classes.append(c)
            scores.append(s)
    
        boxes = np.concatenate(boxes)
        boxes = xywh2xyxy(boxes)
        classes = np.concatenate(classes)
        scores = np.concatenate(scores)
    
        nboxes, nclasses, nscores = [], [], []
        for c in set(classes):
            inds = np.where(classes == c)
            b = boxes[inds]
            c = classes[inds]
            s = scores[inds]
    
            keep = nms_boxes(b, s)
    
            nboxes.append(b[keep])
            nclasses.append(c[keep])
            nscores.append(s[keep])
    
        if not nclasses and not nscores:
            return None, None, None
    
        boxes = np.concatenate(nboxes)
        scale_coords(IMG_SIZE, boxes, SHAPE, SHAPES) #2
        classes = np.concatenate(nclasses)
        scores = np.concatenate(nscores)
    
        return boxes, classes, scores
    
    def draw(image, boxes, scores, classes):
        """Draw the boxes on the image.
        # Arguments
            image: original image.
            boxes: ndarray, boxes of objects.
            scores: ndarray, scores of objects.
            classes: ndarray, classes of objects.
        """
        for box, score, cl in zip(boxes, scores, classes):
            left, top, right, bottom = box
            print('class: {}, score: {}'.format(CLASSES[cl], score))
            print('box coordinate left,top,right,bottom: [{}, {}, {}, {}]'.format(left, top, right, bottom))
            left = int(left)
            top = int(top)
            right = int(right)
            bottom = int(bottom)
    
            cv2.rectangle(image, (left, top), (right, bottom), (255, 0, 0), 2)
            cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                        (left, top - 6),
                        cv2.FONT_HERSHEY_SIMPLEX,
                        0.6, (0, 0, 255), 2)
    
    
    def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
        # Resize and pad image while meeting stride-multiple constraints
        shape = im.shape[:2]  # current shape [height, width]
        if isinstance(new_shape, int):
            new_shape = (new_shape, new_shape)
    
        # Scale ratio (new / old)
        r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    
        # Compute padding
        ratio = r, r  # width, height ratios
        new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
        dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    
        dw /= 2  # divide padding into 2 sides
        dh /= 2
    
        if shape[::-1] != new_unpad:  # resize
            im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
        top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
        left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
        im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
        return im, ratio, (dw, dh)
    
    #3
    def scale_coords(img1_shape, coords, img0_shape, ratio_pad=None):
        # Rescale coords (xyxy) from img1_shape to img0_shape
        if ratio_pad is None:  # calculate from img0_shape
            gain = min(img1_shape[0] / img0_shape[0], img1_shape[1] / img0_shape[1])  # gain  = old / new
            pad = (img1_shape[1] - img0_shape[1] * gain) / 2, (img1_shape[0] - img0_shape[0] * gain) / 2  # wh padding
        else:
            gain = ratio_pad[0][0]
            pad = ratio_pad[1]
    
        coords[:, [0, 2]] -= pad[0]  # x padding
        coords[:, [1, 3]] -= pad[1]  # y padding
        coords[:, :4] /= gain
        clip_coords(coords, img0_shape)
        return coords
    
    
    def clip_coords(boxes, shape):
        # Clip bounding xyxy bounding boxes to image shape (height, width)
        boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, shape[1])  # x1, x2
        boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, shape[0])  # y1, y2
        
    if __name__ == '__main__':
    
        # Create RKNN object
        rknn = RKNN(verbose=False)
    
        if not os.path.exists(ONNX_MODEL):
            print('model not exist')
            exit(-1)
    
        _force_builtin_perm = False
        # pre-process config
        print('--> Config model')
        rknn.config(
                    reorder_channel='0 1 2',
                    mean_values=[[0, 0, 0]],
                    std_values=[[255, 255, 255]],
                    optimization_level=3,
                    #target_platform = 'rk1808',
                    # target_platform='rv1109',
                    target_platform = 'rv1126',
                    quantize_input_node= QUANTIZE_ON,
                    output_optimize=1,
                    force_builtin_perm=_force_builtin_perm)
        print('done')
    
        # Load ONNX model
        print('--> Loading model')
        #ret = rknn.load_pytorch(model=PT_MODEL, input_size_list=[[3,IMG_SIZE[1], IMG_SIZE[0]]])
        ret = rknn.load_onnx(model=ONNX_MODEL, outputs=['output', '391', '402'])
        if ret != 0:
            print('Load yolov5 failed!')
            exit(ret)
        print('done')
    
        # Build model
        print('--> Building model')
        ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET, pre_compile=False)
        if ret != 0:
            print('Build yolov5 failed!')
            exit(ret)
        print('done')
    
        # Export RKNN model
        print('--> Export RKNN model')
        ret = rknn.export_rknn(RKNN_MODEL)
        if ret != 0:
            print('Export yolov5rknn failed!')
            exit(ret)
        print('done')
    
        # init runtime environment
        print('--> Init runtime environment')
        ret = rknn.init_runtime() 
        #ret = rknn.init_runtime('rv1126', device_id='bab4d7a824f04867')
        # ret = rknn.init_runtime('rv1109', device_id='1109')
        # ret = rknn.init_runtime('rk1808', device_id='1808')
        if ret != 0:
            print('Init runtime environment failed')
            exit(ret)
        print('done')
    
        # Set inputs
        original_img = cv2.imread(IMG_PATH)#4
        img, ratio, pad = letterbox(original_img, new_shape=(IMG_SIZE[1], IMG_SIZE[0]))
        SHAPES = (ratio, pad)
        SHAPE = (original_img.shape[0], original_img.shape[1])
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    
        # Inference
        print('--> Running model')
        outputs = rknn.inference(inputs=[img], inputs_pass_through=[0 if not _force_builtin_perm else 1])
    
        # post process
        input0_data = outputs[0]
        input1_data = outputs[1]
        input2_data = outputs[2]
    
        input0_data = input0_data.reshape([3,-1]+list(input0_data.shape[-2:]))
        input1_data = input1_data.reshape([3,-1]+list(input1_data.shape[-2:]))
        input2_data = input2_data.reshape([3,-1]+list(input2_data.shape[-2:]))
    
        input_data = list()
        input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
        input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
        input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))
    
    
        boxes, classes, scores = yolov5_post_process(input_data)
    
    
        if boxes is not None:
            draw(original_img, boxes, scores, classes)
        cv2.imwrite("result.jpg", original_img)
    
    

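The reshape/transpose pair at the end of the script converts each NCHW head output into the (grid_h, grid_w, 3, 85) layout that process() expects. A quick sanity check of that transformation with dummy data:

```python
import numpy as np

# Dummy 80x80 head output: (1, 255, 80, 80) -> (80, 80, 3, 85),
# i.e. (grid_h, grid_w, anchors, 5 + num_classes).
out = np.zeros((1, 255, 80, 80), dtype=np.float32)
data = out.reshape([3, -1] + list(out.shape[-2:]))  # (3, 85, 80, 80)
data = np.transpose(data, (2, 3, 0, 1))             # (80, 80, 3, 85)
print(data.shape)  # (80, 80, 3, 85)
```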

混合量化 (Hybrid quantization)

  1. Run accuracy analysis first: after the quantized build, call the accuracy-analysis function. I later tried calling it after hybrid_quantization_step2 instead, but it always failed; the cause is unknown (if anyone knows, please share).
  2. Accuracy-analysis outputs: entire_qnt (fully quantized results), fp32 (fp32 results), individual_qnt (per-layer quantized results, i.e. each layer receives float input, which excludes accumulated error), plus entire_qnt_error_analysis.txt and individual_qnt_error_analysis.txt (full vs. per-layer analysis, reported as Euclidean and cosine distance).
  3. Note: for accuracy analysis, DATASET may contain only a single line.
    ...
        # Build model
        print('--> Building model')
        ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET, pre_compile=False)
        if ret != 0:
            print('Build yolov5 failed!')
            exit(ret)
        print('done')
    
        print('--> Accuracy analysis')
        ret = rknn.accuracy_analysis(inputs=DATASET1, output_dir="./output_dir")
        if ret != 0:
            print('accuracy_analysis failed!')
            exit(ret)
        print('done')
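Since step 3 above requires a single-line dataset, one simple way to generate it (the file name and image path here are assumptions; the image is the sample used in the conversion script, and DATASET1 matches the variable in the snippet):

```python
# Write a one-line dataset file for accuracy analysis.
DATASET1 = './dataset_single.txt'
with open(DATASET1, 'w') as f:
    f.write('./000000102411.jpg\n')

with open(DATASET1) as f:
    lines = f.read().splitlines()
print(len(lines))  # 1
```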
    
  4. Generate the hybrid-quantization config files: calling rknn.hybrid_quantization_step1 produces three files: torchjitexport.data, torchjitexport.json, and torchjitexport.quantization.cfg.
  5. Based on the accuracy-analysis results, add the layers you do not want quantized to the customized-layers section of torchjitexport.quantization.cfg.
    # add layer name and corresponding quantized_dtype to customized_quantize_layers, e.g conv2_3: float32
    customized_quantize_layers: {
        "Conv_Conv_0_187": float32,
        "Sigmoid_Sigmoid_1_188_Mul_Mul_2_172": float32,
        "Conv_Conv_3_171": float32,
        ...
    }
    ...
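The float32 candidates can also be short-listed from the error-analysis report programmatically. A minimal sketch, assuming each report line has the form `layer_name euclidean_distance cosine_similarity` (verify against your own entire_qnt_error_analysis.txt; the sample values below are invented for illustration):

```python
# Hypothetical sketch: pick layers whose cosine similarity falls below a
# threshold, as candidates for customized_quantize_layers.
sample_report = """Conv_Conv_0_187 0.012 0.9412
Conv_Conv_3_171 0.003 0.9991
Sigmoid_Sigmoid_1_188_Mul_Mul_2_172 0.020 0.9120"""

def low_cosine_layers(report, threshold=0.95):
    picked = []
    for line in report.strip().splitlines():
        name, _euclid, cosine = line.split()
        if float(cosine) < threshold:
            picked.append(name)
    return picked

print(low_cosine_layers(sample_report))
# ['Conv_Conv_0_187', 'Sigmoid_Sigmoid_1_188_Mul_Mul_2_172']
```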
    
  6. Call rknn.hybrid_quantization_step2 to export the RKNN model
    ...
    ret = rknn.hybrid_quantization_step2(model_input='./torchjitexport.json',
                                         data_input='./torchjitexport.data',
                                         model_quantization_cfg='./torchjitexport.quantization.cfg',
                                         dataset=DATASET, pre_compile=False)
    if ret != 0:
        print("hybrid_quantization_step2 failed. ")
        exit(ret)
    
    # Export RKNN model
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export yolov5rknn failed!')
        exit(ret)
    print('done')
    
  7. Test the model repeatedly to find one that balances speed and accuracy.
  8. Once validation passes, try it on the board. You can set pre_compile=True to speed up model initialization, but a pre-compiled model cannot be tested in the simulator.

References

  1. https://github.com/shaoshengsong/rockchip_rknn_yolov5
  2. https://github.com/rockchip-linux/rknpu

