[Dissecting the Network Architecture] backbone, CCFF


backbone 

CCFF 

I don't yet know how the modules are wired together; for now I've only catalogued each layer, as dumped below.
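
The numbered inventories below were presumably produced by walking `named_parameters()`. A minimal sketch of such a dump (the helper name `dump_parameters` is mine, not from the original code):

```python
from torch import nn

def dump_parameters(model: nn.Module) -> None:
    """Print every learnable tensor with its shape, numbered like the lists below."""
    for i, (name, p) in enumerate(model.named_parameters(), start=1):
        print(f"{i}. {name} {p.shape}")
    # Matches the "Total number of parameters" lines at the end of this post.
    print("Total number of parameters:", sum(p.numel() for p in model.parameters()))
```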

 

backbone

  1. backbone.backbone.conv1.weight torch.Size([64, 3, 7, 7])
  2. backbone.backbone.layer1.0.conv1.weight torch.Size([64, 64, 1, 1])
  3. backbone.backbone.layer1.0.conv2.weight torch.Size([64, 64, 3, 3])
  4. backbone.backbone.layer1.0.conv3.weight torch.Size([256, 64, 1, 1])
  5. backbone.backbone.layer1.0.downsample.0.weight torch.Size([256, 64, 1, 1])
  6. backbone.backbone.layer1.1.conv1.weight torch.Size([64, 256, 1, 1])
  7. backbone.backbone.layer1.1.conv2.weight torch.Size([64, 64, 3, 3])
  8. backbone.backbone.layer1.1.conv3.weight torch.Size([256, 64, 1, 1])
  9. backbone.backbone.layer1.2.conv1.weight torch.Size([64, 256, 1, 1])
  10. backbone.backbone.layer1.2.conv2.weight torch.Size([64, 64, 3, 3])
  11. backbone.backbone.layer1.2.conv3.weight torch.Size([256, 64, 1, 1])
  12. backbone.backbone.layer2.0.conv1.weight torch.Size([128, 256, 1, 1])
  13. backbone.backbone.layer2.0.conv2.weight torch.Size([128, 128, 3, 3])
  14. backbone.backbone.layer2.0.conv3.weight torch.Size([512, 128, 1, 1])
  15. backbone.backbone.layer2.0.downsample.0.weight torch.Size([512, 256, 1, 1])
  16. backbone.backbone.layer2.1.conv1.weight torch.Size([128, 512, 1, 1])
  17. backbone.backbone.layer2.1.conv2.weight torch.Size([128, 128, 3, 3])
  18. backbone.backbone.layer2.1.conv3.weight torch.Size([512, 128, 1, 1])
  19. backbone.backbone.layer2.2.conv1.weight torch.Size([128, 512, 1, 1])
  20. backbone.backbone.layer2.2.conv2.weight torch.Size([128, 128, 3, 3])
  21. backbone.backbone.layer2.2.conv3.weight torch.Size([512, 128, 1, 1])
  22. backbone.backbone.layer2.3.conv1.weight torch.Size([128, 512, 1, 1])
  23. backbone.backbone.layer2.3.conv2.weight torch.Size([128, 128, 3, 3])
  24. backbone.backbone.layer2.3.conv3.weight torch.Size([512, 128, 1, 1])
  25. backbone.backbone.layer3.0.conv1.weight torch.Size([256, 512, 1, 1])
  26. backbone.backbone.layer3.0.conv2.weight torch.Size([256, 256, 3, 3])
  27. backbone.backbone.layer3.0.conv3.weight torch.Size([1024, 256, 1, 1])
  28. backbone.backbone.layer3.0.downsample.0.weight torch.Size([1024, 512, 1, 1])
  29. backbone.backbone.layer3.1.conv1.weight torch.Size([256, 1024, 1, 1])
  30. backbone.backbone.layer3.1.conv2.weight torch.Size([256, 256, 3, 3])
  31. backbone.backbone.layer3.1.conv3.weight torch.Size([1024, 256, 1, 1])
  32. backbone.backbone.layer3.2.conv1.weight torch.Size([256, 1024, 1, 1])
  33. backbone.backbone.layer3.2.conv2.weight torch.Size([256, 256, 3, 3])
  34. backbone.backbone.layer3.2.conv3.weight torch.Size([1024, 256, 1, 1])
  35. backbone.backbone.layer3.3.conv1.weight torch.Size([256, 1024, 1, 1])
  36. backbone.backbone.layer3.3.conv2.weight torch.Size([256, 256, 3, 3])
  37. backbone.backbone.layer3.3.conv3.weight torch.Size([1024, 256, 1, 1])
  38. backbone.backbone.layer3.4.conv1.weight torch.Size([256, 1024, 1, 1])
  39. backbone.backbone.layer3.4.conv2.weight torch.Size([256, 256, 3, 3])
  40. backbone.backbone.layer3.4.conv3.weight torch.Size([1024, 256, 1, 1])
  41. backbone.backbone.layer3.5.conv1.weight torch.Size([256, 1024, 1, 1])
  42. backbone.backbone.layer3.5.conv2.weight torch.Size([256, 256, 3, 3])
  43. backbone.backbone.layer3.5.conv3.weight torch.Size([1024, 256, 1, 1])
  44. backbone.backbone.layer4.0.conv1.weight torch.Size([512, 1024, 1, 1])
  45. backbone.backbone.layer4.0.conv2.weight torch.Size([512, 512, 3, 3])
  46. backbone.backbone.layer4.0.conv3.weight torch.Size([2048, 512, 1, 1])
  47. backbone.backbone.layer4.0.downsample.0.weight torch.Size([2048, 1024, 1, 1])
  48. backbone.backbone.layer4.1.conv1.weight torch.Size([512, 2048, 1, 1])
  49. backbone.backbone.layer4.1.conv2.weight torch.Size([512, 512, 3, 3])
  50. backbone.backbone.layer4.1.conv3.weight torch.Size([2048, 512, 1, 1])
  51. backbone.backbone.layer4.2.conv1.weight torch.Size([512, 2048, 1, 1])
  52. backbone.backbone.layer4.2.conv2.weight torch.Size([512, 512, 3, 3])
  53. backbone.backbone.layer4.2.conv3.weight torch.Size([2048, 512, 1, 1])
  54. backbone.backbone.fc.weight torch.Size([1000, 2048])
  55. backbone.backbone.fc.bias torch.Size([1000])
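
These shapes are exactly a stock torchvision ResNet-50: bottleneck counts 3-4-6-3 across layer1 through layer4, expansion factor 4 (64→256, 128→512, 256→1024, 512→2048), and the checkpoint even carries the 1000-way ImageNet fc head, which is presumably unused for counting. A quick cross-check (assumes torchvision is installed):

```python
import torchvision

# Compare against the dump above: every shape should line up with ResNet-50.
resnet = torchvision.models.resnet50()
print(resnet.conv1.weight.shape)            # torch.Size([64, 3, 7, 7])
print(resnet.layer1[0].conv3.weight.shape)  # torch.Size([256, 64, 1, 1])
print(resnet.layer4[2].conv3.weight.shape)  # torch.Size([2048, 512, 1, 1])
print(resnet.fc.weight.shape)               # torch.Size([1000, 2048])
```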

CCFF

  1. ccff.conv1.conv.weight torch.Size([3584, 3584, 1, 1])
  2. ccff.conv1.norm.weight torch.Size([3584])
  3. ccff.conv1.norm.bias torch.Size([3584])
  4. ccff.conv2.conv.weight torch.Size([3584, 3584, 1, 1])
  5. ccff.conv2.norm.weight torch.Size([3584])
  6. ccff.conv2.norm.bias torch.Size([3584])
  7. ccff.bottlenecks.0.conv1.conv.weight torch.Size([3584, 3584, 3, 3])
  8. ccff.bottlenecks.0.conv1.norm.weight torch.Size([3584])
  9. ccff.bottlenecks.0.conv1.norm.bias torch.Size([3584])
  10. ccff.bottlenecks.0.conv2.conv.weight torch.Size([3584, 3584, 1, 1])
  11. ccff.bottlenecks.0.conv2.norm.weight torch.Size([3584])
  12. ccff.bottlenecks.0.conv2.norm.bias torch.Size([3584])
  13. ccff.bottlenecks.1.conv1.conv.weight torch.Size([3584, 3584, 3, 3])
  14. ccff.bottlenecks.1.conv1.norm.weight torch.Size([3584])
  15. ccff.bottlenecks.1.conv1.norm.bias torch.Size([3584])
  16. ccff.bottlenecks.1.conv2.conv.weight torch.Size([3584, 3584, 1, 1])
  17. ccff.bottlenecks.1.conv2.norm.weight torch.Size([3584])
  18. ccff.bottlenecks.1.conv2.norm.bias torch.Size([3584])
  19. ccff.bottlenecks.2.conv1.conv.weight torch.Size([3584, 3584, 3, 3])
  20. ccff.bottlenecks.2.conv1.norm.weight torch.Size([3584])
  21. ccff.bottlenecks.2.conv1.norm.bias torch.Size([3584])
  22. ccff.bottlenecks.2.conv2.conv.weight torch.Size([3584, 3584, 1, 1])
  23. ccff.bottlenecks.2.conv2.norm.weight torch.Size([3584])
  24. ccff.bottlenecks.2.conv2.norm.bias torch.Size([3584])
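
The channel count 3584 is telling: 512 + 1024 + 2048 = 3584, so the CCFF input is presumably the channel-wise concatenation of the layer2/layer3/layer4 backbone features. Structurally (two 1×1 conv+norm stems plus a stack of 3×3-then-1×1 bottlenecks) it resembles the CSP-style CCFF fusion block in RT-DETR. A sketch that reproduces the shapes above; the wiring between blocks is my guess, since a weight dump only shows the layers, not the connections:

```python
import torch
from torch import nn

class ConvNorm(nn.Module):
    """Conv + BatchNorm, matching the conv/norm naming in the dump."""
    def __init__(self, c_in, c_out, k):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False)
        self.norm = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return torch.relu(self.norm(self.conv(x)))

class Bottleneck(nn.Module):
    """ccff.bottlenecks.N: a 3x3 conv followed by a 1x1 conv."""
    def __init__(self, c):
        super().__init__()
        self.conv1 = ConvNorm(c, c, 3)
        self.conv2 = ConvNorm(c, c, 1)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))  # the residual add is an assumption

class CCFF(nn.Module):
    def __init__(self, c=3584, n=3):
        super().__init__()
        self.conv1 = ConvNorm(c, c, 1)
        self.conv2 = ConvNorm(c, c, 1)
        self.bottlenecks = nn.Sequential(*(Bottleneck(c) for _ in range(n)))

    def forward(self, x):
        # Sequential wiring is a guess; conv1/conv2 could equally be parallel
        # branches that get fused, as in RT-DETR's CSPRepLayer.
        return self.conv2(self.bottlenecks(self.conv1(x)))
```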

input_proj

  1. input_proj.weight torch.Size([256, 3584, 1, 1])
  2. input_proj.bias torch.Size([256])
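
input_proj is just a 1×1 convolution (with bias) that squeezes the 3584-channel fused map down to the transformer width of 256:

```python
from torch import nn

# Matches the shapes above: 3584 channels in, 256 out, 1x1 kernel, with bias.
input_proj = nn.Conv2d(3584, 256, kernel_size=1)
print(input_proj.weight.shape)  # torch.Size([256, 3584, 1, 1])
print(input_proj.bias.shape)    # torch.Size([256])
```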

encoder

  1. encoder.layers.0.norm1.weight torch.Size([256])
  2. encoder.layers.0.norm1.bias torch.Size([256])
  3. encoder.layers.0.norm2.weight torch.Size([256])
  4. encoder.layers.0.norm2.bias torch.Size([256])
  5. encoder.layers.0.self_attn.in_proj_weight torch.Size([768, 256])
  6. encoder.layers.0.self_attn.in_proj_bias torch.Size([768])
  7. encoder.layers.0.self_attn.out_proj.weight torch.Size([256, 256])
  8. encoder.layers.0.self_attn.out_proj.bias torch.Size([256])
  9. encoder.layers.0.mlp.linear1.weight torch.Size([2048, 256])
  10. encoder.layers.0.mlp.linear1.bias torch.Size([2048])
  11. encoder.layers.0.mlp.linear2.weight torch.Size([256, 2048])
  12. encoder.layers.0.mlp.linear2.bias torch.Size([256])
  13. encoder.layers.1.norm1.weight torch.Size([256])
  14. encoder.layers.1.norm1.bias torch.Size([256])
  15. encoder.layers.1.norm2.weight torch.Size([256])
  16. encoder.layers.1.norm2.bias torch.Size([256])
  17. encoder.layers.1.self_attn.in_proj_weight torch.Size([768, 256])
  18. encoder.layers.1.self_attn.in_proj_bias torch.Size([768])
  19. encoder.layers.1.self_attn.out_proj.weight torch.Size([256, 256])
  20. encoder.layers.1.self_attn.out_proj.bias torch.Size([256])
  21. encoder.layers.1.mlp.linear1.weight torch.Size([2048, 256])
  22. encoder.layers.1.mlp.linear1.bias torch.Size([2048])
  23. encoder.layers.1.mlp.linear2.weight torch.Size([256, 2048])
  24. encoder.layers.1.mlp.linear2.bias torch.Size([256])
  25. encoder.layers.2.norm1.weight torch.Size([256])
  26. encoder.layers.2.norm1.bias torch.Size([256])
  27. encoder.layers.2.norm2.weight torch.Size([256])
  28. encoder.layers.2.norm2.bias torch.Size([256])
  29. encoder.layers.2.self_attn.in_proj_weight torch.Size([768, 256])
  30. encoder.layers.2.self_attn.in_proj_bias torch.Size([768])
  31. encoder.layers.2.self_attn.out_proj.weight torch.Size([256, 256])
  32. encoder.layers.2.self_attn.out_proj.bias torch.Size([256])
  33. encoder.layers.2.mlp.linear1.weight torch.Size([2048, 256])
  34. encoder.layers.2.mlp.linear1.bias torch.Size([2048])
  35. encoder.layers.2.mlp.linear2.weight torch.Size([256, 2048])
  36. encoder.layers.2.mlp.linear2.bias torch.Size([256])
  37. encoder.norm.weight torch.Size([256])
  38. encoder.norm.bias torch.Size([256])
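
This is a textbook 3-layer transformer encoder with d_model = 256 and a 2048-wide feed-forward, closed by a final LayerNorm. The in_proj_weight of [768, 256] is nn.MultiheadAttention's stacked Q/K/V projection (3 × 256 rows). A sketch of one layer consistent with the names in the dump (nhead = 8 and the pre-norm residual wiring are assumptions; neither is visible in the shapes):

```python
import torch
from torch import nn

class EncoderLayer(nn.Module):
    """Mirrors the dumped names: norm1/norm2, self_attn, mlp.linear1/linear2."""
    def __init__(self, d=256, ff=2048, nhead=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)
        self.self_attn = nn.MultiheadAttention(d, nhead)  # in_proj_weight: [3*d, d]
        self.mlp = nn.ModuleDict({
            "linear1": nn.Linear(d, ff),
            "linear2": nn.Linear(ff, d),
        })

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        h = self.norm2(x)
        return x + self.mlp["linear2"](torch.relu(self.mlp["linear1"](h)))
```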

ope

  1. ope.iterative_adaptation.layers.0.norm1.weight torch.Size([256])
  2. ope.iterative_adaptation.layers.0.norm1.bias torch.Size([256])
  3. ope.iterative_adaptation.layers.0.norm2.weight torch.Size([256])
  4. ope.iterative_adaptation.layers.0.norm2.bias torch.Size([256])
  5. ope.iterative_adaptation.layers.0.norm3.weight torch.Size([256])
  6. ope.iterative_adaptation.layers.0.norm3.bias torch.Size([256])
  7. ope.iterative_adaptation.layers.0.self_attn.in_proj_weight torch.Size([768, 256])
  8. ope.iterative_adaptation.layers.0.self_attn.in_proj_bias torch.Size([768])
  9. ope.iterative_adaptation.layers.0.self_attn.out_proj.weight torch.Size([256, 256])
  10. ope.iterative_adaptation.layers.0.self_attn.out_proj.bias torch.Size([256])
  11. ope.iterative_adaptation.layers.0.enc_dec_attn.in_proj_weight torch.Size([768, 256])
  12. ope.iterative_adaptation.layers.0.enc_dec_attn.in_proj_bias torch.Size([768])
  13. ope.iterative_adaptation.layers.0.enc_dec_attn.out_proj.weight torch.Size([256, 256])
  14. ope.iterative_adaptation.layers.0.enc_dec_attn.out_proj.bias torch.Size([256])
  15. ope.iterative_adaptation.layers.0.mlp.linear1.weight torch.Size([2048, 256])
  16. ope.iterative_adaptation.layers.0.mlp.linear1.bias torch.Size([2048])
  17. ope.iterative_adaptation.layers.0.mlp.linear2.weight torch.Size([256, 2048])
  18. ope.iterative_adaptation.layers.0.mlp.linear2.bias torch.Size([256])
  19. ope.iterative_adaptation.layers.1.norm1.weight torch.Size([256])
  20. ope.iterative_adaptation.layers.1.norm1.bias torch.Size([256])
  21. ope.iterative_adaptation.layers.1.norm2.weight torch.Size([256])
  22. ope.iterative_adaptation.layers.1.norm2.bias torch.Size([256])
  23. ope.iterative_adaptation.layers.1.norm3.weight torch.Size([256])
  24. ope.iterative_adaptation.layers.1.norm3.bias torch.Size([256])
  25. ope.iterative_adaptation.layers.1.self_attn.in_proj_weight torch.Size([768, 256])
  26. ope.iterative_adaptation.layers.1.self_attn.in_proj_bias torch.Size([768])
  27. ope.iterative_adaptation.layers.1.self_attn.out_proj.weight torch.Size([256, 256])
  28. ope.iterative_adaptation.layers.1.self_attn.out_proj.bias torch.Size([256])
  29. ope.iterative_adaptation.layers.1.enc_dec_attn.in_proj_weight torch.Size([768, 256])
  30. ope.iterative_adaptation.layers.1.enc_dec_attn.in_proj_bias torch.Size([768])
  31. ope.iterative_adaptation.layers.1.enc_dec_attn.out_proj.weight torch.Size([256, 256])
  32. ope.iterative_adaptation.layers.1.enc_dec_attn.out_proj.bias torch.Size([256])
  33. ope.iterative_adaptation.layers.1.mlp.linear1.weight torch.Size([2048, 256])
  34. ope.iterative_adaptation.layers.1.mlp.linear1.bias torch.Size([2048])
  35. ope.iterative_adaptation.layers.1.mlp.linear2.weight torch.Size([256, 2048])
  36. ope.iterative_adaptation.layers.1.mlp.linear2.bias torch.Size([256])
  37. ope.iterative_adaptation.layers.2.norm1.weight torch.Size([256])
  38. ope.iterative_adaptation.layers.2.norm1.bias torch.Size([256])
  39. ope.iterative_adaptation.layers.2.norm2.weight torch.Size([256])
  40. ope.iterative_adaptation.layers.2.norm2.bias torch.Size([256])
  41. ope.iterative_adaptation.layers.2.norm3.weight torch.Size([256])
  42. ope.iterative_adaptation.layers.2.norm3.bias torch.Size([256])
  43. ope.iterative_adaptation.layers.2.self_attn.in_proj_weight torch.Size([768, 256])
  44. ope.iterative_adaptation.layers.2.self_attn.in_proj_bias torch.Size([768])
  45. ope.iterative_adaptation.layers.2.self_attn.out_proj.weight torch.Size([256, 256])
  46. ope.iterative_adaptation.layers.2.self_attn.out_proj.bias torch.Size([256])
  47. ope.iterative_adaptation.layers.2.enc_dec_attn.in_proj_weight torch.Size([768, 256])
  48. ope.iterative_adaptation.layers.2.enc_dec_attn.in_proj_bias torch.Size([768])
  49. ope.iterative_adaptation.layers.2.enc_dec_attn.out_proj.weight torch.Size([256, 256])
  50. ope.iterative_adaptation.layers.2.enc_dec_attn.out_proj.bias torch.Size([256])
  51. ope.iterative_adaptation.layers.2.mlp.linear1.weight torch.Size([2048, 256])
  52. ope.iterative_adaptation.layers.2.mlp.linear1.bias torch.Size([2048])
  53. ope.iterative_adaptation.layers.2.mlp.linear2.weight torch.Size([256, 2048])
  54. ope.iterative_adaptation.layers.2.mlp.linear2.bias torch.Size([256])
  55. ope.iterative_adaptation.norm.weight torch.Size([256])
  56. ope.iterative_adaptation.norm.bias torch.Size([256])
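
In LOCA, ope stands for the Object Prototype Extraction module, and iterative_adaptation is a 3-layer decoder-style stack: each layer has self-attention, encoder-decoder cross-attention (enc_dec_attn, reading from the encoder memory), and an MLP, each preceded by its own LayerNorm. A sketch of one layer (the residual/pre-norm wiring and nhead = 8 are assumptions, and the mlp.linear1/linear2 nesting from the dump is flattened here for brevity):

```python
import torch
from torch import nn

class AdaptationLayer(nn.Module):
    """ope.iterative_adaptation.layers.N: self-attn, cross-attn, MLP."""
    def __init__(self, d=256, ff=2048, nhead=8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)
        self.norm3 = nn.LayerNorm(d)
        self.self_attn = nn.MultiheadAttention(d, nhead)
        self.enc_dec_attn = nn.MultiheadAttention(d, nhead)
        self.linear1 = nn.Linear(d, ff)
        self.linear2 = nn.Linear(ff, d)

    def forward(self, q, memory):
        h = self.norm1(q)
        q = q + self.self_attn(h, h, h)[0]
        h = self.norm2(q)
        q = q + self.enc_dec_attn(h, memory, memory)[0]  # attend to encoder output
        h = self.norm3(q)
        return q + self.linear2(torch.relu(self.linear1(h)))
```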

ope.shape_or_objectness

  1. ope.shape_or_objectness.0.weight torch.Size([64, 2])
  2. ope.shape_or_objectness.0.bias torch.Size([64])
  3. ope.shape_or_objectness.2.weight torch.Size([256, 64])
  4. ope.shape_or_objectness.2.bias torch.Size([256])
  5. ope.shape_or_objectness.4.weight torch.Size([2304, 256])
  6. ope.shape_or_objectness.4.bias torch.Size([2304])
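
The parameter indices 0/2/4 imply parameter-free activations at positions 1 and 3 of an nn.Sequential, i.e. an MLP 2 → 64 → 256 → 2304. Since 2304 = 3 × 3 × 256, this plausibly lifts each exemplar's (width, height) pair into a 3×3 grid of 256-dim prototype tokens; the final reshape is my assumption:

```python
import torch
from torch import nn

# Parameters land at indices 0, 2, 4, matching the dump.
shape_or_objectness = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 3 * 256),
)

wh = torch.rand(5, 2)                               # 5 hypothetical exemplar boxes
proto = shape_or_objectness(wh).reshape(5, 9, 256)  # a 3x3 patch of prototype tokens
```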

Regression head

  1. regression_head.regressor.0.layer.0.weight torch.Size([128, 256, 3, 3])
  2. regression_head.regressor.0.layer.0.bias torch.Size([128])
  3. regression_head.regressor.1.layer.0.weight torch.Size([64, 128, 3, 3])
  4. regression_head.regressor.1.layer.0.bias torch.Size([64])
  5. regression_head.regressor.2.layer.0.weight torch.Size([32, 64, 3, 3])
  6. regression_head.regressor.2.layer.0.bias torch.Size([32])
  7. regression_head.regressor.3.weight torch.Size([1, 32, 1, 1])
  8. regression_head.regressor.3.bias torch.Size([1])
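
The regressor reads as a density-map head: three 3×3 conv blocks narrowing 256 → 128 → 64 → 32, then a 1×1 conv down to a single-channel map. Each block exposes only `.layer.0`, so any activation or upsampling inside it is parameter-free and invisible in the dump; the ReLU and 2× upsampling below are assumptions:

```python
from torch import nn

class UpBlock(nn.Module):
    """regressor.N.layer.0 is the conv; ReLU/upsample carry no parameters."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.layer = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.layer(x)

# 256 -> 128 -> 64 -> 32, then a 1x1 conv to a single-channel density map
# (regressor.3.weight: [1, 32, 1, 1]).
regressor = nn.Sequential(
    UpBlock(256, 128),
    UpBlock(128, 64),
    UpBlock(64, 32),
    nn.Conv2d(32, 1, kernel_size=1),
)
```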

Auxiliary heads

The two auxiliary heads repeat the main regression head's structure exactly; presumably they provide deep supervision on intermediate iterations of the adaptation module.

  1. aux_heads.0.regressor.0.layer.0.weight torch.Size([128, 256, 3, 3])
  2. aux_heads.0.regressor.0.layer.0.bias torch.Size([128])
  3. aux_heads.0.regressor.1.layer.0.weight torch.Size([64, 128, 3, 3])
  4. aux_heads.0.regressor.1.layer.0.bias torch.Size([64])
  5. aux_heads.0.regressor.2.layer.0.weight torch.Size([32, 64, 3, 3])
  6. aux_heads.0.regressor.2.layer.0.bias torch.Size([32])
  7. aux_heads.0.regressor.3.weight torch.Size([1, 32, 1, 1])
  8. aux_heads.0.regressor.3.bias torch.Size([1])
  9. aux_heads.1.regressor.0.layer.0.weight torch.Size([128, 256, 3, 3])
  10. aux_heads.1.regressor.0.layer.0.bias torch.Size([128])
  11. aux_heads.1.regressor.1.layer.0.weight torch.Size([64, 128, 3, 3])
  12. aux_heads.1.regressor.1.layer.0.bias torch.Size([64])
  13. aux_heads.1.regressor.2.layer.0.weight torch.Size([32, 64, 3, 3])
  14. aux_heads.1.regressor.2.layer.0.bias torch.Size([32])
  15. aux_heads.1.regressor.3.weight torch.Size([1, 32, 1, 1])
  16. aux_heads.1.regressor.3.bias torch.Size([1])


Total number of parameters in LOCA: 447974251

Total number of parameters in CCFF: 411099136 (this one module's parameter count is huge)
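
The 411M figure follows directly from the shapes: a single 3584 → 3584 3×3 conv alone holds 3584 × 3584 × 9 = 115,605,504 weights, and CCFF has three of them. A quick check that reproduces the reported count exactly:

```python
c = 3584
conv1x1 = c * c        # 12,845,056 weights per 1x1 conv
conv3x3 = c * c * 9    # 115,605,504 weights per 3x3 conv
norm = 2 * c           # weight + bias per norm layer

total = 2 * (conv1x1 + norm)                    # ccff.conv1, ccff.conv2
total += 3 * (conv3x3 + norm + conv1x1 + norm)  # three bottlenecks
print(total)  # 411099136, i.e. over 90% of LOCA's 447,974,251 parameters
```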
