[TPAMI 2024] Vision-Language Models for Vision Tasks: A Survey


Paper link: Vision-Language Models for Vision Tasks: A Survey | IEEE Journals & Magazine | IEEE Xplore

Paper GitHub page: GitHub - jingyi0000/VLM_survey: Collection of AWESOME vision-language models for vision tasks

The English here is all hand-typed, summarizing and paraphrasing the original paper. Spelling and grammar mistakes are hard to avoid; if you spot any, corrections in the comments are welcome. This post is written as study notes, so read it with that in mind.

Contents

1. Takeaways

2. Section-by-Section Close Reading

2.1. Abstract

2.2. Introduction

2.3. Background

2.3.1. Training Paradigms for Visual Recognition

2.3.2. Development of VLMs for Visual Recognition

2.3.3. Relevant Surveys

2.4. VLM Foundations

2.4.1. Network Architectures

2.4.2. VLM Pre-Training Objectives

2.4.3. VLM Pre-Training Frameworks

2.4.4. Evaluation Setups and Downstream Tasks

2.5. Datasets

2.5.1. Datasets for Pre-Training VLMs

2.5.2. Datasets for VLM Evaluation

2.6. Vision-Language Model Pre-Training

2.6.1. VLM Pre-Training With Contrastive Objectives

2.6.2. VLM Pre-Training With Generative Objectives

2.6.3. VLM Pre-Training With Alignment Objectives

2.6.4. Summary and Discussion

2.7. VLM Transfer Learning

2.7.1. Motivation of Transfer Learning

2.7.2. Common Setup of Transfer Learning

2.7.3. Common Transfer Learning Methods

2.7.4. Summary and Discussion

2.8. VLM Knowledge Distillation

2.8.1. Motivation of Distilling Knowledge From VLMs

2.8.2. Common Knowledge Distillation Methods

2.8.3. Summary and Discussion

2.9. Performance Comparison

2.9.1. Performance of VLM Pre-Training

2.9.2. Performance of VLM Transfer Learning

2.9.3. Performance of VLM Knowledge Distillation

2.9.4. Summary

2.10. Future Directions

2.11. Conclusion

3. Reference


1. Takeaways

(1) Reading this as a break again. It has been a while since I last read a TPAMI paper; I have always trusted TPAMI's quality, so time to give this one a careful read.

(2) Compared with walking through n models at length, highlighting only the key point of each model works really well. Like another TPAMI survey I read before, this one mostly just presents the losses. Over-long dataset descriptions also get tedious, and this paper keeps them concise. Good, good.

(3) The nice part is that many of the summaries are put into tables, so there is none of that feeling of text being crammed down your throat.

(4) When introducing novel models, there is no need to describe each one from start to finish; it is enough to highlight the specific aspect in which each is novel and write up just the innovation.

(5) Quite good overall; I would give it a fairly high rating.

2. Section-by-Section Close Reading

2.1. Abstract

        ①Existing problem: a separate DNN has to be trained for each visual task, which is laborious and time-consuming

        ②Content: a) background of VLMs for visual tasks, b) foundations of VLMs, c) datasets, d) pre-training, transfer learning, and knowledge distillation methods for VLMs, e) benchmarks, f) challenges

laborious  adj. requiring considerable effort; toilsome

2.2. Introduction

        ①New paradigm: Pre-training (on large-scale data with or without labels), Fine-tuning (on task-specific labelled data), and Prediction; see (a) and (b):

        ②Vision-Language Model Pre-training and Zero-shot Prediction, which does not need fine-tuning:

        ③The number of VLM publications on Google Scholar over time:

frisbee  n. a flying disc used in throwing games

2.3. Background

2.3.1. Training Paradigms for Visual Recognition

(1)Traditional Machine Learning and Prediction

        ①Mostly hand-crafted and lightweight, but hard to cope with complex or multiple tasks

        ②Poor scalability

(2)Deep Learning From Scratch and Prediction

        ①Slow convergence when training from scratch

        ②A large amount of labelled data is needed

(3)Supervised Pre-Training, Fine-Tuning and Prediction

        ①Speed up convergence

(4)Unsupervised Pre-Training, Fine-Tuning & Prediction

        ①Does not require labelled data

        ②Better performance thanks to learning from larger amounts of data

(5)VLM Pre-Training and Zero-Shot Prediction

        ①Discarding fine-tuning

        ②Future directions: a) large scale informative image-text data, b) high-capacity models, c) new pre-training objectives

2.3.2. Development of VLMs for Visual Recognition

        ①Three aspects in which VLMs for visual recognition have been improved:

2.3.3. Relevant Surveys

        ①Framework of this survey:

2.4. VLM Foundations

2.4.1. Network Architectures

        ①Number of image-text pairs: N

        ②The pre-training dataset of image-text pairs: \mathcal{D}=\left \{ x^I_n, x^T_n\right \}^N_{n=1}, where the superscript I marks an image sample and T marks a text sample

        ③Image encoder and text encoder (deep networks): f_\theta and f_\phi

        ④Encoding: z_n^I=f_\theta(x_n^I) and z_n^T=f_\phi(x_n^T)

(1)Architectures for Learning Image Features

        ①CNN-based architectures: such as VGG, ResNet and EfficientNet

        ②Transformer-based architectures: such as ViT

(2)Architectures for Learning Language Features

        ①The standard Transformer framework: 6 blocks in the encoder (each with a multi-head self-attention layer and an MLP) and 6 blocks in the decoder (each with a masked multi-head self-attention layer, a multi-head cross-attention layer over the encoder output, and an MLP)

2.4.2. VLM Pre-Training Objectives

(1)Contrastive Objectives

        ①Image Contrastive Learning: pull a query close to its positive keys and push it away from its negative keys in the embedding space. For a batch of B images (the paper phrases this in terms of the batch size, which mirrors how the code is written; conceptually you can simply read B as the total number of samples), the loss is usually:

\mathcal{L}_I^\mathrm{InfoNCE}=-\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp{(z_i^I\cdot z_+^I/\tau)}}{\sum_{j=1,j\neq i}^{B+1}\exp(z_i^I\cdot z_j^I/\tau)}

where z_i^I denotes the query embedding, \{z_j^I\}_{j=1,j\neq i}^{B+1} denotes the key embeddings, z_+^I denotes the positive key of the i-th sample, and \tau is the temperature hyper-parameter
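A minimal PyTorch sketch of this image InfoNCE loss; the function name and the one-positive-plus-K-negatives tensor layout are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def image_infonce(query, positive, negatives, tau=0.07):
    """Image InfoNCE: each query has one positive key and K negative keys.

    query:     (B, D) image query embeddings  z_i^I
    positive:  (B, D) positive key embeddings z_+^I
    negatives: (B, K, D) negative key embeddings
    """
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (query * positive).sum(-1, keepdim=True) / tau       # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", query, negatives) / tau  # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)               # (B, 1+K)
    # With the positive fixed at index 0, cross-entropy equals -log softmax(positive),
    # i.e. the InfoNCE term averaged over the B queries.
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)
```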

        ②Image-Text Contrastive Learning: pull paired image and text embeddings closer and push unpaired ones apart:

\begin{gathered} \mathcal{L}_{I\to T} =-\frac1B\sum_{i=1}^B\log\frac{\exp{(z_i^I\cdot z_i^T/\tau)}}{\sum_{j=1}^B\exp(z_i^I\cdot z_j^T/\tau)} \\ \mathcal{L}_{T\to I} =-\frac1B\sum_{i=1}^B\log\frac{\exp{(z_i^T\cdot z_i^I/\tau)}}{\sum_{j=1}^B\exp(z_i^T\cdot z_j^I/\tau)}\\ \mathcal{L}_{\mathrm{infoNCE}}^{IT}=\mathcal{L}_{I\to T}+\mathcal{L}_{T\to I} \end{gathered}

where \mathcal{L}_{I\to T} contrasts the query image with the text keys, and \mathcal{L}_{T\to I} contrasts the query text with the image keys
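A hedged sketch of the symmetric image-text InfoNCE (CLIP-style), assuming the other texts and images in the batch serve as the negative keys:

```python
import torch
import torch.nn.functional as F

def image_text_infonce(z_img, z_txt, tau=0.07):
    """Symmetric InfoNCE over a batch of B paired (image, text) embeddings.

    z_img, z_txt: (B, D); the i-th text is the positive key for the i-th image
    and vice versa, while the remaining B-1 in-batch samples act as negatives.
    """
    z_img = F.normalize(z_img, dim=-1)
    z_txt = F.normalize(z_txt, dim=-1)
    logits = z_img @ z_txt.t() / tau                 # (B, B) similarity matrix
    targets = torch.arange(z_img.size(0), device=z_img.device)
    loss_i2t = F.cross_entropy(logits, targets)      # L_{I->T}: rows = image queries
    loss_t2i = F.cross_entropy(logits.t(), targets)  # L_{T->I}: rows = text queries
    return loss_i2t + loss_t2i
```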

        ③Image-Text-Label Contrastive Learning: supervised:

\begin{gathered} \mathcal{L}_{I\to T}^{ITL} =-\sum_{i=1}^B\frac1{|\mathcal{P}(i)|}\sum_{k\in\mathcal{P}(i)}\log\frac{\exp{(z_i^I\cdot z_k^T/\tau)}}{\sum_{j=1}^B\exp(z_i^I\cdot z_j^T/\tau)} \\ \mathcal{L}_{T\to I}^{ITL} =-\sum_{i=1}^B\frac1{|\mathcal{P}(i)|}\sum_{k\in\mathcal{P}(i)}\log\frac{\exp{(z_i^T\cdot z_k^I/\tau)}}{\sum_{j=1}^B\exp(z_i^T\cdot z_j^I/\tau)}\\ \mathcal{L}_{\mathrm{infoNCE}}^{ITL}=\mathcal{L}_{I\to T}^{ITL}+\mathcal{L}_{T\to I}^{ITL} \end{gathered}

where \mathcal{P}(i)=\{k\mid k\in B,\,y_k=y_i\} and y denotes the class label of (z^I,z^T) (in effect, an extra inner loop over all samples of the same class is added)
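A sketch of the supervised image-text-label variant, shown for the I→T direction only (the T→I term is symmetric); the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def image_text_label_infonce_i2t(z_img, z_txt, labels, tau=0.07):
    """Image-text-label contrastive loss, I->T direction.

    z_img, z_txt: (B, D) paired embeddings; labels: (B,) integer class labels.
    Every text whose class label equals the query image's label is a positive,
    i.e. the set P(i) in the equation above.
    """
    z_img = F.normalize(z_img, dim=-1)
    z_txt = F.normalize(z_txt, dim=-1)
    logits = z_img @ z_txt.t() / tau                                # (B, B)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_mask = (labels[:, None] == labels[None, :]).float()         # P(i) mask
    loss_per_query = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1)
    return loss_per_query.sum()  # the full objective adds the symmetric T->I term
```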

(2)Generative Objectives

        ①Masked Image Modelling: learns cross-patch correlation by masking a set of patches and reconstructing them from the rest. The loss is usually:

\mathcal{L}_{MIM}=-\frac1B\sum_{i=1}^B\log f_\theta(\overline{x}_i^I\mid\hat{x}_i^I)

where \overline{x}_i^I denotes the masked patches and \hat{x}_i^I the unmasked patches (the conditional reads as the likelihood of the masked patches given the visible, unmasked ones, i.e., the model reconstructs what was masked from what remains visible)
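A simplified masked-image-modelling sketch under stated assumptions: `encoder` and `decoder` are hypothetical modules, masked patches are zeroed out rather than dropped, and the log-likelihood is replaced by a reconstruction MSE on the masked positions:

```python
import torch

def masked_image_modelling_loss(patches, encoder, decoder, mask_ratio=0.75):
    """MIM sketch: mask random patches and reconstruct them from the visible ones.

    patches: (B, N, D) patchified images; encoder/decoder: hypothetical modules.
    """
    B, N, D = patches.shape
    num_mask = int(N * mask_ratio)
    # Per-sample random permutation; the first `num_mask` indices are masked.
    ids = torch.rand(B, N, device=patches.device).argsort(dim=1)
    mask = torch.zeros(B, N, dtype=torch.bool, device=patches.device)
    mask.scatter_(1, ids[:, :num_mask], True)

    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked patches
    recon = decoder(encoder(visible))                        # predict all patches
    # Reconstruction error computed on the masked positions only.
    return ((recon - patches) ** 2).mean(dim=-1)[mask].mean()
```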

        ②Masked Language Modelling: masks text tokens at a specific ratio:

\mathcal{L}_{MLM}=-\frac1B\sum_{i=1}^B\log f_\phi(\overline{x}_i^T\mid\hat{x}_i^T)

        ③Masked Cross-Modal Modelling: randomly masks a subset of image patches and a subset of text tokens, then reconstructs them from the unmasked ones:

\mathcal{L}_{MCM}=-\frac{1}{B}\sum_{i=1}^{B}[\log f_{\theta}(\overline{x}_{i}^{I}|\hat{x}_{i}^{I},\hat{x}_{i}^{T})+\log f_{\phi}(\overline{x}_{i}^{T}|\hat{x}_{i}^{I},\hat{x}_{i}^{T})]

        ④Image-to-Text Generation: predicts the text autoregressively given the paired image:

\mathcal{L}_{ITG}=-\sum_{l=1}^L \log f_\theta(x^T\mid x_{<l}^T,z^I)

where L denotes the number of text tokens and z^I is the embedding of the image paired with x^T
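A sketch of the autoregressive image-to-text generation loss, assuming a hypothetical decoder has already produced next-token logits conditioned on the image embedding z^I:

```python
import torch.nn.functional as F

def image_to_text_generation_loss(text_logits, text_tokens):
    """Teacher-forced captioning loss.

    text_logits: (B, L, V) logits for token l given tokens < l and the image
    text_tokens: (B, L)    ground-truth caption token ids
    """
    B, L, V = text_logits.shape
    # Shift by one: the logits at position l predict the token at position l+1.
    logits = text_logits[:, :-1, :].reshape(-1, V)
    targets = text_tokens[:, 1:].reshape(-1)
    return F.cross_entropy(logits, targets)
```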

(3)Alignment Objectives

        ①Image-Text Matching: BCE loss:

\mathcal{L}_{IT}=p\log\mathcal{S}(z^I,z^T)+(1-p)\log(1-\mathcal{S}(z^I,z^T))

where \mathcal{S}(\cdot) measures the alignment probability between the image and the text; p=1 if they match and p=0 otherwise
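A minimal sketch of the image-text matching loss, assuming a hypothetical fusion head that outputs one raw alignment score per image-text pair:

```python
import torch.nn.functional as F

def image_text_matching_loss(match_logits, is_match):
    """Binary image-text matching (ITM) loss.

    match_logits: (B,) raw scores S(z^I, z^T) before the sigmoid.
    is_match:     (B,) float targets, 1.0 for a true (image, text) pair and 0.0
                  for a mismatched pair (negatives are typically built by
                  shuffling the texts within the batch).
    """
    return F.binary_cross_entropy_with_logits(match_logits, is_match)
```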

        ②Region-Word Matching: models local cross-modal correlation in dense prediction scenes:

\mathcal{L}_{RW}=p\log\mathcal{S}^r(r^I,w^T)+(1-p)\log(1-\mathcal{S}^r(r^I,w^T))

where (r^I,w^T) denotes a region-word pair; p=1 if they match and p=0 otherwise

2.4.3. VLM Pre-Training Frameworks

        ①two-tower, two-leg and one-tower pre-training approaches:

2.4.4. Evaluation Setups and Downstream Tasks

(1)Zero-Shot Prediction

        ①Image Classification: apply prompt engineering to build text descriptions of the classes and compare the embeddings of images and texts (see the sketch after this list)

        ②Semantic Segmentation: compare the embeddings of the given image pixels and the texts

        ③Object Detection: compare the embeddings of the given object proposals and the texts

        ④Image-Text Retrieval: retrieve the requested samples from one modality given cues from the other, either text-to-image or image-to-text
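A sketch of zero-shot classification via prompt engineering (referenced in ① above), assuming hypothetical `encode_image`, `encode_text`, and `tokenizer` hooks into a pre-trained CLIP-style VLM:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(image, class_names, encode_image, encode_text, tokenizer):
    """Zero-shot prediction: compare the image embedding with the embeddings of
    prompted class descriptions and pick the most similar class."""
    prompts = [f"a photo of a {name}" for name in class_names]   # prompt engineering
    txt = F.normalize(encode_text(tokenizer(prompts)), dim=-1)   # (C, D)
    img = F.normalize(encode_image(image), dim=-1)               # (1, D)
    scores = img @ txt.t()                                       # cosine similarities
    return class_names[scores.argmax(dim=-1).item()]
```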

(2)Linear Probing

        ①Freeze the pre-trained VLM → extract embeddings → train a linear classifier to classify these embeddings (a sketch follows below)
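A linear-probing sketch under similar assumptions (`frozen_vlm_encode` is a hypothetical callable returning image embeddings); only the linear head is trained:

```python
import torch
import torch.nn as nn

def linear_probe(frozen_vlm_encode, train_images, train_labels, num_classes, dim,
                 epochs=100, lr=1e-3):
    """Freeze the VLM, extract embeddings once, then fit a linear classifier."""
    with torch.no_grad():
        feats = frozen_vlm_encode(train_images)      # (N, dim) frozen features
    head = nn.Linear(dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(feats), train_labels)
        loss.backward()
        opt.step()
    return head
```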

2.5. Datasets

         ①Widely Used Image-Text Datasets for VLM Pre-Training:

        ②Widely-Used Visual Recognition Datasets for VLM Evaluation:

2.5.1. Datasets for Pre-Training VLMs

        ①Collecting image-text data is easier and cheaper than gathering traditional crowd-labelled data

        ②⭐Some works use auxiliary datasets to provide additional information for better vision-language modelling; e.g., GLIP leverages Objects365 to extract region-level features

2.5.2. Datasets for VLM Evaluation

        ①Counts of the evaluation datasets for each task type

2.6. Vision-Language Model Pre-Training

        ①Vision-Language Model Pre-Training Methods:

2.6.1. VLM Pre-Training With Contrastive Objectives

(1)Image Contrastive Learning

        ①e.g., SLIP uses an InfoNCE loss to learn discriminative image features

(2)Image-Text Contrastive Learning

        ①Learn the correlation between paired images and texts, and push irrelevant pairs apart:

(3)Image-Text-Label Contrastive Learning

        ①Encode images, texts, and labels into one shared space:

(4)Discussion

        ①Challenge 1: jointly optimizing positive and negative pairs is complicated and challenging

        ②Challenge 2: the temperature hyper-parameter is selected heuristically

2.6.2. VLM Pre-Training With Generative Objectives

(1)Masked Image Modelling

        ①Image-patch masking strategies:

(2)Masked Language Modelling

        ①Text-token masking strategies:

(3)Masked Cross-Modal Modelling

        ①Mask image and text at the same time

(4)Image-to-Text Generation

        ①Encode images and then decode them to match the texts

(5)Discussion

        ①Generative objectives help learn rich contextual information

2.6.3. VLM Pre-Training With Alignment Objectives

(1)Image-Text Matching

        ①Match image and text pairs

(2)Region-Word Matching

        ①Match region-word pairs:

(3)Discussion

        ①Alignment objectives mostly serve to enhance contextual information or cross-modal correlation

2.6.4. Summary and Discussion

        ①Recent VLM pre-training either focuses on learning global vision-language correlation or models local fine-grained vision-language correlation via region-word matching

2.7. VLM Transfer Learning

2.7.1. Motivation of Transfer Learning

        ①Challenges for a pre-trained VLM: a) different downstream data distributions, b) different downstream tasks

2.7.2. Common Setup of Transfer Learning

        ①Unsupervised methods are more efficient and promising

2.7.3. Common Transfer Learning Methods

        ①Three types of VLM transfer methods:

(1)Transfer Via Prompt Tuning

        ①Transfer with Text Prompt Tuning (e.g., CoOp-style learnable text contexts; a sketch follows after this list):

        ②Transfer with Visual Prompt Tuning: 

        ③Transfer with Text-Visual Prompt Tuning: tune text and image prompts jointly

        ④Discussion: the main challenge is limited flexibility, since prompting has to follow the manifold (distribution) of the original VLM
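A CoOp-style text prompt tuning sketch, as referenced in ① above. The frozen text encoder and the class-token embeddings are hypothetical hooks; only `self.context` would be optimized on the few labelled downstream samples while the whole VLM stays frozen:

```python
import torch
import torch.nn as nn

class LearnableTextPrompt(nn.Module):
    """Prepend M learnable context vectors to each class-name embedding, then
    feed the resulting prompt through the frozen text encoder."""

    def __init__(self, num_context=16, embed_dim=512):
        super().__init__()
        self.context = nn.Parameter(torch.randn(num_context, embed_dim) * 0.02)

    def forward(self, class_token_embeds, frozen_text_encoder):
        # class_token_embeds: (C, L_cls, D) token embeddings of the C class names
        C = class_token_embeds.size(0)
        ctx = self.context.unsqueeze(0).expand(C, -1, -1)        # (C, M, D)
        prompts = torch.cat([ctx, class_token_embeds], dim=1)    # [V]_1..[V]_M [CLASS]
        return frozen_text_encoder(prompts)                      # (C, D) class text features
```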

(2)Transfer Via Feature Adaptation

        ①Fine-tune the features with an additional lightweight feature adapter (a CLIP-Adapter-style sketch follows below):

but this raises intellectual-property concerns
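A CLIP-Adapter-style feature adapter sketch (the module name and the residual ratio `alpha` are illustrative): a small bottleneck MLP refines the frozen VLM feature and is blended back with the original feature:

```python
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """Bottleneck MLP adapter with a residual blend of the frozen VLM feature."""

    def __init__(self, dim=512, reduction=4, alpha=0.2):
        super().__init__()
        self.alpha = alpha
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.ReLU(inplace=True),
        )

    def forward(self, frozen_feature):
        adapted = self.mlp(frozen_feature)                     # task-adapted feature
        return self.alpha * adapted + (1 - self.alpha) * frozen_feature
```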

(3)Other Transfer Methods

        ①Lists other methods

2.7.4. Summary and Discussion

        ①Two main families of VLM transfer methods: prompt tuning and feature adapters

2.8. VLM Knowledge Distillation

2.8.1. Motivation of Distilling Knowledge From VLMs

        ①VLM knowledge distillation transfers general and robust VLM knowledge to task-specific models, without being restricted by the VLM architecture

intact  adj. complete; whole; undamaged

2.8.2. Common Knowledge Distillation Methods

(1)Knowledge Distillation for Object Detection

        ①Introduces basic and prompt-based knowledge distillation methods for open-vocabulary object detection (a sketch follows below)
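A ViLD-style sketch of the "basic" distillation above: the detector's region embeddings are pulled toward the frozen VLM's image embeddings of the cropped proposals (the L1 choice follows ViLD; names are illustrative):

```python
import torch.nn.functional as F

def region_embedding_distill_loss(region_features, vlm_image_embeds):
    """Distil VLM knowledge into a detector by matching region embeddings.

    region_features:  (R, D) detector embeddings for R object proposals
    vlm_image_embeds: (R, D) frozen VLM embeddings of the cropped proposals
    """
    region_features = F.normalize(region_features, dim=-1)
    vlm_image_embeds = F.normalize(vlm_image_embeds, dim=-1)
    return F.l1_loss(region_features, vlm_image_embeds)
```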

(2)Knowledge Distillation for Semantic Segmentation

        ①Likewise, basic and weakly-supervised distillation methods

2.8.3. Summary and Discussion

        ①More flexible than transfer learning

2.9. Performance Comparison

2.9.1. Performance of VLM Pre-Training

        ①Performance comparison on image classification:

        ②Data and model size test:

        ③Main sources of VLMs' advantages: a) big data, b) big models, c) task-agnostic learning

        ④Segmentation performance:

        ⑤Detection performance:

        ⑥Limitations: a) performance saturates when the model scale keeps growing, b) heavy computation costs in pre-training, c) excessive computation and memory overheads in both training and inference

2.9.2. Performance of VLM Transfer Learning

        ①Image classification performance:

2.9.3. Performance of VLM Knowledge Distillation

        ①Object detection performance:

        ②Semantic segmentation performance:

2.9.4. Summary

        ①The baselines and evaluation setups are not unified across methods

2.10. Future Directions

(1)For VLM pretraining:

        ①Fine-grained vision-language correlation modelling

        ②Unification of vision and language learning

        ③Pre-training VLMs with multiple languages

        ④Data-efficient VLMs: extract more supervision from limited image-text pairs during training

        ⑤Pre-training VLMs with LLMs: 

(2)For VLM transfer learning:

        ①Unsupervised VLM transfer

        ②VLM transfer with visual prompt/adapter

        ③Test-time VLM transfer

        ④VLM transfer with LLMs

(3)For VLM knowledge distillation:

        ①Distil knowledge from multiple VLMs

        ②Extend to other visual tasks, such as instance segmentation, panoptic segmentation, person re-identification, etc.

panoptic  adj. panoramic; showing everything in a single view

2.11. Conclusion

        good

3. Reference

Zhang, J. et al. (2024) Vision-Language Models for Vision Tasks: A Survey. TPAMI, 46(8): 5625-5644. doi: 10.1109/TPAMI.2024.3369699
