A Categorized Collection of Few-Shot Learning (FSL) Papers and Open-Source Code from Top Conferences, 2020-2023

This post shares 160+ few-shot learning (FSL) papers from major top conferences over the past four years (2020-2023), covering the three main families of FSL methods — data, model, and algorithm — as well as FSL applications, techniques, and theory.

Since there are too many papers to analyze and summarize one by one, I suggest bookmarking this post and working through them at your own pace.

All 160+ papers and their open-source code can be collected directly at the end of this post.

Data (12 papers)

Towards better understanding and better generalization of low-shot classification in histology images with contrastive learning

Method overview: This paper pushes few-shot learning research on histology images forward by setting up three cross-domain tasks that simulate real clinical problems. To achieve annotation efficiency and better generalization, the authors propose combining contrastive learning with latent augmentation to build the few-shot system. Contrastive learning learns useful representations without manual labels, while latent augmentation transfers the semantic variation of the base dataset in an unsupervised way. Together, the two make full use of unlabeled training data and can be extended to other data-hungry problems.
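To make the latent-augmentation idea concrete, below is a minimal NumPy sketch of one plausible form of it: borrowing a covariance estimated on the base classes to sample extra feature points around each support embedding, then classifying queries with nearest prototypes. The function names and the Gaussian form of the perturbation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def latent_augment(support_feats, base_covariance, n_aug=100, rng=None):
    """Latent-augmentation sketch: enlarge a tiny support set by sampling Gaussian
    perturbations whose covariance is borrowed from the base classes.
    support_feats: (n_support, dim); base_covariance: (dim, dim)."""
    rng = np.random.default_rng(rng)
    aug = [support_feats]
    for feat in support_feats:
        # sample extra points around each real support embedding
        aug.append(rng.multivariate_normal(feat, base_covariance, size=n_aug))
    return np.concatenate(aug, axis=0)

def prototype_predict(query_feats, class_feats):
    """Nearest-prototype classification over (possibly augmented) class features.
    class_feats maps label -> array of shape (n_i, dim)."""
    protos = {c: f.mean(axis=0) for c, f in class_feats.items()}
    labels = list(protos)
    dists = np.stack(
        [np.linalg.norm(query_feats - protos[c], axis=1) for c in labels], axis=1)
    return [labels[i] for i in dists.argmin(axis=1)]
```

The augmented features simply join the real support features when computing each class prototype, which is why the two helpers compose directly.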

  1. FlipDA: Effective and robust data augmentation for few-shot learning

  2. PromDA: Prompt-based data augmentation for low-resource NLU tasks

  3. Generating representative samples for few-shot classification

  4. FeLMi : Few shot learning with hard mixup

  5. Understanding cross-domain few-shot learning based on domain similarity and few-shot difficulty

  6. Label hallucination for few-shot classification

  7. STUNT: Few-shot tabular learning with self-generated tasks from unlabeled tables

  8. Unsupervised meta-learning via few-shot pseudo-supervised contrastive learning

  9. Progressive mix-up for few-shot supervised multi-source domain transfer

  10. Cross-level distillation and feature denoising for cross-domain few-shot classification

  11. Tuning language models as training data generators for augmentation-enhanced few-shot learning

Model (35 papers)

Multi-Task Learning

When does self-supervision improve few-shot learning?

Method overview: Although the benefits of self-supervised learning (SSL) may grow with larger training datasets, the authors observe that SSL can actually hurt performance when the images used for meta-learning and for self-supervision come from different distributions. They conduct a detailed analysis by systematically varying the degree of domain shift and evaluating several meta-learning algorithms across multiple domains. Based on this analysis, they propose a technique that automatically selects SSL images suited to a particular dataset from a large pool of generic unlabeled images, which further improves performance.
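As a rough illustration of the selection step, the sketch below scores images from a generic unlabeled pool by cosine similarity to the mean embedding of the target dataset and keeps the closest ones for self-supervised training. This similarity heuristic and the function signature are assumptions made for illustration; the paper's actual selector may estimate domain membership differently.

```python
import numpy as np

def select_ssl_images(target_feats, pool_feats, top_k=10000):
    """Illustrative selection heuristic (an assumption, not the paper's exact recipe):
    rank unlabeled pool images by cosine similarity between their embedding and the
    mean embedding of the target dataset, keeping the top_k closest for SSL."""
    target_center = target_feats.mean(axis=0)
    target_center /= np.linalg.norm(target_center)
    pool_norm = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    scores = pool_norm @ target_center      # cosine similarity to the target domain
    return np.argsort(-scores)[:top_k]      # indices of the selected pool images
```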

  1. Pareto self-supervised training for few-shot learning

  2. Bridging multi-task learning and meta-learning: Towards efficient training and effective adaptation

  3. Task-level self-supervision for cross-domain few-shot learning

Embedding / Metric Learning

Few-shot learning as cluster-induced voronoi diagrams: A geometric approach

Method overview: Few-shot learning still suffers from limited generalization. Taking a geometric perspective, this paper shows that the popular ProtoNet model is essentially a Voronoi diagram in feature space. By exploiting cluster-induced Voronoi diagrams, the space partition can be progressively refined, improving accuracy and robustness at multiple stages of few-shot learning. The resulting framework is mathematically elegant and geometrically interpretable; it compensates for extreme data scarcity, prevents overfitting, and enables fast geometric reasoning.
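The sketch below illustrates the geometric picture under simplifying assumptions: with one representative per class, nearest-prototype assignment partitions feature space into ordinary Voronoi cells (ProtoNet); with a small cluster of representatives per class and a power-law influence function, the cells become cluster-induced. The particular influence function and parameter names are illustrative, not the paper's exact definitions.

```python
import numpy as np

def civd_predict(query, class_clusters, alpha=-2.0):
    """Illustrative cluster-induced Voronoi assignment: each class is a small cluster
    of representative points, and a query joins the class with the largest aggregate
    influence, where each representative contributes ||query - c||^alpha (alpha < 0
    favours nearby points). With a single representative per class this reduces to
    ProtoNet's nearest-prototype rule, i.e. an ordinary Voronoi partition."""
    best_label, best_score = None, -np.inf
    for label, reps in class_clusters.items():   # reps: (n_reps, dim)
        d = np.linalg.norm(reps - query, axis=1) + 1e-12
        score = np.sum(d ** alpha)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```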

  1. Few-shot learning with siamese networks and label tuning

  2. Matching feature sets for few-shot image classification

  3. EASE: Unsupervised discriminant subspace learning for transductive few-shot learning

  4. Cross-domain few-shot learning with task-specific adapters

  5. Rethinking generalization in few-shot classification

  6. Hybrid graph neural networks for few-shot learning

  7. Hubs and hyperspheres: Reducing hubness and improving transductive few-shot learning with hyperspherical embeddings

  8. Revisiting prototypical network for cross domain few-shot learning

  9. Transductive few-shot learning with prototype-based label propagation by iterative graph refinement

  10. Few-sample feature selection via feature manifold learning

  11. Interval bound interpolation for few-shot learning with few tasks

  12. A closer look at few-shot classification again

  13. TART: Improved few-shot text classification using task-adaptive reference transformation

Learning with External Memory

Dynamic memory induction networks for few-shot text classification

Method overview: This paper proposes Dynamic Memory Induction Networks for few-shot text classification. The model uses dynamic routing to give memory-based few-shot learning greater flexibility, so that it can better adapt to the support set, a key capability for few-shot classification models. On top of this, the authors further develop an induction model that incorporates query information, aiming to strengthen the generalization ability of meta-learning.
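For readers unfamiliar with dynamic routing, here is a minimal NumPy sketch of the routing-by-agreement loop that induces a single class vector from a handful of support embeddings. It is a bare-bones illustration (no learned transformation matrices, no memory module), so treat the exact update rule as an assumption rather than the paper's full model.

```python
import numpy as np

def squash(v, eps=1e-9):
    """Capsule-style squashing: keeps direction, bounds the norm in [0, 1)."""
    n2 = np.sum(v * v)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(support_embeds, n_iters=3):
    """Minimal routing-by-agreement loop: iteratively re-weight one class's support
    embeddings so that samples agreeing with the induced class vector get larger
    routing weights. support_embeds: (n_support, dim)."""
    logits = np.zeros(len(support_embeds))
    for _ in range(n_iters):
        weights = np.exp(logits) / np.exp(logits).sum()   # softmax coupling coefficients
        class_vec = squash(np.sum(weights[:, None] * support_embeds, axis=0))
        logits = logits + support_embeds @ class_vec      # agreement update
    return class_vec
```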

  1. Few-shot visual learning with contextual memory and fine-grained calibration

  2. Learn from concepts: Towards the purified memory for few-shot learning

  3. Prototype memory and attention mechanisms for few shot image generation

  4. Hierarchical variational memory for few-shot learning across domains

  5. Remember the difference: Cross-domain few-shot semantic segmentation via meta-memory transfer

  6. Consistent prototype learning for few-shot continual relation extraction

Generative Modeling

Few-shot relation extraction via bayesian meta-learning on relation graphs

Method overview: The authors propose a new Bayesian meta-learning approach for effectively learning the posterior distribution over relation prototype vectors, where the prior on the prototype vectors is parameterized by a graph neural network defined on a global relation graph. In addition, to optimize the posterior over prototype vectors effectively, the authors use stochastic gradient Langevin dynamics related to the MAML algorithm, which captures the uncertainty of the prototype vectors; the whole framework can be optimized efficiently end to end.
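To show what sampling the prototype posterior with stochastic gradient Langevin dynamics (SGLD) looks like in code, here is a minimal sketch. The `grad_log_post` callback (e.g., support-set log-likelihood gradient plus the gradient of the GNN-parameterized prior) is a hypothetical placeholder supplied by the caller; the update itself is the standard SGLD rule, not the paper's full MAML-style procedure.

```python
import numpy as np

def sgld_sample_prototypes(init_protos, grad_log_post, n_steps=100, step_size=1e-3, rng=None):
    """Minimal SGLD sketch for drawing samples from the posterior over prototype vectors.
    Each step follows theta <- theta + (eps/2) * grad log p(theta | data) + N(0, eps)."""
    rng = np.random.default_rng(rng)
    protos = init_protos.copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(step_size), size=protos.shape)
        protos = protos + 0.5 * step_size * grad_log_post(protos) + noise
        samples.append(protos.copy())
    return samples
```

Averaging predictions over the returned samples is one simple way to propagate prototype uncertainty into classification.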

  1. Interventional few-shot learning

  2. Modeling the probabilistic distribution of unlabeled data for one-shot medical image segmentation

  3. SCHA-VAE: Hierarchical context aggregation for few-shot generation

  4. Diversity vs. Recognizability: Human-like generalization in one-shot generative models

  5. Generalized one-shot domain adaptation of generative adversarial networks

  6. Towards diverse and faithful one-shot adaption of generative adversarial networks

  7. Few-shot cross-domain image generation via inference-time latent-code learning

  8. Adaptive IMLE for few-shot pretraining-free generative modelling

  9. MetaModulation: Learning variational feature hierarchies for few-shot learning with fewer tasks

Algorithm (24 papers)

Refining Existing Parameters

  1. Revisit finetuning strategy for few-shot learning to transfer the emdeddings

  2. Prototypical calibration for few-shot learning of language models

  3. Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners

  4. Supervised masked knowledge distillation for few-shot transformers

  5. Hint-Aug: Drawing hints from foundation vision transformers towards boosted few-shot parameter-efficient tuning

  6. Few-shot learning with visual distribution calibration and cross-modal distribution alignment

  7. MetricPrompt: Prompting model as a relevance metric for few-shot text classification

  8. Multitask pre-training of modular prompt for Chinese few-shot learning

  9. Cold-start data selection for better few-shot language model fine-tuning: A prompt-based uncertainty propagation approach

  10. Instruction induction: From few examples to natural language task descriptions

  11. Hierarchical verbalizer for few-shot hierarchical text classification

Refining Meta-Learned Parameters

  1. How to train your MAML to excel in few-shot classification

  2. Meta-learning with fewer tasks through task interpolation

  3. Dynamic kernel selection for improved generalization and memory efficiency in meta-learning

  4. What matters for meta-learning vision regression tasks?

  5. Stochastic deep networks with linear competing units for model-agnostic meta-learning

  6. Robust meta-learning with sampling noise and label noise via Eigen-Reptile

  7. Attentional meta-learners for few-shot polythetic classification

  8. PLATINUM: Semi-supervised model agnostic meta-learning using submodular mutual information

  9. FAITH: Few-shot graph classification with hierarchical task graphs

  10. A contrastive rule for meta-learning

  11. Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks

Learning the Search Steps

  1. Optimization as a model for few-shot learning

  2. Meta Navigator: Search for a good adaptation policy for few-shot learning

Applications (57 papers)

Computer Vision

  1. Analogy-forming transformers for few-shot 3D parsing

  2. Universal few-shot learning of dense prediction tasks with visual token matching

  3. Meta learning to bridge vision and language models for multimodal few-shot learning

  4. Few-shot geometry-aware keypoint localization

  5. AsyFOD: An asymmetric adaptation paradigm for few-shot domain adaptive object detection

  6. A strong baseline for generalized few-shot semantic segmentation

  7. StyleAdv: Meta style adversarial training for cross-domain few-shot learning

  8. DiGeo: Discriminative geometry-aware learning for generalized few-shot object detection

  9. Hierarchical dense correlation distillation for few-shot segmentation

  10. CF-Font: Content fusion for few-shot font generation

  11. MoLo: Motion-augmented long-short contrastive learning for few-shot action recognition

  12. MIANet: Aggregating unbiased instance and general information for few-shot semantic segmentation

  13. FreeNeRF: Improving few-shot neural rendering with free frequency regularization

  14. Exploring incompatible knowledge transfer in few-shot image generation

  15. Where is my spot? few-shot image generation via latent subspace optimization

  16. FGNet: Towards filling the intra-class and inter-class gaps for few-shot segmentation

  17. GeCoNeRF: Few-shot neural radiance fields via geometric consistency

Robotics

  1. One solution is not all you need: Few-shot extrapolation via structured MaxEnt RL

  2. Bowtie networks: Generative modeling for joint few-shot recognition and novel-view synthesis

  3. Demonstration-conditioned reinforcement learning for few-shot imitation

  4. Hierarchical few-shot imitation with skill transition models

  5. Prompting decision transformer for few-shot policy generalization

  6. Stage conscious attention network (SCAN): A demonstration-conditioned policy for few-shot imitation

  7. Online prototype alignment for few-shot policy transfer

Natural Language Processing

  1. A dual prompt learning framework for few-shot dialogue state tracking

  2. CLUR: Uncertainty estimation for few-shot text classification with contrastive learning

  3. Few-shot document-level event argument extraction

  4. MetaAdapt: Domain adaptive few-shot misinformation detection via meta learning

  5. Code4Struct: Code generation for few-shot event structure prediction

  6. MANNER: A variational memory-augmented model for cross domain few-shot named entity recognition

  7. Few-shot event detection: An empirical study and a unified view

  8. CodeIE: Large code generation models are better few-shot information extractors

  9. Few-shot in-context learning on knowledge base question answering

  10. Linguistic representations for fewer-shot relation extraction across domains

  11. Few-shot reranking for multi-hop QA via language model prompting

Knowledge Graphs

  1. Adaptive attentional network for few-shot knowledge graph completion

  2. Learning inter-entity-interaction for few-shot knowledge graph completion

  3. Few-shot relational reasoning via connection subgraph pretraining

  4. Hierarchical relational learning for few-shot knowledge graph completion

  5. The unreasonable effectiveness of few-shot learning for machine translation

Audio Signal Processing

  1. Audio2Head: Audio-driven one-shot talking-head generation with natural head motion

  2. Few-shot low-resource knowledge graph completion with multi-view task representation generation

  3. Normalizing flow-based neural process for few-shot knowledge graph completion

Recommender Systems

  1. Few-shot news recommendation via cross-lingual transfer

  2. ColdNAS: Search to modulate for user cold-start recommendation

  3. Contrastive collaborative filtering for cold-start item recommendation

  4. SMINet: State-aware multi-aspect interests representation network for cold-start users recommendation

  5. Multimodality helps unimodality: Cross-modal few-shot learning with multimodal models

  6. M2EU: Meta learning for cold-start recommendation via enhancing user preference estimation

  7. Aligning distillation for cold-start item recommendation

Others

  1. Context-enriched molecule representations improve few-shot drug discovery

  2. Sequential latent variable models for few-shot high-dimensional time-series forecasting

  3. Transfer NAS with meta-learned Bayesian surrogates

  4. Few-shot domain adaptation for end-to-end communication

  5. Contrastive meta-learning for few-shot node classification

  6. Task-equivariant graph few-shot learning

  7. Leveraging transferable knowledge concept graph embedding for cold-start cognitive diagnosis

Theory (7 papers)

  1. Bridging the gap between practice and PAC-Bayes theory in few-shot meta-learning

  2. Generalization bounds for meta-learning: An information-theoretic analysis

  3. Generalization bounds for meta-learning via PAC-Bayes and uniform stability

  4. Unraveling model-agnostic meta-learning via the adaptation learning rate

  5. On the importance of Firth bias reduction in few-shot classification

  6. Global convergence of MAML and theory-inspired neural architecture search for few-shot learning

  7. Smoothed embeddings for certified few-shot learning

Few-Shot / Zero-Shot Learning (15 papers)

  1. Finetuned language models are zero-shot learners

  2. Zero-shot stance detection via contrastive learning

  3. JointCL: A joint contrastive learning framework for zero-shot stance detection

  4. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification

  5. Nearest neighbor zero-shot inference

  6. Continued pretraining for better zero- and few-shot promptability

  7. InstructDial: Improving zero and few-shot generalization in dialogue through instruction tuning

  8. Prompt-and-Rerank: A method for zero-shot and few-shot arbitrary textual style transfer with small language models

  9. Learning instructions with unlabeled data for zero-shot cross-task generalization

  10. Zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt

  11. Finetune like you pretrain: Improved finetuning of zero-shot vision models

  12. SemSup-XC: Semantic supervision for zero and few-shot extreme classification

  13. Zero- and few-shot event detection via prompt-based meta learning

  14. HINT: Hypernetwork instruction tuning for efficient zero- and few-shot generalisation

  15. What does the failure to reason with "respectively" in zero/few-shot settings tell us about language models? (ACL 2023)

Few-Shot Learning Variants (12 papers)

  1. FiT: Parameter efficient few-shot transfer learning for personalized and federated image classification

  2. Towards addressing label skews in one-shot federated learning

  3. Data-free one-shot federated learning under very high statistical heterogeneity

  4. Contrastive meta-learning for partially observable few-shot learning

  5. On the soft-subnetwork for few-shot class incremental learning

  6. Warping the space: Weight space rotation for class-incremental few-shot learning

  7. Neural collapse inspired feature-classifier alignment for few-shot class-incremental learning

  8. Learning with fantasy: Semantic-aware virtual contrastive constraint for few-shot class-incremental learning

  9. Few-shot class-incremental learning via class-aware bilateral distillation

  10. Glocal energy-based learning for few-shot open-set recognition

  11. Open-set likelihood maximization for few-shot learning

  12. Federated few-shot learning

Datasets / Benchmarks (5 papers)

  1. FewNLU: Benchmarking state-of-the-art methods for few-shot natural language understanding

  2. Bongard-HOI: Benchmarking few-shot visual reasoning for human-object interactions

  3. Hard-Meta-Dataset++: Towards understanding few-shot performance on difficult tasks

  4. MEWL: Few-shot multimodal word learning with referential uncertainty

  5. UNISUMM and SUMMZOO: Unified model and diverse benchmark for few-shot summarization

Follow 【学姐带你玩AI】 below 🚀🚀🚀

Reply with "小样本160" to receive the full collection of papers and source code.

Writing this up took quite a bit of effort; likes, comments, and bookmarks are all appreciated!
