[Paper Reading, WWW '23] Zero-shot Clarifying Question Generation for Conversational Search


Table of Contents

  • Preface
  • Motivation
  • Contributions
  • Method
    • Facet-constrained Question Generation
    • Multiform Question Prompting and Ranking
  • Experiments
  • Dataset
  • Result
    • Auto-metric evaluation
    • Human evaluation
  • Knowledge

Preface

  • I recently re-read some earlier papers, so I have reorganized my previous notes.
  • If anything is misunderstood, corrections and feedback are welcome.
  • Summary: this paper uses facet words as constraints on GPT-2 to perform zero-shot constrained generation, making the generated questions more likely to contain the facet words. It also uses prompting with eight templates, generates one question per template, and then applies ranking methods to automatically pick the final result.
  • More papers can be found at: ShiyuNee/Awesome-Conversation-Clarifying-Questions-for-Information-Retrieval: Papers about Conversation and Clarifying Questions (github.com)

Motivation

Generate clarifying questions in a zero-shot setting to overcome the cold start problem and data bias.

cold start problem: a lack of data makes the approach hard to deploy, and the lack of deployment in turn means no new data is collected

data bias: collecting supervised data that covers every possible topic is unrealistic, and training on such data also introduces bias

Contributions

  • the first to propose a zero-shot clarifying question generation system, which attempts to address the cold-start challenge of asking clarifying questions in conversational search.
  • the first to cast clarifying question generation as a constrained language generation task and show the advantage of this configuration.
  • We propose an auxiliary evaluation strategy for clarifying question generations, which removes the information-scarce question templates from both generations and references.

Method

Backbone: a checkpoint of GPT-2

  • its original inference objective is to predict the next token given all previous text

[Figure: GPT-2's autoregressive next-token prediction objective]
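For reference, a standard way to write this objective (my own restatement of the usual GPT-2 formulation, not an equation copied from the paper) is:

$$
p_\theta(x_{1:T}) = \prod_{t=1}^{T} p_\theta\big(x_t \mid x_{1:t-1}\big),
$$

so at decoding step $t$ the model scores every vocabulary token conditioned on all previously generated tokens $x_{1:t-1}$.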

Directly appending the query $q$ and facet $f$ as input and letting GPT-2 generate $cq$ raises two challenges:

  • the generation does not necessarily cover the facets.
  • the generated sentences are not necessarily phrased as clarifying questions.

We divide our system into two parts:

  • facet-constrained question generation (tackles the first challenge)
  • multi-form question prompting and ranking (tackles the second challenge by ranking the clarifying questions generated from different templates)

Facet-constrained Question Generation

Our model uses the facet words not as input but as constraints. For this we employ Neurologic Decoding, an algorithm based on beam search.

  • at step $t$, assume the candidates already in the beam are $C = \{c_{1:k}\}$, where $k$ is the beam size, $c_i = x^i_{1:(t-1)}$ is the $i$-th candidate, and $x^i_{1:(t-1)}$ denotes the tokens generated from decoding steps $1$ to $(t-1)$

    [Figure: the decoding steps of Neurologic Decoding; step (2) is the top-$\beta$ filtering and step (3) is the grouping discussed below]

    • why this method can better constrain the decoder to generate facet-related questions (a toy sketch of the grouping-and-selection step follows this list):
      • step (2), the top-$\beta$ filtering, is the main reason facet words are promoted in generations. Because of this filtering, Neurologic Decoding tends to discard generations with fewer facet words regardless of their generation probability.
      • step (3), the grouping, is the key for Neurologic Decoding to explore as many branches as possible, because this grouping keeps the largest number of facet-word inclusion states ($2^{|f|}$ in total), allowing the decoder to cover the most possible orderings of the constraints during generation.
        • if we simply kept the top $k$ candidates, several of them might contain the same facets, so fewer distinct facet combinations would survive; by first choosing the best candidate in each group and only then taking the top $k$, the kept candidates cover different facet-inclusion states.
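Below is a minimal, self-contained sketch of that grouping-and-selection step. It is not the authors' implementation: the function name, the toy candidates, and the scores are invented, and a real Neurologic decoder applies this to partial hypotheses at every decoding step rather than to finished strings.

```python
# Toy sketch (not the paper's code) of the Neurologic-style step:
# prefer candidates that satisfy more facet constraints, group the survivors
# by the exact set of facets they contain, and keep the best of each group
# so the beam covers as many facet-inclusion states as possible.
from collections import defaultdict

def neurologic_style_select(candidates, facets, beam_size, beta):
    """candidates: list of (text, log_prob) pairs; facets: list of facet words."""
    def satisfied(text):
        # which facet constraints this candidate already satisfies
        return frozenset(w for w in facets if w in text.lower())

    scored = [(text, logp, satisfied(text)) for text, logp in candidates]

    # step (2): top-beta filtering -- rank by number of satisfied constraints,
    # break ties by log-probability, and drop the rest
    scored.sort(key=lambda c: (len(c[2]), c[1]), reverse=True)
    kept = scored[:beta]

    # step (3): group by the exact set of satisfied facets (up to 2^|f| groups)
    groups = defaultdict(list)
    for text, logp, sat in kept:
        groups[sat].append((text, logp))

    # keep the most probable candidate of each group, then fill the beam
    best_per_group = [max(g, key=lambda c: c[1]) for g in groups.values()]
    best_per_group.sort(key=lambda c: c[1], reverse=True)
    return best_per_group[:beam_size]

# toy usage with invented candidates and log-probabilities
cands = [("would you like to know the history of south africa", -4.1),
         ("would you like to know more", -3.0),
         ("are you interested in the history or the geography", -5.2)]
print(neurologic_style_select(cands, ["history", "geography"], beam_size=2, beta=3))
```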

Multiform Question Prompting and Ranking

Use clarifying question templates as the starting text of the generation and let the decoder generate the rest of the question body.

  • if the query $q$ is “I am looking for information about South Africa.”, we give the decoder “I am looking for information about South Africa. [SEP] would you like to know” as input and let it generate the rest.
  • we use multiple prompts (templates) both to cover more ways of clarification and to avoid boring the user

For each query, we append each of the eight prompts to the query, forming eight inputs and generating eight questions.

  • we then use ranking methods to choose the best one as the returned question (a sketch of this prompting-and-ranking loop follows below)
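A minimal sketch of the prompting-and-ranking loop, under clearly labeled assumptions: the three templates here are illustrative stand-ins for the paper's eight, the query and template are joined with a plain space rather than the [SEP] shown above, and ranking by average per-token negative log-likelihood under GPT-2 is my own choice of ranker, since the paper only refers to "ranking methods".

```python
# Sketch: generate one candidate question per template, then keep the
# completion GPT-2 itself finds most fluent (lowest average NLL).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

TEMPLATES = [  # hypothetical examples standing in for the eight templates
    "would you like to know",
    "are you looking for",
    "do you want to learn about",
]

def generate_candidates(query):
    candidates = []
    for template in TEMPLATES:
        prompt = f"{query} {template}"
        inputs = tokenizer(prompt, return_tensors="pt")
        output = model.generate(
            **inputs,
            max_new_tokens=15,
            num_beams=4,
            pad_token_id=tokenizer.eos_token_id,
        )
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        candidates.append(text[len(query):].strip())  # keep template + completion
    return candidates

def lm_score(text):
    # average per-token negative log-likelihood under GPT-2 (lower is better)
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return loss.item()

query = "I am looking for information about South Africa."
best = min(generate_candidates(query), key=lm_score)
print(best)
```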

Experiments

Zero-shot clarifying question generation with existing baselines (a sketch of their input construction follows this list):

  • Q-GPT-0
    • input: query
  • QF-GPT-0:
    • input: facet + query
  • Prompt-based GPT-0: includes a special instructional prompt as input
    • input: q “Ask a question that contains words in the list [f]”
  • Template-0: a template-guided approach using GPT-2
    • input: add the eight question templates during decoding and generate the rest of the question
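For concreteness, here is a hypothetical sketch of how the four zero-shot baseline inputs could be assembled; the exact strings, the separator, and the sample template are illustrative, not taken from the paper's code.

```python
# Hypothetical input construction for the four zero-shot baselines
# (strings and the sample template are illustrative, not from the paper's code).
def build_zero_shot_input(baseline, q, f, template="would you like to know"):
    if baseline == "Q-GPT-0":
        return q                                    # query only
    if baseline == "QF-GPT-0":
        return f"{f} {q}"                           # facet + query
    if baseline == "Prompt-based GPT-0":
        return f'{q} "Ask a question that contains words in the list [{f}]"'
    if baseline == "Template-0":
        return f"{q} {template}"                    # query + question template
    raise ValueError(f"unknown baseline: {baseline}")

print(build_zero_shot_input("QF-GPT-0",
                            "I am looking for information about South Africa.",
                            "history"))
```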

Existing facet-driven baselines (finetuned):

  • Template-facet: append the facet word right after the question template

[Figure: example of the Template-facet baseline]

  • QF-GPT: a GPT-2 finetuning version of QF-GPT-0.
    • finetuned on a set of tuples of the form f [SEP] q [BOS] cq [EOS]
  • Prompt-based finetuned GPT: a finetuning version of Prompt-based GPT-0
    • finetunes GPT-2 on inputs with the structure: q "Ask a question that contains words in the list [f]." cq

Note: simple facet-input finetuning is highly inefficient at informing the decoder to generate facet-related questions, as shown by a facet coverage rate of only 20%.

Dataset

ClariQ-FKw: has rows of (q,f,cq) tuples.

  • q is an open-domain search query, f is a search facet, cq is a human-generated clarifying question
  • The facet in ClariQ is in the form of a faceted search query. ClariQ-FKw extracts the keyword of the faceted query as its facet column and samples a dataset with 1756 training examples and 425 evaluation examples

Our proposed system does not access the training set while the other supervised learning systems can access the training set for finetuning.

Result

Auto-metric evaluation

[Table: auto-metric evaluation results]

RQ1: How well can we do in zero-shot clarifying question generation with existing baselines?

  • all these baselines (the first four rows) struggle to produce any reasonable generations, except for Template-0 (but its question body is not good)
  • we find that existing zero-shot GPT-2-based approaches cannot solve the clarifying question generation task effectively.

RQ2: the effectiveness of facet information for facet-specific clarifying question generation

  • compare our proposed zero-shot facet-constrained (ZSFC) methods with a facet-free variation of ZSFC named Subject-constrained, which uses the subject of the query as the constraint.
  • our study shows that adequate use of facet information can significantly improve clarifying question generation quality

RQ3: whether our proposed zero-shot approach can perform as well as or even better than existing facet-driven baselines

  • From both tables, we see that our zero-shot facet-driven approaches are consistently better than the finetuning baselines

Note: Template-facet rewriting is a simple yet strong baseline; both finetuning-based methods actually perform worse than it.

Human evaluation

[Figure: human evaluation results]

Knowledge

Approaches to clarifying query ambiguity can be roughly divided into three categories:

  • Query Reformulation: iteratively refine the query
    • is more efficient in context-rich situations
  • Query Suggestion: offer related queries to the user
    • is good for leading search topics and discovering user needs
  • Asking Clarifying Questions: proactively engages users to provide additional context.
    • can be uniquely helpful for clarifying ambiguous queries when no context is available.

