Salesforce Research at NAACL 2022


Conference Overview

This weekend marks the start of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). NAACL provides a regional focus for members of the Association for Computational Linguistics (ACL) in North America. NAACL organizes annual conferences, promotes cooperation and information exchange among related scientific and professional societies, and encourages/facilitates ACL membership by people and institutions in the Americas. (NAACL Website)

NAACL 2022 will take place in a hybrid format, hosting attendees both virtually and in person in Seattle, Washington, from July 10th to 15th, 2022.

Salesforce AI Research Publications at NAACL 2022

Salesforce Research is pleased to announce a total of nine accepted papers from our team of leading researchers.

Our accepted authors will present their work at NAACL through pre-recorded talks and in-person poster sessions during the main conference. We look forward to sharing some of our exciting new research, whether virtually or face-to-face in Seattle!

Salesforce Researchers are shown in bold in the publication descriptions below.

A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis

**Ehsan Hosseini-Asl**, **Wenhao Liu**, **Caiming Xiong**

We propose reformulating aspect-based sentiment analysis as a language generation task and using a generative autoregressive language model. Our results show that this approach improves few-shot performance by a large margin. Moreover, the reformulation significantly reduces performance variance compared to prior state-of-the-art methods.
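To make the idea concrete, here is a minimal sketch of casting ABSA as conditional text generation with an off-the-shelf autoregressive language model. The prompt template and target format below are illustrative assumptions rather than the paper's exact serialization; in practice, the model would be fine-tuned on labeled examples rendered in this format.

```python
# Minimal sketch: aspect-based sentiment analysis as text generation.
# The prompt/target format is an illustrative assumption, not the paper's
# exact serialization; a real setup fine-tunes the LM on labeled examples
# rendered in this format before generating.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

review = "The battery life is great, but the screen is too dim."
# Instead of predicting (aspect, polarity) labels with a classifier head,
# the model generates them as plain text, e.g.
# "battery life: positive; screen: negative".
prompt = f"Review: {review}\nAspect sentiments:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs, max_new_tokens=30, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```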

Quiz Design Task: Helping Teachers Create Quizzes With Automated Question Generation

**Philippe Laban**, **Jason Wu**, **Lidiya Murakhovs'ka**, **Wenhao Liu**, **Caiming Xiong**

We explore whether recent progress in automatic question generation can help teachers design reading comprehension quizzes. Our study finds that there has been real progress, with recent models generating acceptable quiz questions around 68% of the time, but also that models still struggle to generate questions that fit the broader context, even though they have become better at producing fluent and answerable questions.

QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization

**Alexander Fabbri**, **Jason Wu**, **Wenhao Liu**, **Caiming Xiong**

In this work, we analyze entailment- and question answering (QA)-based metrics for factual consistency in summarization. We propose an optimized metric called QAFactEval that achieves state-of-the-art performance on the SummaC benchmark and can be combined with an entailment metric for a further performance boost.
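As a rough illustration of how a QA-based consistency metric of this kind is structured, here is a schematic sketch; the helper functions are hypothetical placeholders, not the released QAFactEval implementation.

```python
# Schematic sketch of a QA-based factual-consistency pipeline.
# extract_answer_candidates, generate_question, answer_question, and
# answer_overlap are hypothetical placeholders for the learned components.
def qa_consistency_score(source: str, summary: str) -> float:
    answers = extract_answer_candidates(summary)  # e.g. noun phrases, entities
    if not answers:
        return 0.0
    score = 0.0
    for ans in answers:
        question = generate_question(summary, ans)     # QG model: (summary, answer) -> question
        predicted = answer_question(source, question)  # QA model reads the *source* document
        score += answer_overlap(ans, predicted)        # lexical or learned answer comparison
    return score / len(answers)
```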

MixQG: Neural Question Generation with Mixed Answer Types

**Lidiya Murakhovs’ka**, **Jason Wu**, **Philippe Laban**, **Tong Niu**, **Wenhao Liu**, **Caiming Xiong**

In this work, we present MixQG, a question generation model pre-trained on a collection of nine QA datasets with a mix of answer types. Our experiments show that the resulting model is a strong starting point for further fine-tuning, achieving state-of-the-art results on target datasets under commonly used similarity metrics as well as in our human evaluation.
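MixQG checkpoints are publicly available on the Hugging Face Hub; the snippet below is a minimal usage sketch. The "answer \n context" input convention is our reading of the model card, so treat the exact separator as an assumption.

```python
# Minimal usage sketch for a released MixQG checkpoint.
# The "answer \n context" input convention (a literal backslash-n separator)
# is assumed from the model card; verify before relying on it.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/mixqg-base")
model = AutoModelForSeq2SeqLM.from_pretrained("Salesforce/mixqg-base")

context = "Robert Boyle published The Sceptical Chymist in 1661."
answer = "1661"  # answers may be short spans, yes/no, or long abstractive text
inputs = tokenizer(f"{answer} \\n {context}", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```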

Exploring Neural Models for Query-Focused Summarization

**Jesse Vig**, **Alexander Fabbri**, **Wojciech Kryściński**, **Jason Wu**, **Wenhao Liu**

We explore query-focused summarization models, which summarize a document with respect to a particular question. We compare existing models and introduce new modeling extensions that achieve state-of-the-art performance on the QMSum dataset, by a margin of up to 3.38 ROUGE-1 points when combined with transfer learning strategies.

Improving Faithfulness in Abstractive Summarization with Entity Coverage Control

Haopeng Zhang, **Semih Yavuz**, **Wojciech Kryściński**, **Kazuma Hashimoto**, **Yingbo Zhou**

We propose a method to remedy entity-level extrinsic hallucinations with Entity Coverage Control (ECC). We first compute entity coverage precision and prepend the corresponding control code to each training example, which implicitly guides the model toward faithful content during training. We further extend our method via intermediate fine-tuning on large but noisy data extracted from Wikipedia to unlock zero-shot summarization.
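Here is a simplified sketch of the control-code idea; the bucket thresholds, token names, and extract_entities helper are illustrative assumptions, not the paper's exact settings.

```python
# Simplified sketch of Entity Coverage Control (ECC).
# extract_entities is a hypothetical placeholder (e.g. an NER tagger), and
# the coverage buckets / control tokens are illustrative, not the paper's.
def ecc_training_input(source: str, summary: str) -> str:
    src_ents = set(extract_entities(source))
    sum_ents = set(extract_entities(summary))
    # Entity coverage precision: fraction of summary entities found in the source.
    precision = len(sum_ents & src_ents) / max(len(sum_ents), 1)
    if precision >= 0.99:
        code = "<ent-high>"
    elif precision >= 0.5:
        code = "<ent-mid>"
    else:
        code = "<ent-low>"
    # The control code is prepended to the source; at inference time one
    # always prepends the high-coverage code to steer toward faithfulness.
    return f"{code} {source}"
```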

Multimodal Dialogue State Tracking

**Hung Le**, Nancy F. Chen, **Steven Hoi**

We introduce a new machine learning task that tracks the information states of visual objects in the context of video-grounded dialogues. We design this task by building on conventional dialogue state tracking and propose a strong Transformer-based baseline model with self-supervised learning objectives.
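To give a feel for what such a state might look like, here is a purely illustrative example; the slot names and structure are our assumptions, not the paper's annotation schema.

```python
# Hypothetical multimodal dialogue state after a few turns: as in textual
# DST, the state is slot-value pairs, but keyed by visual objects in the video.
dialogue_state = {
    "object_1": {"type": "cat", "color": "black", "action": "jumping"},
    "object_2": {"type": "sofa", "color": "red", "action": None},
}
# After each user turn the tracker updates the relevant slots, e.g.
# resolving "it" in "what is it doing now?" to object_1 and revising "action".
```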

VGNMN: Video-grounded Neural Module Networks for Video-Grounded Dialogue Systems

**Hung Le**, Nancy F. Chen, **Steven Hoi**

We propose an interpretable response generation approach for video-grounded dialogues. We first decompose dialogue utterances into modular reasoning steps over both the dialogue and video contexts, then use these steps to instantiate neural module components, yielding more interpretable intermediate outputs and more meaningful responses.

Retrieval-Augmented Multilingual Keyphrase Generation with Retriever-Generator Iterative Training

Yifan Gao, Qingyu Yin, Zheng Li, **Rui Meng**, Tong Zhao, Bing Yin, Irwin King, Michael Lyu

In this study, we call attention to a new setting: multilingual keyphrase generation. We investigate keyphrase generation in low-resource languages and contribute two new datasets, EcommerceMKP and AcademicMKP. Technically, we propose a retrieval-augmented method that mitigates the data shortage in non-English languages and delivers significant improvements.
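The retriever-generator iterative training named in the title can be sketched at a high level as follows; every name here is a hypothetical placeholder rather than an API from the paper's released code.

```python
# Schematic sketch of retriever-generator iterative training; every name
# (init_retriever, top_k, train_from_feedback, ...) is a hypothetical
# placeholder, not the paper's released implementation.
def iterative_training(train_inputs, train_targets, num_rounds=3):
    retriever, generator = init_retriever(), init_generator()
    for _ in range(num_rounds):
        # Retrieve keyphrase-annotated neighbors (e.g. English examples)
        # for each low-resource input and condition the generator on them.
        augmented = [(x, retriever.top_k(x)) for x in train_inputs]
        generator.train(augmented, train_targets)
        # Feed the generator's signal back so the retriever learns to
        # surface examples that actually help keyphrase generation.
        retriever.train_from_feedback(generator, train_inputs, train_targets)
    return retriever, generator
```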