This year marks the 24th annual Empirical Methods in Natural Language Processing (EMNLP) conference, held for the first time ever in a fully virtual format. EMNLP is a leading conference in Natural Language Processing, covering a broad spectrum of research areas concerned with computational approaches to natural language.
Salesforce Research is proud to announce a total of 16 accepted papers from our team of leading research scientists:
Salesforce Research offers researchers and PhD students the opportunity to work with large-scale industrial data that can lead to interesting research not usually possible in an academic setting, the freedom to continue their own research focus and connect it with real CRM applications, the flexibility to choose research topics that fit their interests, and the chance to attend virtual conferences with our researchers to present accepted papers.
Nine of the contributing authors are past or current Salesforce Research interns, highlighting the kind of impactful projects an internship at Salesforce Research offers PhD candidates. Those interns are: Chien-Sheng Wu (5 papers: first author on 3, contributing author on 2), Samson Tan (first author), Weishi Wang (first author), Yue Wang (first author), Yifan Gao (first author), Hung Le (first author on 2 papers), Jianguo Zhang (first author), Wojciech Kryscinski (first author), and Congying Xia (first author).
“It is a wonderful journey to work on challenging problems of dialog systems. I’ve learned a lot working with the Salesforce Research Team and I enjoy the insightful meetings with our researchers. I’m excited that our work has been published at conferences, and it’s an amazing feeling seeing the few-shot intent detection model that I worked on, applied to our own internal systems. To see a tangible impact on the company has made me really proud to be part of this program and this team.” - Jianguo Zhang, Salesforce Research Intern
The accepted papers below will be presented by members of our team through prerecorded talks and slides during the main conference, November 16th-20th, 2020. Additionally, for the first time ever, we are opening up 2 of our EMNLP sessions to interested students outside of the conference. Questions and comments about this work can be discussed on the online EMNLP forum, through the Twitter accounts of the authors, which we will link below, or by emailing us at firstname.lastname@example.org. We look forward to sharing some of our exciting new research with you next week!
Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start
Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher and Caiming Xiong
TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue
Chien-Sheng Wu, Steven C.H. Hoi, Richard Socher and Caiming Xiong
Evaluating the Factual Consistency of Abstractive Text Summarization
Wojciech Kryściński, Bryan McCann, Caiming Xiong, and Richard Socher
Probing Task-Oriented Dialogue Representation from Language Models
Chien-Sheng Wu and Caiming Xiong
Mind Your Inflections! Improving NLP for Non-Standard Englishes with Base-Inflection Encoding
Samson Tan, Shafiq Joty, Lav R. Varshney, and Min-Yen Kan
Response Selection for Multi-Party Conversations with Dynamic Topic Tracking
Weishi Wang, Shafiq Joty, and Steven C.H. Hoi
VD-BERT: A Unified Vision and Dialog Transformer with BERT
Yue Wang (Research Intern), Shafiq Joty, Michael R. Lyu, Irwin King, Caiming Xiong, and Steven C.H. Hoi
Discern: Discourse-Aware Entailment Reasoning Network for Conversational Machine Reading
Yifan Gao (Research Intern), Chien-Sheng Wu, Jingjing Li, Shafiq Joty, Steven C.H. Hoi, Caiming Xiong, Irwin King, and Michael R. Lyu
BiST: Bi-directional Spatio-Temporal Reasoning for Video-Grounded Dialogues
Hung Le (Research Intern), Doyen Sahoo, Nancy Chen and Steven C.H. Hoi
UniConv: A Unified Conversational Neural Architecture for Multi-domain Task-oriented Dialogues
Hung Le (Research Intern), Doyen Sahoo, Chenghao Liu, Nancy Chen and Steven C.H. Hoi
Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference
Jianguo Zhang (Research Intern), Kazuma Hashimoto, Wenhao Liu, Chien-Sheng Wu, Yao Wan, Philip Yu, Richard Socher and Caiming Xiong
The Thieves on Sesame Street are Polyglots: Extracting Multilingual Models from Monolingual APIs
Nitish Shirish Keskar, Bryan McCann, Caiming Xiong and Richard Socher
Simple Data Augmentation with the Mask Token Improves Domain Adaptation for Dialog Act Tagging
Semih Yavuz, Kazuma Hashimoto, Wenhao Liu, Nitish Shirish Keskar, Richard Socher and Caiming Xiong
Bridging Textual and Tabular Data for Cross-Domain Text-to-SQL Semantic Parsing
Victoria Lin, Richard Socher and Caiming Xiong
Improving Limited Labeled Dialogue State Tracking with Self-Supervision
Chien-Sheng Wu, Steven C.H. Hoi, and Caiming Xiong
Composed Variational Natural Language Generation for Few-shot Intents
Congying Xia, Caiming Xiong, Philip Yu and Richard Socher
IntEx-SemPar: 1st Workshop on Interactive and Executable Semantic Parsing
Ben Bogin, Srinivasan Iyer, Victoria Lin, Alane Suhr, Panupong Pasupat, Pengcheng Yin, Tao Yu, Rui Zhang, Victor Zhong, Dragomir Radev, Caiming Xiong
If you are interested in learning more about our research program, our other published work, or full-time and internship opportunities, please head to our website.