Salesforce Research at ICLR 2021

This year marks the 9th annual International Conference on Learning Representations (ICLR), taking place in a fully virtual format from May 4th through May 8th, 2021. ICLR is a premier academic conference in the field of representation learning, generally referred to as deep learning or feature learning. It is a top-tier venue for presenting and publishing cutting-edge research on all aspects of representation learning used in closely related areas such as artificial intelligence, statistics, and data science, as well as in important application areas such as machine vision, computational biology, speech recognition, and robotics (ICLR 2021 Website).

Salesforce Research offers researchers and PhD students the opportunity to work with large-scale industrial data, usually not accessible in an academic setting, that leads to state-of-the-art research; the freedom to continue their own research focus and connect it with real CRM applications; the flexibility to choose research topics that fit their interests; and the chance to attend conferences with our researchers to showcase their work.

Salesforce Research is proud to announce a total of 7 accepted papers from our team of leading researchers.

Three of the contributing authors are past Salesforce Research interns, showing just how impactful an internship on the Salesforce Research team can be for PhD candidates. These interns are Bailin Wang, Junwen Bai, and Shiyang Li. Hear what our interns have to say about our program and learn more here.

The accepted papers below will be presented by members of our team through prerecorded talks and slides during the main conference from May 4th through 8th, 2021. Our team will be hosting 7 roundtable events throughout the conference to give attendees a chance to meet our recruiters and researchers in a smaller group setting. We’re excited to continue our commitment to diversity through our ongoing partnerships with the LatinX in AI, Black in AI, and Women in Machine Learning organizations. We will be hosting events with these organizations throughout the week, and will continue to partner with them through the rest of the 2021 conference cycle via the LatinX in AI Mentorship Program and the Black in AI Graduate Mentorship Program.

We look forward to sharing some of our exciting new research with you next week!

Our Publications at ICLR 2021

BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, Lav R. Varshney, Caiming Xiong, Richard Socher, Nazneen Fatema Rajani
We show how language models trained on protein sequences in an unsupervised setting "rediscover" fundamental biological properties.

CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers
Shiyang Li, Semih Yavuz, Kazuma Hashimoto, Jia Li, Tong Niu, Nazneen Rajani, Xifeng Yan, Yingbo Zhou, Caiming Xiong
We propose a controllable, principled, and model-agnostic approach to generate novel conversation scenarios for evaluating and improving the generalization of task-oriented dialogue systems.

GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing
Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, Richard Socher, Caiming Xiong
GraPPa is a state-of-the-art language model for table-and-language understanding, pre-trained with human-annotated and grammar-augmented SQL data.

Mirostat: A neural text decoding algorithm that directly controls perplexity
Sourya Basu, Govardana Sachithanandam Ramachandran, Nitish Shirish Keskar, Lav Varshney
We provide a new text decoding algorithm that directly controls perplexity and hence several important attributes of text generated from neural language models.

MoPro: Webly Supervised Learning with Momentum Prototypes
Junnan Li, Caiming Xiong, Steven Hoi
MoPro is a new algorithm that effectively trains deep networks using noisy data freely available on the Web.

Prototypical Contrastive Learning of Unsupervised Representations
Junnan Li, Pan Zhou, Caiming Xiong, Steven Hoi
We propose a new self-supervised learning method that bridges contrastive learning and clustering into an expectation-maximization framework.

Representation Learning for Sequence Data with Deep Autoencoding Predictive Components
Junwen Bai, Weiran Wang, Yingbo Zhou, Caiming Xiong
We propose a new self-supervised learning method that maximizes the predictive information of latent feature representations for sequence data.

If you are interested in learning more about our research program, other published works, and full-time and internship opportunities, please head to our website.