Salesforce Research at NeurIPS 2020

This year marks the 34th annual Conference on Neural Information Processing Systems (NeurIPS), reimagined for the first time in a fully virtual format. NeurIPS is a leading conference on machine learning and neural information processing systems in their biological, technological, mathematical, and theoretical aspects. Neural information processing is a field that benefits from a combined view of the biological, physical, mathematical, and computational sciences.

Salesforce Research is proud to announce a total of 5 accepted papers at the main conference, comprising 1 oral paper, 1 spotlight paper, and 3 poster papers, along with 8 accepted workshop papers from our team of leading researchers.

Salesforce Research offers researchers and PhD students the opportunity to work with large-scale industrial data that enables state-of-the-art research usually not accessible in an academic setting; the freedom to continue their own research focus and connect it with real CRM applications; the flexibility to choose research topics that fit their interests; and the opportunity to attend conferences with our researchers to showcase accepted papers.

Three of the contributing authors are past or current Salesforce Research interns, shedding light on the kind of impactful projects an internship at Salesforce Research offers PhD candidates. Those interns are Chien-Sheng Wu, Junwen Bai, and Minshuo Chen. Hear what our interns have to say about our program and learn more here.

[Quote card: Jianguo Zhang, Salesforce Research intern]

The accepted papers below will be presented by members of our team through prerecorded talks and slides during the main conference, December 6th-12th, 2020. Additionally, our team will be hosting 11 roundtable events throughout the conference to give attendees a chance to meet our recruiters and researchers in a smaller group setting, as well as a fun AI Trivia Challenge event on December 8th, 2020.

We’re also excited to announce our continued commitment to diversity through new partnerships with the LatinX in AI Workshop and the Black in AI Workshop on December 7th, 2020. We’ll be partnering with these organizations through the 2020/2021 conference cycle in the LatinX in AI Mentorship Program and the Black in AI Graduate Mentorship Program.


Questions and comments about this work can be shared on the online NeurIPS forum, on Twitter, or by emailing us at salesforceresearch@salesforce.com. We look forward to sharing some of our exciting new research with you next week!

Our Publications at NeurIPS 2020

ACCEPTED PAPERS

Theory-Inspired Path-Regularized Differential Network Architecture Search
Pan Zhou, Caiming Xiong, Richard Socher and Steven Hoi
Oral paper, NeurIPS 2020

A Simple Language Model for Task-Oriented Dialogue
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz and Richard Socher
Spotlight paper, NeurIPS 2020

Towards Theoretically Understanding Why SGD Generalizes Better Than Adam in Deep Learning
Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Steven Hoi and Weinan E
Poster paper, NeurIPS 2020

Online Structured Meta-learning
Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui (Jessie) Li, Richard Socher and Caiming Xiong
Poster paper, NeurIPS 2020

Towards Understanding Hierarchical Learning: Benefits of Neural Representations
Minshuo Chen, Yu Bai, Jason Lee, Tuo Zhao, Huan Wang, Caiming Xiong and Richard Socher
Poster paper, NeurIPS 2020

ACCEPTED WORKSHOP PAPERS

How Important is the Train-Validation Split in Meta-Learning?
Yu Bai, Minshuo Chen, Pan Zhou, Tuo Zhao, Jason D. Lee, Sham Kakade, Huan Wang, Caiming Xiong
NeurIPS 2020 Workshop on Meta-Learning

Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML
Pan Zhou, Yingtian Zou, Xiaotong Yuan, Jiashi Feng, Caiming Xiong, Steven C.H. Hoi
NeurIPS 2020 Workshop on Meta-Learning

Representation Learning for Sequence Data with Deep Autoencoding Predictive Components
Junwen Bai, Weiran Wang, Yingbo Zhou, Caiming Xiong
NeurIPS 2020 Workshop on Self-Supervised Learning for Speech and Audio Processing

DIME: An Information-Theoretic Difficulty Measure for AI Datasets
Peiliang Zhang, Huan Wang, Nikhil Naik, Caiming Xiong, Richard Socher
NeurIPS 2020 Workshop on Deep Learning through Information Geometry

Prototypical Contrastive Learning of Unsupervised Representations
Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, Steven C.H. Hoi
NeurIPS 2020 Workshop on Self-Supervised Learning: Theory and Practice

Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization
Stanislaw Jastrzebski, Devansh Arpit, Oliver Åstrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, Krzysztof Geras
NeurIPS 2020 Workshop on Optimization for Machine Learning

Copyspace: Where to Write on Images?
Jessica Lundin, Michael Sollami, Alan Ross, Brian Lonsdorf, David Woodward, Owen Schoppe, Sönke Rohde
NeurIPS 2020 Workshop on Machine Learning for Creativity and Design

SOrT-ing VQA Models: Contrastive Gradient Learning for Improved Consistency
Sameer Dharur, Purva Tendulkar, Dhruv Batra, Devi Parikh, Ramprasaath R. Selvaraju
NeurIPS 2020 Workshop on Interpretable Inductive Biases and Physically Structured Learning

HOSTED WORKSHOP

ML for Economic Policy Workshop
Friday, December 11, 2020
Start Time: 12:00 PM EST

If you are interested in learning more about our research program, other published works, and full-time and internship opportunities, please head to our website.