Salesforce AI Research at NeurIPS 2022

4 min read

Conference Overview

Next week, the Thirty-sixth annual Conference on Neural Information Processing Systems (NeurIPS) will be held in New Orleans, Louisiana, from Monday, November 28th through Friday, December 9th. NeurIPS will include invited talks, demonstrations, and oral and poster presentations of accepted papers. Alongside the conference are a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for the exchange of ideas (NeurIPS website). NeurIPS 2022 will begin with an in-person component at the New Orleans Convention Center (Nov. 28th - Dec. 3rd), followed by a virtual component during the second week (Dec. 5th - 9th).

Salesforce AI Research Sponsorship and Events

Conference Sponsorship: Salesforce AI Research is proud to support NeurIPS 2022 as a Diamond-level sponsor. Our team of researchers and recruiters will be showcasing demos, discussing career opportunities, and chatting with attendees at our booth (#503) all week. Our booth hours are as follows:

Monday, November 28 | 9:00am - 5:00pm, Welcome Reception 6:00pm - 8:00pm

Tuesday, November 29 | 9:00am - 5:00pm

Wednesday, November 30 | 9:00am - 5:00pm

Thursday, December 1 | 9:00am - 1:30pm

LatinX in AI: We’re excited to continue our partnership with the LatinX in AI Community. We will be participating in the LatinX in AI (LXAI) Workshop at NeurIPS on Monday, November 28th.

Networking Event: Salesforce AI Research will host an invite-only Networking and Trivia event on Wednesday, November 30th from 5:00pm - 8:00pm at the Bower Bar in New Orleans.

Salesforce AI Research Publications at NeurIPS 2022

Salesforce Research is pleased to announce a total of seven accepted papers from our team of leading researchers: one oral and six posters.

Our accepted authors will present their work throughout the main conference, with specific times, dates, and locations indicated below. We look forward to sharing some of our exciting new research with you!

Salesforce Researchers are shown in bold in the publication descriptions below.

Accepted Oral Paper

Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent

Yu Bai, Chi Jin, Song Mei, Ziang Song, Tiancheng Yu

We design the first line of algorithms for minimizing trigger regret and learning correlated equilibria in Extensive-Form Games against adversarial opponents under bandit feedback. Our algorithms arise from connections to normal-form games, yet are efficiently implementable and achieve sharper rates than naive algorithms built from those connections.

Thu 1 Dec 4:30 p.m. CST — 6 p.m. CST Hall J #538

Accepted Poster Papers

CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning

Hung Le, Yue Wang, Akhilesh Gotmare, Silvio Savarese, Steven Hoi

CodeRL is a groundbreaking new framework for program synthesis that holistically integrates pretrained language models with deep reinforcement learning. By using unit test feedback during both model training and inference, and building on an improved CodeT5 model, CodeRL achieves SOTA results on competition-level programming tasks.

Tue 29 Nov 11:30 a.m. CST — 1 p.m. CST Hall J #138
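To make the CodeRL idea above concrete, here is a minimal, hypothetical sketch (not the CodeRL implementation): candidate programs are sampled from a pretrained code LM, each is scored by running its unit tests, and the scores serve as rewards in a policy-gradient update. The `model.sample` and `model.log_prob` interfaces and the reward levels below are illustrative assumptions.

```python
# Hypothetical sketch of unit-test feedback as a reward signal for a code LM.
import subprocess
import tempfile


def unit_test_reward(program: str, tests: list[str]) -> float:
    """Return -1.0 if the program crashes or hangs, -0.3 if a test assertion
    fails, and +1.0 if every test passes (a simplified reward scheme)."""
    for test in tests:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(program + "\n" + test)
            path = f.name
        try:
            result = subprocess.run(["python", path], capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            return -1.0  # non-terminating program
        if result.returncode != 0:
            # Distinguish a failing test from a hard runtime/compile error.
            return -0.3 if b"AssertionError" in result.stderr else -1.0
    return 1.0


def reinforce_step(model, prompt: str, tests: list[str], n_samples: int = 8):
    """One REINFORCE-style update: weight each sampled program's log-likelihood
    by its test-based reward, minus a mean baseline for variance reduction."""
    programs = model.sample(prompt, n=n_samples)              # assumed interface
    rewards = [unit_test_reward(p, tests) for p in programs]
    baseline = sum(rewards) / len(rewards)
    loss = sum(
        -(r - baseline) * model.log_prob(prompt, p)           # assumed interface
        for p, r in zip(programs, rewards)
    ) / n_samples
    loss.backward()                                           # caller runs the optimizer step
```

The actual method is richer (for example, it pairs the CodeT5 actor with a learned critic), so please consult the paper for details.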

Ensemble of Averages: Improving model selection and boosting performance in domain generalization

Devansh Arpit, Huan Wang, Yingbo Zhou, Caiming Xiong

A simple, hyperparameter-free strategy of taking a moving average of model parameters during training and then ensembling the averaged models achieves SOTA on domain generalization benchmarks, and can be explained using the bias-variance trade-off.

Tue 29 Nov 4:30 p.m. CST — 6 p.m. CST Hall J #732
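Because the recipe described above is so simple, here is an illustrative PyTorch sketch (not the authors' code, and omitting details such as when averaging begins): a running average of model weights is maintained during training, and the averaged models from several independently trained runs are ensembled at test time.

```python
# Illustrative sketch of the "ensemble of averages" recipe.
import copy

import torch


class MovingAverageModel:
    """Keeps a simple moving average of model parameters:
    avg <- (n / (n + 1)) * avg + (1 / (n + 1)) * current."""

    def __init__(self, model: torch.nn.Module):
        self.avg_model = copy.deepcopy(model)
        self.n = 1  # the initial weights count as the first sample

    @torch.no_grad()
    def update(self, model: torch.nn.Module):
        for p_avg, p in zip(self.avg_model.parameters(), model.parameters()):
            p_avg.mul_(self.n / (self.n + 1)).add_(p, alpha=1.0 / (self.n + 1))
        self.n += 1


@torch.no_grad()
def ensemble_predict(avg_models: list, x: torch.Tensor) -> torch.Tensor:
    """Average the predicted class probabilities of the moving-average models
    from independent runs -- the 'ensemble of averages'."""
    probs = [torch.softmax(m.avg_model(x), dim=-1) for m in avg_models]
    return torch.stack(probs).mean(dim=0)
```

In practice, `update` would be called after each optimizer step, and `ensemble_predict` would combine the averaged models from runs with different seeds or hyperparameter settings.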

Identifying Good Directions to Escape the NTK Regime and Efficiently Learn Low-degree Plus Sparse Polynomials

Eshaan Nichani, Yu Bai, Jason D. Lee

We show that neural networks trained with gradient descent can provably escape the Neural Tangent Kernel (NTK) regime and achieve better sample efficiency than the NTK for learning certain natural functions, by leveraging properties of the NTK spectrum and newly designed regularizers.

Thu 1 Dec 4:30 p.m. CST — 6 p.m. CST Hall J #921

Policy Optimization for Markov Games: Unified Framework and Faster Convergence

Runyu Zhang, Qinghua Liu, Huan Wang, Caiming Xiong, Na Li, Yu Bai

We show that a natural optimistic policy optimization algorithm achieves the current best convergence rate for finding equilibria in Markov Games, in both two-player zero-sum and multi-player general-sum settings. We also provide a framework that unifies most existing policy optimization algorithms and their analyses.

Wed 30 Nov 11:30 a.m. CST — 1 p.m. CST Hall J #816

Refining Low-Resource Unsupervised Translation by Language Disentanglement of Multilingual Translation Model

Xuan-Phi Nguyen, Shafiq Joty, Wu Kui, Ai Ti Aw

We propose a four-stage refinement procedure that finetunes a multilingual unsupervised NMT model to significantly outperform the baseline and achieve state-of-the-art results on low-resource unsupervised translation tasks.

Wed 30 Nov 4:30 p.m. CST — 6 p.m. CST Hall J #626

Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games

Ziang Song, Song Mei, Yu Bai

We design new algorithms for learning correlated equilibria in Extensive-Form Games, achieving the current best convergence rate under full-information feedback and the first sample-efficient learning result under bandit feedback.

Tue 29 Nov 11:30 a.m. CST — 1 p.m. CST Hall J #824

Career Opportunities with Salesforce AI Research

2023 Research Intern - Salesforce/Tableau Research

California - Palo Alto, Washington - Seattle (Tableau), Singapore

As a research intern, you will work with a team of research scientists and engineers on a project that ideally leads to a submission to a top-tier conference.

AI Research Resident - Salesforce Research

California - Palo Alto

A 12-month research training program intended to kickstart or further one's experience in AI research. Residents will gain valuable hands-on experience in fundamental and applied AI research, working closely with Salesforce researchers. The program will kick off in August 2023.

Explore More

To learn more about these and other research projects, please visit our website at salesforceairesearch.com.