AI for Global Climate Cooperation: Salesforce Research and Mila Announce Climate Change Collaboration and Competition

TL;DR: Salesforce Research and Mila announce AI for Global Climate Cooperation, a working-group collaboration and competition to design negotiation protocols and climate agreements. We plan to coauthor a peer-reviewed scientific paper with top-performing teams; insights will be distilled into a policy brief shared with leading policymakers, informing future …

05 Aug 2022 • Stephan Zheng #AI for Global Climate Cooperation

AI Coding with CodeRL: Toward Mastering Program Synthesis with Deep Reinforcement Learning

TL;DR: CodeRL is a new framework for program synthesis that holistically integrates pretrained language models and deep reinforcement learning. By using unit-test feedback during both model training and inference, and by building on an improved CodeT5 model, CodeRL achieves state-of-the-art results on competition-level programming tasks. …
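
To give a flavor of the approach (a minimal sketch, not CodeRL's actual training code): a sampled program can be executed against the problem's unit tests, and the outcome mapped to a scalar reward that drives a policy-gradient update. The reward values below loosely follow the error categories described in the paper; the helper itself is hypothetical.

```python
import subprocess
import tempfile

def unit_test_reward(program: str, tests: str, timeout: float = 5.0) -> float:
    """Map unit-test outcomes to a scalar reward for RL fine-tuning:
    syntax error < runtime error < failing tests < all tests pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True,
                                text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return -0.6                      # treat timeouts like runtime errors
    if result.returncode == 0:
        return 1.0                       # every assertion passed
    if "SyntaxError" in result.stderr:
        return -1.0                      # program does not even parse
    if "AssertionError" in result.stderr:
        return -0.3                      # runs, but some unit tests fail
    return -0.6                          # other runtime error

# The reward then weights the log-likelihood of the sampled program
# in a policy-gradient update of the code-generation model.
```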

19 Jul 2022 • Henry Hung Le #reinforcement-learning

Salesforce Research at ICML 2022

Conference Overview This weekend will kick off the thirty-ninth International Conference on Machine Learning (ICML). The conference brings together professionals dedicated to the advancement of Machine Learning (ML), a branch of Artificial Intelligence. Participants at ICML come from many different backgrounds, including academic and industrial researchers, entrepreneurs …

17 Jul 2022 • Mia Ferrer #conferences

Salesforce Research at NAACL 2022

Conference Overview This weekend marks the start of the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). NAACL provides a regional focus for members of the Association for Computational Linguistics (ACL) in North America. NAACL organizes annual conferences, promotes cooperation and information exchange among …

10 Jul 2022 • Mia Ferrer #NAACL 2022

Salesforce Research at CVPR 2022

Conference Overview The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) is the annual conference on Computer Vision. CVPR comprises the main conference as well as workshops and courses, providing a unique learning experience and networking opportunities in the field of Computer Vision. …

20 Jun 2022 • Mia Ferrer #computer vision

TaiChi: Open Source Library for Few-Shot NLP

AUTHORS: Sharvin Shah, Jin Qu, Donald Rose TL;DR: TaiChi is an open-source library for few-shot NLP, designed for data scientists and software engineers who want to get quick results or build proof-of-concept products but don’t have much experience with few-shot learning (FSL). The library abstracts complex …

15 Jun 2022 • Jin Qu #NLP

Turbocharge Multi-Agent Reinforcement Learning with WarpDrive and PyTorch Lightning

TL;DR: WarpDrive is a flexible, lightweight, easy-to-use, end-to-end reinforcement learning (RL) framework that enables orders-of-magnitude faster training on a single GPU. PyTorch Lightning lets you modularize experimental code and build production-ready workloads quickly. Together, they can significantly accelerate multi-agent RL R&D. …
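
WarpDrive ships its own Lightning-friendly utilities; the snippet below is only a minimal, hypothetical sketch of the general pattern the post describes: wrapping a policy-gradient update inside a `LightningModule` so Lightning handles the training loop, logging, and optimization.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl

class PolicyGradientModule(pl.LightningModule):
    """Minimal sketch of a policy-gradient update wrapped in PyTorch Lightning.
    The real WarpDrive integration samples rollouts directly on the GPU; here
    each batch is assumed to hold precomputed (observations, actions, returns)."""

    def __init__(self, obs_dim: int = 8, n_actions: int = 4, lr: float = 1e-3):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )
        self.lr = lr

    def training_step(self, batch, batch_idx):
        obs, actions, returns = batch
        logits = self.policy(obs)
        log_probs = torch.log_softmax(logits, dim=-1)
        chosen = log_probs.gather(1, actions.unsqueeze(-1)).squeeze(-1)
        loss = -(chosen * returns).mean()   # REINFORCE-style objective
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```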

20 May 2022 • Sunil Srinivasa #WarpDrive

Salesforce Research at ACL 2022

Conference Overview This year marks the 60th Annual Meeting of the Association for Computational Linguistics (ACL [https://www.2022.aclweb.org/]). ACL is the premier international scientific and professional society for people working on computational problems involving human language, a field often referred to as either computational linguistics or natural language processing. …

19 May 2022 • Mia Ferrer #NLP

Science Advances Publishes AI Economist Research on Improving Tax Policies With Reinforcement Learning

TL;DR: The AI Economist, a reinforcement learning (RL) system, learns dynamic tax policies that optimize equality along with productivity in simulated economies, outperforming alternative tax systems. We have now expanded this research, which is being published in the interdisciplinary scientific journal Science Advances. …
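
For readers curious what "equality along with productivity" means concretely: the paper's social-welfare objective is, roughly, the product of an equality measure derived from the Gini index of incomes and total productivity. A small sketch of that metric (variable names mine):

```python
import numpy as np

def gini(incomes: np.ndarray) -> float:
    """Gini index of an income vector (0 = perfect equality, 1 = maximal inequality)."""
    x = np.sort(incomes.astype(float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

def equality_times_productivity(incomes: np.ndarray) -> float:
    """Social-welfare metric used (in spirit) by the AI Economist:
    equality = 1 - (n / (n - 1)) * Gini, productivity = total income."""
    n = incomes.size
    equality = 1.0 - n / (n - 1) * gini(incomes)
    productivity = float(incomes.sum())
    return equality * productivity

print(equality_times_productivity(np.array([10.0, 20.0, 30.0, 40.0])))
```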

05 May 2022 • Stephan Zheng #AI Economist

Salesforce Research at ICLR 2022

Conference Overview This year marks the Tenth International Conference on Learning Representations (ICLR [https://iclr.cc/Conferences/2022]), one of the premier academic conferences dedicated to advancing research in representation learning, a type of machine learning also referred to as feature learning or deep learning. ICLR features the latest …

25 Apr 2022 • Mia Ferrer #ICLR

Conversational AI Programming with CodeGen: Let AI Write Code For You

Links: Research Paper [https://arxiv.org/abs/2203.13474], GitHub [https://github.com/salesforce/CodeGen]. Can you imagine a machine writing an app for you, just by telling it what you want? As futuristic as this scenario sounds, it’s actually here today. Salesforce AI Research outlines conversational AI …
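
If you just want to try the released checkpoints, generation looks roughly like the sketch below. It assumes the checkpoints can be loaded through Hugging Face `transformers`; the model name and prompt are illustrative, so check the GitHub repo for the exact released weights.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Model name is an assumption; see the CodeGen GitHub repo for released checkpoints.
checkpoint = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# A natural-language "turn" of the conversation, given as a comment prompt.
prompt = "# write a function that returns the n-th Fibonacci number\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True,
                         temperature=0.2, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```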

29 Mar 2022 • Erik Nijkamp #conversational AI

Embracing Ethical AI at NeurIPS 2021

December 21, 2021 The leading AI research conference, NeurIPS 2021, has recently wrapped up, spanning seven very full days, 2,344 accepted papers, eight invited talks, ten tutorials, and nearly 70 workshops. Though there was diverse and innovative thought leadership on display, I found myself drawn to the particular topics …

21 Dec 2021 • Anna Bethke #AI-fairness

CodeT5: The Code-aware Encoder-Decoder based Pre-trained Programming Language Models

TL;DR: Introducing CodeT5, the first code-aware, encoder-decoder-based pre-trained programming language model, which enables a wide range of code intelligence applications, including code understanding and generation tasks. CodeT5 achieves state-of-the-art performance on 14 sub-tasks in the CodeXGLUE code intelligence benchmark. Given the goal of improving software development productivity with …

03 Sep 2021 • Yue Wang #code-intelligence

Data-Driven, Interpretable, and Robust Policy Design using the AI Economist

This blog accompanies the interactive demo [https://einstein.ai/the-ai-economist/ai-policy-foundation-and-covid-case-study] and paper [https://arxiv.org/abs/2108.02904]! Dive deeper into the underlying simulation code [https://www.github.com/salesforce/ai-economist] and simulation card [https://github.com/salesforce/ai-economist/blob/master/COVID-19_Simulation-Card.pdf]. …

09 Aug 2021 • Stephan Zheng

"The Triangle of Trust in Conversational Ethics and Design: Where Bots, Language and AI Intersect" Workshop Summary

August 5, 2021 In June 2021, four of us from the Salesforce AI Ethics and Conversational Design teams collaborated with the Montreal AI Ethics Institute [https://montrealethics.ai/] (MAIEI) to facilitate a workshop on the responsible creation and implementation of chatbots and conversational assistants. Connor Wright …

05 Aug 2021 • Yoav Schlesinger #ethics

Learning without Labels

With data rapidly being generated by millions of people, it's not feasible to label all of it. Learn about recent advancements in ML that make it possible to train vision models on unlabeled data using self-supervised learning.
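
One representative self-supervised objective covered by this line of work is contrastive learning over two augmented views of each image. A minimal SimCLR-style NT-Xent loss (my own sketch, not tied to any particular Salesforce model) looks like this:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """SimCLR-style contrastive loss between two augmented views of the same
    batch of images (z1, z2: [batch, dim] projection vectors)."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # [2B, d]
    sim = z @ z.t() / temperature                            # cosine similarities
    sim.fill_diagonal_(float("-inf"))                        # mask self-similarity
    # positives: the i-th view in z1 pairs with the i-th view in z2, and vice versa
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)])
    return F.cross_entropy(sim, targets)

loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
```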

21 Jun 2021 • Michael Sollami #deeplearning

Slack your way to QA - How past conversations can answer future questions.

How many emails and work-related conversations do you have every day? The average office worker receives about 121 emails [https://www.campaignmonitor.com/blog/email-marketing/2019/05/shocking-truth-about-how-many-emails-sent/] daily and countless messages on platforms such as Slack [https://slack.com/], Teams [https://www.microsoft.com/en-us/microsoft-teams/group-chat-software], or iMessage …
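
The premise, answering new questions from past conversations, can be illustrated with a simple retrieval sketch: embed the message history, then return the message most similar to the incoming question. The encoder and library below are illustrative choices, not the system described in the post.

```python
from sentence_transformers import SentenceTransformer, util

# Encoder name is an illustrative choice; any sentence encoder works for the sketch.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

history = [
    "The staging deploy is at 3pm, use the release/2.1 branch.",
    "VPN issues are usually fixed by re-enrolling your device.",
]
question = "Which branch do we deploy to staging?"

# Rank past messages by cosine similarity to the new question.
scores = util.cos_sim(encoder.encode(question, convert_to_tensor=True),
                      encoder.encode(history, convert_to_tensor=True))
print(history[int(scores.argmax())])   # the past message most likely to answer it
```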

07 Jun 2021 • Chien-Sheng Wu

Salesforce Research at ICLR 2021

This year marks the 9th International Conference on Learning Representations (ICLR), taking place in a fully virtual format from May 4th through May 8th, 2021. ICLR is a premier academic conference in the field of representation learning, generally referred to as deep learning or feature learning. …

26 Apr 2021 • Mia Ferrer #ICLR

When are Neural Networks more powerful than Neural Tangent Kernels?

The empirical success of deep learning has posed significant challenges to machine learning theory: Why can we efficiently train neural networks with gradient descent despite the highly non-convex optimization landscape? Why do over-parametrized networks generalize well? The recently proposed Neural Tangent Kernel (NTK) theory offers a powerful framework for understanding …
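
For reference, the object at the center of this theory is the kernel of parameter gradients; in standard notation (mine, not necessarily the paper's):

```latex
% Neural Tangent Kernel of a network f(x; \theta) at parameters \theta:
K_{\theta}(x, x') = \left\langle \nabla_{\theta} f(x; \theta),\; \nabla_{\theta} f(x'; \theta) \right\rangle
% In the NTK regime (very wide layers, small learning rate), gradient-descent
% training of f behaves like kernel regression with the fixed kernel K_{\theta_0}.
```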

29 Mar 2021 • Yu Bai #deep learning theory

Applying AI Ethics Research in Practice

February 2020 Summary from FAccT 2020 CRAFT Session: AI Ethics practitioners in industry look to researchers for insights on how to best identify and mitigate harmful bias in their organization’s AI solutions and create more fair or equitable outcomes. However, it can be a challenge to apply those …

03 Mar 2021 • Kathy Baxter #ethics

Salesforce Research at NeurIPS 2020

This year marks the 34th annual conference on Neural Information Processing Systems (NeurIPS [https://neurips.cc/]), reimagined for the first time ever in a fully virtual format. NeurIPS is a leading conference in the area of machine learning and neural information processing systems in their biological, technological, mathematical, and theoretical aspects. …

30 Nov 2020 • Denna Mafie

CoMatch: Advancing Semi-supervised Learning with Contrastive Graph Regularization

> TL;DR: We propose a new semi-supervised learning method that achieves state-of-the-art performance by learning jointly-evolved class probabilities and image representations. What are the existing semi-supervised learning methods? Semi-supervised learning aims to leverage a small amount of labeled data together with a large amount of unlabeled data. As a long-standing and widely studied topic …
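
As background for how such methods use unlabeled data: the standard pseudo-labeling baseline keeps only confident predictions on a weakly augmented view and uses them as targets for a strongly augmented view. The sketch below shows that baseline only; CoMatch additionally smooths the pseudo-labels with a similarity graph over learned image embeddings.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits_weak, logits_strong, threshold: float = 0.95):
    """FixMatch-style unlabeled loss: confident predictions on a weakly
    augmented view become pseudo-labels for the strongly augmented view."""
    probs = torch.softmax(logits_weak.detach(), dim=-1)
    confidence, pseudo = probs.max(dim=-1)
    mask = (confidence >= threshold).float()          # keep confident examples only
    per_example = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (per_example * mask).mean()

loss = pseudo_label_loss(torch.randn(16, 10), torch.randn(16, 10))
```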

23 Nov 2020 • Junnan Li

Salesforce Research at EMNLP 2020

This year marks the 24th annual Empirical Methods in Natural Language Processing (EMNLP) [https://2020.emnlp.org/] conference, reimagined for the first time ever in a fully virtual format. EMNLP is a leading conference in the area of Natural Language Processing, covering a broad spectrum of diverse research areas that …

11 Nov 2020 • Denna Mafie #research

A Language Detector for Identifying Machine-Generated Text

In recent years, the natural language processing (NLP) community has seen the development of increasingly powerful language models [1, 2], capable of generating textual output that is indistinguishable from human-written text. This includes our own model called CTRL [https://blog.salesforceairesearch.com/introducing-a-conditional-transformer-language-model-for-controllable-generation/] [3] (Conditional Transformer Language Model) for controllable generation. …
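
The task framing is binary classification of human-written versus machine-generated text. The released detector is far more capable than this, but a toy baseline (hypothetical data and features, not the actual detector) makes the setup concrete:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in data; a real detector is trained on large corpora of
# human-written text and generations from models such as CTRL.
texts = ["the cat sat on the mat", "in a stunning turn of events the cat"]
labels = [0, 1]   # 0 = human-written, 1 = machine-generated

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)
print(detector.predict_proba(["the dog sat on the rug"]))
```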

22 Oct 2020 • Yoav Schlesinger

The First Simulation Card for Ethical AI Simulations

We recently released Foundation, an open-source framework [https://github.com/salesforce/ai-economist] to build economic simulations. Foundation has been designed with flexibility and AI research in mind, and can be modified by anyone. AI simulations offer researchers the power to generate data and evaluate outcomes of virtual economies that capture …

20 Oct 2020 • Stephan Zheng

Model Cards for AI Model Transparency

At Salesforce, we take seriously our mission to create and deliver AI technology that is responsible, accountable, transparent, empowering, and inclusive. These principles ensure that our AI is safe, ethical, and engenders trust.

29 Sep 2020 • Yoav Schlesinger #ethics

Theory-Inspired Network Architecture Search

> TL;DR: We theoretically analyze differentiable architecture search (DARTS) to understand the role and impact of skip connections, which inspires a new method for Neural Architecture Search (NAS) using group-structured sparse gates and path-depth-wise regularization to overcome the limitations of existing NAS methods for AutoML. …
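
For context on the object being analyzed: DARTS relaxes the discrete choice of operation on each edge of the searched cell into a softmax-weighted mixture, so the skip connection studied here is just one candidate in that mixture. A minimal sketch (candidate operations chosen arbitrarily, not the paper's code):

```python
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """DARTS-style continuously relaxed edge: the output is a softmax(alpha)-weighted
    sum over candidate operations; alpha are the architecture parameters learned
    jointly with the network weights."""

    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                         # skip connection
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))      # architecture params

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

out = MixedOp(channels=8)(torch.randn(2, 8, 16, 16))
```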

25 Sep 2020 • Pan Zhou

GeDi: A Powerful New Method for Controlling Language Models

We use smaller language models as generative classifiers to guide generation from larger language models. We show that this method can make generations friendlier, reduce bias and toxicity, and achieve zero-shot controllable generation of unseen topics.
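
The mechanism, in simplified form: the small class-conditional LM scores every candidate next token under the desired and the undesired attribute, Bayes' rule converts those scores into log P(attribute | token), and that term reweights the base LM's logits. A hypothetical single-step sketch:

```python
import torch

def gedi_step(lm_logits, pos_cc_logprobs, neg_cc_logprobs, omega: float = 2.0):
    """One decoding step of GeDi-style guided generation (simplified).
    lm_logits:        next-token logits from the large base LM         [vocab]
    pos_cc_logprobs:  log P(token | desired attribute) from small LM   [vocab]
    neg_cc_logprobs:  log P(token | opposite attribute) from small LM  [vocab]"""
    # Bayes' rule (equal priors): log P(attribute | token)
    log_posterior = pos_cc_logprobs - torch.logsumexp(
        torch.stack([pos_cc_logprobs, neg_cc_logprobs]), dim=0
    )
    guided = lm_logits + omega * log_posterior
    return torch.softmax(guided, dim=-1)   # sample the next token from this

probs = gedi_step(torch.randn(50257), torch.randn(50257), torch.randn(50257))
```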

22 Sep 2020 • Ben Krause

MoPro: Webly Supervised Learning with Momentum Prototypes

> TL;DR: We propose a new webly-supervised learning method that achieves state-of-the-art representation learning performance by training on large amounts of freely available noisy web images. Deep neural networks are known to be hungry for labeled data. Current state-of-the-art CNNs are trained with supervised learning on datasets such as …
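
The "momentum prototype" in the title is, at its core, an exponential moving average of normalized embeddings maintained per class; noisy web examples are then handled by comparing each image against these prototypes. A minimal sketch of just the prototype update (my notation, not the paper's code):

```python
import torch
import torch.nn.functional as F

def update_prototypes(prototypes, embeddings, labels, momentum: float = 0.999):
    """Momentum-prototype update (simplified): each class prototype is an
    exponential moving average of the normalized embeddings assigned to it.
    prototypes: [num_classes, dim], embeddings: [batch, dim], labels: [batch]."""
    z = F.normalize(embeddings, dim=1)
    for emb, y in zip(z, labels.tolist()):
        prototypes[y] = momentum * prototypes[y] + (1.0 - momentum) * emb
    return F.normalize(prototypes, dim=1)

protos = update_prototypes(torch.randn(10, 128), torch.randn(32, 128),
                           torch.randint(0, 10, (32,)))
```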

17 Sep 2020 • Junnan Li #webly supervised learning

How Salesforce Infuses Ethics into its AI

For all the good that AI can bring, responsible tech companies understand they must recognize, prepare for, and mitigate the potential unintended, harmful effects. That’s why Salesforce sees ethics as foundational to AI, and why we’re sharing a closer look at how we infuse an ethical process into …

14 Aug 2020 • Katherine Siu #artificial intelligence