When are Neural Networks more powerful than Neural Tangent Kernels?

The empirical success of deep learning has posed significant challenges to machine learning theory: Why can we efficiently train neural networks with gradient descent despite the highly non-convex optimization landscape? Why do over-parametrized networks generalize well? The recently proposed Neural Tangent Kernel (NTK) theory offers a powerful framework for understanding

29 Mar 2021 • Yu Bai #deep learning theory

Applying AI Ethics Research in Practice

February 2020 Summary from FAccT 2020 CRAFT Session > AI Ethics practitioners in industry look to researchers for insights on how best to identify and mitigate harmful bias in their organization’s AI solutions and create fairer, more equitable outcomes. However, it can be a challenge to apply those research

03 Mar 2021 • Kathy Baxter #ethics

CASTing Your Model: Learning to Localize Improves Self-Supervised Representations

> TL;DR: We find that current self-supervised learning approaches suffer from poor visual grounding and receive an improper supervisory signal when trained on complex scene images. We introduce CAST to improve visual grounding during pretraining and show that it yields significantly better transferable features. Self-supervised learning and its grounding problem Self-Supervised

09 Dec 2020 • Ramprasaath R. Selvaraju

Salesforce Research at NeurIPS 2020

This year marks the 34th annual conference on Neural Information Processing Systems (NeurIPS [https://neurips.cc/]) reimagined for the first time ever in a fully virtual format. NeurIPS is a leading conference in the area of machine learning and neural information processing systems in their biological, technological, mathematical, and theoretical

30 Nov 2020 • Denna Mafie

CoMatch: Advancing Semi-supervised Learning with Contrastive Graph Regularization

> TL;DR: We propose a new semi-supervised learning method which achieves state-of-the-art performance by learning jointly-evolved class probabilities and image representations. What are the existing semi-supervised learning methods? Semi-supervised learning aims to leverage a small amount of labeled data and a large amount of unlabeled data. As a long-standing and widely-studied topic in

23 Nov 2020 • Junnan Li

Salesforce Research at EMNLP 2020

This year marks the 24th annual conference on Empirical Methods in Natural Language Processing (EMNLP) [https://2020.emnlp.org/], reimagined for the first time ever in a fully virtual format. EMNLP is a leading conference in the area of Natural Language Processing, covering a broad spectrum of diverse research areas that

11 Nov 2020 • Denna Mafie #research

A Language Detector for Identifying Machine-Generated Text

In recent years, the natural language processing (NLP) community has seen the development of increasingly powerful language models [1, 2], capable of generating textual output that is indistinguishable from human-written text. This includes our own model called CTRL [https://blog.salesforceairesearch.com/introducing-a-conditional-transformer-language-model-for-controllable-generation/] [3] (Conditional Transformer Language Model) for controllable

22 Oct 2020 • Yoav Schlesinger

The First Simulation Card for Ethical AI Simulations

We recently released Foundation, an open-source framework [https://github.com/salesforce/ai-economist] to build economic simulations. Foundation has been designed with flexibility and AI research in mind, and can be modified by anyone. AI simulations offer researchers the power to generate data and evaluate outcomes of virtual economies that capture

20 Oct 2020 • Stephan Zheng

Model Cards for AI Model Transparency

At Salesforce, we take seriously our mission to create and deliver AI technology that is responsible, accountable, transparent, empowering, and inclusive. These principles ensure that our AI is safe, ethical, and engenders trust.

29 Sep 2020 • Yoav Schlesinger #ethics

Theory-Inspired Network Architecture Search

> TL;DR: We theoretically analyze differentiable architecture search (DARTS) to understand the role and impact of skip connections, which inspires a new method for Neural Architecture Search (NAS) using group-structured sparse gates and path-depth-wise regularization to overcome the limitations of existing NAS methods for AutoML. In our work [1]

25 Sep 2020 • Pan Zhou

GeDi: A Powerful New Method for Controlling Language Models

We use smaller language models as generative classifiers to guide generation from larger language models. We show that this method can make generations friendlier, reduce bias and toxicity, and achieve zero-shot controllable generation of unseen topics.

22 Sep 2020 • Ben Krause

MoPro: Webly Supervised Learning with Momentum Prototypes

> TL;DR: We propose a new webly-supervised learning method which achieves state-of-the-art representation learning performance by training on large amounts of freely available noisy web images. Deep neural networks are known to be hungry for labeled data. Current state-of-the-art CNNs are trained with supervised learning on datasets such as ImageNet

17 Sep 2020 • Junnan Li #webly supervised learning

How Salesforce Infuses Ethics into its AI

For all the good that AI can bring, responsible tech companies understand they must recognize, prepare for, and mitigate the potential unintended, harmful effects. That’s why Salesforce sees ethics as foundational to AI — and why we’re sharing a closer look at how we infuse an ethical process into

14 Aug 2020 • Katherine Siu #artificial intelligence

The AI Economist: Join the Moonshot

We are launching an open source collaborative project to build an AI Economist that can be used to guide policy making in the real world. We invite you to join us in our mission to help improve the world with AI and economics.

06 Aug 2020 • Stephan Zheng

Salesforce Research at ACL 2020

The 58th Annual Meeting of the Association for Computational Linguistics (ACL) [https://acl2020.org/] kicked off this week and runs from Sunday, Jul 5 to Friday, Jul 10 in a fully virtual format. ACL is the premier conference in the field of computational linguistics, covering a broad spectrum of diverse research areas that

06 Jul 2020 • Audrey Cook

Explaining Solutions to Physical Reasoning Tasks

We show that deep neural models can describe common sense physics in a valid and sufficient way that is also generalizable. Our ESPRIT framework is trained on a new dataset with physics simulations and descriptions that we collected and have open-sourced.

05 May 2020 • Nazneen Rajani #research

ERASER: A Benchmark to Evaluate Rationalized NLP Models

Many NLP applications today deploy state-of-the-art deep neural networks that are essentially black-boxes. One of the goals of Explainable AI (XAI) is to have AI models reveal why and how they make their predictions so that these predictions are interpretable by a human. But work in this direction has been

08 Nov 2019 • Nazneen Rajani #research

Living Ethics in AI: How to Expand from Principles to Impact

Earlier this year, Danielle Cass and I ran a workshop with 23 ethical AI practitioners from 15 organizations and shared insights into what they are doing successfully and the open questions or challenges they are working through. A few months later, Matt Marshall, Founder of VentureBeat,

06 Aug 2019 • Kathy Baxter #ethics

Beyond the Algorithm: Learn How to Build Trusted AI with Trailhead

With the advent of low-code developer tools, people of varying skill levels and backgrounds — not just those with PhDs — are able not only to leverage the benefits of artificial intelligence (AI) but also to build intelligent systems. As access to AI widens and the depth of its

30 May 2019 • Kathy Baxter #ethics

Q&A with Salesforce Research Intern Kevin (Chih-Yao) Ma on Self-Monitoring Navigation Agent via Auxiliary Progress Estimation

For some, the world we live in today can be represented as data in a high-dimensional spatiotemporal space—which we humans typically use language to describe, interpret, and reason about. For Salesforce Research Intern, Kevin (Chih-Yao) Ma, this topic became a key focal point in his research “Self-Monitoring Navigation Agent via

01 May 2019 • Alexandria Murray #news

Salesforce Research at ICLR 2019

May 6th - May 9th @ Ernest N. Morial Convention Center, New Orleans [https://www.google.com/maps/place/New+Orleans+Ernest+N.+Morial+Convention+Center/@29.9432797,-90.0640833,15z/data=!4m5!3m4!1s0x0:0x4439e9b5b2d17a17!8m2!3d29.9432797!4d-90.0640833] ABOUT: Salesforce is excited to be a diamond sponsor of

22 Apr 2019 • Alexandria Murray #news

Q&A with Salesforce Research Intern Akhilesh Gotmare on how "Optimization and Machine Learning" led him to ICLR

In the research paper, “A Closer Look at Deep Learning Heuristics: Learning rate restarts, Warmup and Distillation [https://openreview.net/forum?id=r14EOsCqKX]”, Futureforce PhD Intern Akhilesh Gotmare [https://openreview.net/profile?email=akhilesh.gotmare%40epfl.ch] worked with Research Scientist Nitish Shirish Keskar [https://openreview.net/profile?email=

16 Apr 2019 • Alexandria Murray #news

Ethics in AI research papers and articles

This is my obsessively curated list of research papers and articles on ethics in AI that I have been collecting over the years. Entries in bold are those I refer back to and find particularly useful. Let me know if I am missing your favorites.

20 Jan 2019 • Kathy Baxter #ethics