WarpDrive v2 Release Supports Numba to Simplify Machine Learning Workloads and Make Building Simulations Easier on NVIDIA GPUs

TL;DR: Deep reinforcement learning (RL), a powerful learning framework to train AI agents, can be slow as it requires repeated interaction with a simulation of the environment. Our original WarpDrive accelerates multi-agent deep RL on NVIDIA GPUs, enabling 10-100x speedups compared to alternative CPU+GPU implementations of multi-agent simulations.

02 Nov 2022 • #WarpDrive
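The speedup WarpDrive targets comes from running the whole rollout in batch on one device, rather than looping over per-environment `step()` calls on the CPU. As a rough illustration (a toy sketch in NumPy, not WarpDrive's actual API — the state, action, and reward definitions here are made up for the example), one array operation can advance every agent in every environment at once:

```python
import numpy as np

# Toy batched multi-agent "simulation": 1024 environments, 4 agents each,
# with a single scalar position per agent. All names here are illustrative.
n_envs, n_agents = 1024, 4
rng = np.random.default_rng(0)

positions = np.zeros((n_envs, n_agents))               # per-agent state
actions = rng.integers(0, 2, size=(n_envs, n_agents))  # 0 = stay, 1 = move

# One batched step replaces 1024 * 4 Python-level env.step() calls;
# on a GPU this is the operation that stays on-device end to end.
positions += actions
rewards = (positions > 0).astype(np.float32)

print(positions.shape)  # (1024, 4)
```

The same idea, expressed with CUDA kernels (or, in v2, Numba-compiled kernels) over on-GPU state, is what lets WarpDrive avoid the CPU-GPU data shuttling that dominates conventional RL pipelines.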

DeepTime: Using Deep Time-Index Meta-Learning to Improve Non-Stationary Time-Series Forecasting

TL;DR: The performance of existing time-series forecasting methods can degrade due to non-stationarity, where the statistical distribution of time-series data changes over time. Our new DeepTime method overcomes non-stationarity issues by leveraging a “forecasting as meta-learning” framework on deep time-index models. DeepTime achieves competitive accuracy on the long-sequence time-series forecasting benchmark.

13 Oct 2022 • #DeepTime
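To make the "time-index model" idea concrete: instead of mapping past values to future values, such a model learns a mapping from the time index itself to the observed value, and forecasts by evaluating that mapping at future indices. A minimal stand-in (plain least-squares on the index — not the DeepTime architecture or its meta-learning procedure; the toy series is invented for the example):

```python
# Fit y = f(t) where t is the time index; here f is a straight line
# found by ordinary least squares. DeepTime replaces f with a deep
# network trained via meta-learning; this sketch only shows the
# forecasting-by-index-extrapolation idea.
series = [2.0, 4.1, 5.9, 8.2, 10.0, 12.1]   # toy trend data
t = list(range(len(series)))

n = len(series)
mean_t = sum(t) / n
mean_y = sum(series) / n
slope = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, series)) \
        / sum((ti - mean_t) ** 2 for ti in t)
intercept = mean_y - slope * mean_t

# Forecast the next two steps by extrapolating the time index.
forecast = [intercept + slope * ti for ti in (n, n + 1)]
print([round(v, 2) for v in forecast])
```

Because the model conditions on the index rather than on lagged values, a shift in the data distribution shows up as a change in f over time — which is exactly what the meta-learning formulation is designed to adapt to.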

Summer 2022 Salesforce Research Roundup

As we say a fond farewell to summer (bummer!), let's look back and review some of the stellar work reported on by Salesforce AI researchers during the past few months. (For more details, we encourage you to click the link for each project to read the full blog post.)

30 Sep 2022 • #Summer 2022

Meet LAVIS: A One-stop Library for Language-Vision AI Research and Applications

TL;DR: LAVIS (short for LAnguage-VISion) is an open-source deep learning library for language-vision research and applications, offering comprehensive support for a wide range of tasks, datasets, and state-of-the-art models. Featuring a unified interface and modular design, it’s easy to use off-the-shelf and to extend with new capabilities.

20 Sep 2022 • #LAVIS

ETSformer: Exponential Smoothing Transformers for Time-Series Forecasting

TL;DR: We developed a new time-series forecasting model called ETSformer that leverages the power of two frameworks. By combining the classical intuition of seasonal-trend decomposition and exponential smoothing with modern transformers – as well as introducing novel exponential smoothing and frequency attention mechanisms – ETSformer achieves state-of-the-art performance.

23 Aug 2022 • #ETSformer
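The classical component ETSformer builds on is worth seeing in isolation. Simple exponential smoothing keeps a running level that is an exponentially decaying weighted average of past observations — recent points count more. A minimal sketch (the classical recurrence only, not ETSformer's attention mechanisms; the data is invented for the example):

```python
def exponential_smoothing(series, alpha=0.5):
    """Return the smoothed level after each observation.

    level_t = alpha * y_t + (1 - alpha) * level_{t-1}, so each past
    observation's weight decays geometrically with its age.
    """
    level = series[0]
    levels = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels

data = [10.0, 12.0, 11.0, 13.0]
print(exponential_smoothing(data))  # → [10.0, 11.0, 11.0, 12.0]
```

ETSformer's contribution is to bake this decaying-weight prior into transformer attention (its exponential smoothing attention), rather than letting attention weights over the history be fully unconstrained.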

AI for Global Climate Cooperation: Salesforce Research and Mila Announce Climate Change Collaboration and Competition

TL;DR: Salesforce Research and Mila announce AI for Global Climate Cooperation, a working group collaboration and competition to design negotiation protocols and climate agreements. We plan to coauthor a peer-reviewed scientific paper with top-performing teams; insights will be distilled into a policy brief shared with leading policymakers.

05 Aug 2022 • #AI for Global Climate Cooperation

TaiChi: Open Source Library for Few-Shot NLP

AUTHORS: Sharvin Shah, Jin Qu, Donald Rose

TL;DR: TaiChi is an open source library for few-shot NLP, designed for data scientists and software engineers who want to get some quick results or build proof-of-concept products but don’t have much experience with few-shot learning (FSL).

15 Jun 2022 • #NLP

OmniXAI: Making Explainable AI Easy for Any Data, Any Models, Any Tasks

TL;DR: OmniXAI (short for Omni eXplainable AI) is designed to address many of the pain points in explaining decisions made by AI models. This open-source library aims to provide data scientists, machine learning engineers, and researchers with a one-stop Explainable AI (XAI) solution to analyze, debug, and interpret their models.

14 Jun 2022 • #OmniXAI

ALPRO: Understanding Video and Language by Aligning Visual Regions and Text Entities

TL;DR: We propose ALPRO, a new video-and-language representation learning framework which achieves state-of-the-art performance on video-text retrieval and video question answering by learning fine-grained alignment between video regions and textual entities via entity prompts.

31 May 2022 • #ALPRO

RnG-KBQA: Rank-and-Generate Approach for Question Answering Over Knowledge Bases

Lead Author: Xi Ye

TL;DR: We propose RnG-KBQA, a Rank-and-Generate Approach for Question Answering over Knowledge Bases, which enables answering natural language questions over large-scale knowledge bases. Our approach is capable of answering questions about topics never seen in the training data.

23 May 2022 • #KBQA

Turbocharge Multi-Agent Reinforcement Learning with WarpDrive and PyTorch Lightning

TL;DR: WarpDrive is a flexible, lightweight, easy-to-use end-to-end reinforcement learning (RL) framework that enables orders-of-magnitude faster training on a single GPU. PyTorch Lightning enables you to modularize experimental code and build production-ready workloads fast. Together, they can help significantly accelerate multi-agent RL R&D.

20 May 2022 • #WarpDrive

Conversational AI Programming with CodeGen: Let AI Write Code For You

Links: Research Paper [https://arxiv.org/abs/2203.13474], GitHub [https://github.com/salesforce/CodeGen]

Can you imagine a machine writing an app for you, just by telling it what you want? As futuristic as this scenario sounds, it’s actually here today. Salesforce AI Research outlines conversational AI programming with CodeGen.

29 Mar 2022 • #conversational AI

Code-Mixing on Sesame Street: Multilingual Adversaries for Multilingual Models

TL;DR: Today’s NLP models, for all their recent successes, have certain limitations. Case in point: they exhibit poor performance when processing multilingual code-mixed sentences (each containing multiple languages). Our new approach addresses this problem by constructing code-mixed inputs designed to degrade (or “attack”) the model, exposing these limitations.

24 Jan 2022 • #code-mixing