Two years ago, Salesforce unveiled Einstein, our AI platform that enables any company to deliver smarter, more personalized, and more predictive customer experiences. Since then, we've released over 30 AI-powered features across our CRM, which together power more than 4 billion predictions per day, and we are looking to engage with the best and brightest minds to accelerate our AI services further.
This summer, Salesforce Research announced our inaugural deep learning research grant for university researchers and faculty, non-profit organizations, and NGOs. Our goal is to identify and support diverse individuals with innovative ideas to join us in shaping the future of AI.
We were incredibly impressed by the quality and variety of proposals submitted! We are proud to announce the five winners that we will be working with as we shape the future of AI. Each winner will receive a $50K grant to advance their work.
Tianfu Wu, North Carolina State University
Computer Vision & Natural Language Processing
Learning Deep Grammar Networks for Visual Question Answering
In this project, we will focus on visual question answering (VQA) tasks, studying unified deep grammar networks that not only improve performance but also advance the explainability of the QA process.
Mohit Bansal, University of North Carolina, Chapel Hill
Natural Language Processing
Multi-Task Multimodal Translation and Content Selection
In this proposal, we focus on the hierarchical and parallel multi-task training of several multimodal translation and content selection tasks: video captioning, document summarization, and video highlight prediction.
Junyi Jessy Li & Katrin Erk, University of Texas at Austin
Natural Language Processing
Hierarchical Graph-based Advice Summarization from Online Forums
To significantly improve the efficiency of gathering advice from online forums, we propose hierarchical summarization of advice, in which readers will be able to 'zoom in', produced by inferring and clustering the discourse structure of posts.
Quanquan Gu, University of California, Los Angeles
Understanding and Advancing Nonconvex Optimization for Deep Learning
In this proposal, we aim for a better understanding of non-convex optimization for deep learning. Such an understanding holds the promise of making deep learning more predictable and can guide the design of new network architectures.
Zachary Chase Lipton, Carnegie Mellon University
Failing Loudly: Detecting, Quantifying, and Interpreting Distribution Shift
We propose to build upon our recent work investigating robust deep learning systems capable of detecting and correcting for shifts in label distributions. In the proposed work, we will focus on the more general problem of detecting natural shifts in data distributions.