Code-Mixing on Sesame Street: Multilingual Adversaries for Multilingual Models
TL;DR: Today’s NLP models, for all their recent successes, still have clear limitations. Case in point: they perform poorly on code-mixed sentences, i.e., sentences that mix multiple languages. Our new approach exposes these limitations by constructing code-mixed inputs designed to degrade (or “attack”) the model.
24 Jan 2022 • #code-mixing
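
To make the idea concrete, here is a minimal sketch of one way such a code-mixing attack could be built, assuming a greedy word-level substitution strategy guided by a bilingual lexicon. The names `code_mix_attack`, `victim_model`, and `bilingual_lexicon` are illustrative placeholders, not the paper’s actual implementation.

```python
from typing import Callable, Dict, List


def code_mix_attack(
    sentence: List[str],
    true_label: int,
    victim_model: Callable[[List[str]], List[float]],  # returns class probabilities
    bilingual_lexicon: Dict[str, str],                  # e.g. English -> Hindi word translations
) -> List[str]:
    """Greedy word-level code-mixing perturbation (illustrative sketch only)."""
    adversarial = list(sentence)
    best_confidence = victim_model(adversarial)[true_label]

    for i, word in enumerate(sentence):
        translation = bilingual_lexicon.get(word.lower())
        if translation is None:
            continue  # no translation available, leave the word untouched

        candidate = list(adversarial)
        candidate[i] = translation
        confidence = victim_model(candidate)[true_label]

        if confidence < best_confidence:
            # The swap lowers the model's confidence in the correct label, so keep it.
            adversarial, best_confidence = candidate, confidence

    return adversarial
```

The sketch greedily keeps any translation swap that lowers the model’s confidence in the correct label, yielding a code-mixed sentence that is still intelligible to a bilingual reader but harder for the model to classify.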