Ochanomizu University
Microsoft Research
Kyoto University
National Institute for Japanese Language and Linguistics
Ochanomizu University
Abstract (English)
We propose an AdversariaL training algorithm for commonsense InferenCE (ALICE). We apply small perturbations to word embeddings and minimize the resultant adversarial risk to regularize the model. We exploit a novel combination of two different approaches to estimate these perturbations: 1) using the true label and 2) using the model prediction. Without relying on any human-crafted features, knowledge bases, or additional datasets other than the target datasets, our model boosts the fine-tuning performance of RoBERTa, achieving competitive results on multiple reading comprehension datasets that require commonsense inference.
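Below is a minimal PyTorch-style sketch of the adversarial training idea summarized in the abstract: perturb the word embeddings and minimize the resulting adversarial risk, combining a perturbation estimated from (1) the true label and (2) the model's own prediction. The helper names model.embed and model.classify, the step size epsilon, and the loss weights alpha and beta are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn.functional as F

def adversarial_loss(model, input_ids, attention_mask, labels,
                     epsilon=1e-3, alpha=1.0, beta=1.0):
    # Clean forward pass starting from the word embeddings.
    embeds = model.embed(input_ids)                      # assumed embedding lookup
    logits = model.classify(embeds, attention_mask)      # assumed classifier head
    clean_loss = F.cross_entropy(logits, labels)

    # (1) Perturbation estimated from the true label: gradient of the
    # supervised loss w.r.t. the embeddings, normalized to a small norm.
    embeds_sup = embeds.detach().clone().requires_grad_(True)
    sup_loss = F.cross_entropy(model.classify(embeds_sup, attention_mask), labels)
    grad_sup, = torch.autograd.grad(sup_loss, embeds_sup)
    delta_sup = epsilon * grad_sup / (grad_sup.norm(dim=-1, keepdim=True) + 1e-8)
    adv_loss_label = F.cross_entropy(
        model.classify(embeds.detach() + delta_sup, attention_mask), labels)

    # (2) Perturbation estimated from the model prediction: gradient of the KL
    # divergence between clean and perturbed predictions (virtual adversarial style).
    embeds_vat = embeds.detach().clone().requires_grad_(True)
    kl = F.kl_div(
        F.log_softmax(model.classify(embeds_vat, attention_mask), dim=-1),
        F.softmax(logits.detach(), dim=-1), reduction="batchmean")
    grad_kl, = torch.autograd.grad(kl, embeds_vat)
    delta_vat = epsilon * grad_kl / (grad_kl.norm(dim=-1, keepdim=True) + 1e-8)
    adv_loss_pred = F.kl_div(
        F.log_softmax(model.classify(embeds.detach() + delta_vat, attention_mask), dim=-1),
        F.softmax(logits.detach(), dim=-1), reduction="batchmean")

    # Total objective: clean loss plus both adversarial regularizers.
    return clean_loss + alpha * adv_loss_label + beta * adv_loss_pred

In practice this loss would replace the standard cross-entropy term during fine-tuning of a pretrained encoder such as RoBERTa; the perturbation norm and loss weights are hyperparameters.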
Publisher
Association for Computational Linguistics
Journal name
Proceedings of the 5th Workshop on Representation Learning for NLP (RepL4NLP-2020)