To Learn or Not to Learn: Replaced Token Detection for Learning the Meaning of Negation
Gunjan Bhattarai | Katrin Erk
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
State-of-the-art language models perform well on a variety of language tasks, but they continue to struggle with understanding negation cues in tasks like natural language inference (NLI). Inspired by Hossain et al. (2020), who show that negation is under-represented in language model pretraining datasets, we experiment with additional pretraining on negation data, for which we introduce two new datasets. We also introduce a new learning strategy for negation that builds on ELECTRA’s (Clark et al., 2020) replaced token detection objective. We find that continuing to pretrain ELECTRA-Small’s discriminator leads to substantial gains on a variant of RTE (Recognizing Textual Entailment) with additional negation. On SNLI (Stanford NLI) (Bowman et al., 2015), there are no gains due to the extreme under-representation of negation in the data. Finally, on MNLI (Multi-NLI) (Williams et al., 2018), we find that performance on negation cues is primarily stymied by neutral-labeled examples.
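To make the training signal concrete, below is a minimal sketch of the replaced token detection (RTD) objective that ELECTRA's discriminator is trained with, using the HuggingFace `transformers` implementation of ELECTRA-Small. The negation-flavored corruption ("not" swapped for "very") is a hypothetical illustration; the paper's actual negation datasets and learning strategy are not reproduced here.

```python
# Minimal sketch of ELECTRA's replaced token detection (RTD) objective,
# via HuggingFace's ElectraForPreTraining head. Illustrative only: the
# hand-built corruption below stands in for generator substitutions.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

model_name = "google/electra-small-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(model_name)
discriminator = ElectraForPreTraining.from_pretrained(model_name)

# Original sentence and a corrupted copy in which one token ("not")
# has been replaced; both tokenize to the same length here.
original = "the movie was not good"
corrupted = "the movie was very good"

enc = tokenizer(corrupted, return_tensors="pt")
orig_ids = tokenizer(original, return_tensors="pt")["input_ids"]

# Per-token labels: 1 where the corrupted token differs from the
# original, 0 elsewhere (including [CLS]/[SEP]).
labels = (enc["input_ids"] != orig_ids).long()

# The discriminator predicts, for each position, whether the token was
# replaced; `loss` is binary cross-entropy over non-padding positions.
outputs = discriminator(**enc, labels=labels)
print(f"RTD loss: {outputs.loss.item():.4f}")
```

Continued pretraining in this style would repeat the same loss computation over batches of negation-bearing text, updating the discriminator's parameters with an optimizer.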