%0 Conference Proceedings
%T Predicting Antonyms in Context using BERT
%A Niwa, Ayana
%A Nishiguchi, Keisuke
%A Okazaki, Naoaki
%Y Belz, Anya
%Y Fan, Angela
%Y Reiter, Ehud
%Y Sripada, Yaji
%S Proceedings of the 14th International Conference on Natural Language Generation
%D 2021
%8 August
%I Association for Computational Linguistics
%C Aberdeen, Scotland, UK
%F niwa-etal-2021-predicting
%X We address the task of antonym prediction in context, framed as a fill-in-the-blank problem. This task setting is unique and practical because the word filling the blank must both contrast with its paired word and read naturally in the surrounding text. We propose methods for fine-tuning pre-trained masked language models (BERT) for context-aware antonym prediction. The experimental results demonstrate that these methods improve the prediction of antonyms in context. Moreover, human evaluation reveals that more than 85% of the predictions made by the proposed method are acceptable as antonyms.
%R 10.18653/v1/2021.inlg-1.6
%U https://aclanthology.org/2021.inlg-1.6
%U https://doi.org/10.18653/v1/2021.inlg-1.6
%P 48-54