%0 Conference Proceedings
%T HIT at SemEval-2022 Task 2: Pre-trained Language Model for Idioms Detection
%A Chu, Zheng
%A Yang, Ziqing
%A Cui, Yiming
%A Chen, Zhigang
%A Liu, Ming
%Y Emerson, Guy
%Y Schluter, Natalie
%Y Stanovsky, Gabriel
%Y Kumar, Ritesh
%Y Palmer, Alexis
%Y Schneider, Nathan
%Y Singh, Siddharth
%Y Ratan, Shyam
%S Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, United States
%F chu-etal-2022-hit
%X The same multi-word expression may have different meanings in different sentences, falling into two categories: literal meaning and idiomatic meaning. Non-contextual methods perform poorly on this problem, and contextual embeddings are needed to understand the idiomatic meaning of multi-word expressions correctly. We use a pre-trained language model, which provides context-aware sentence embeddings, to detect whether a multi-word expression in a sentence is used idiomatically.
%R 10.18653/v1/2022.semeval-1.28
%U https://aclanthology.org/2022.semeval-1.28
%U https://doi.org/10.18653/v1/2022.semeval-1.28
%P 221-227