Shuqun Li
2021
Label-Enhanced Hierarchical Contextualized Representation for Sequential Metaphor Identification
Shuqun Li | Liang Yang | Weidong He | Shiqi Zhang | Jingjie Zeng | Hongfei Lin
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Recent metaphor identification approaches mainly consider contextual text features within a sentence or introduce external linguistic features into the model, but they usually ignore extra information that the data itself can provide, such as contextual metaphor information and broader discourse information. In this paper, we propose a model augmented with a hierarchical contextualized representation that extracts information at both the sentence level and the discourse level. At the sentence level, we leverage the metaphor information of the words other than the target word in the sentence to strengthen the reasoning ability of our model via a novel label-enhanced contextualized representation. At the discourse level, a position-aware global memory network is adopted to learn long-range dependencies among occurrences of the same word within a discourse. Finally, our model combines the representations obtained from these two parts. Experimental results on two tasks of the VUA dataset show that our model outperforms every other state-of-the-art method that uses no external knowledge beyond what the pre-trained language model contains.
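As a rough illustration of the kind of architecture the abstract describes, the minimal PyTorch sketch below combines a sentence-level, label-enhanced representation of the target word with a pooled discourse-level representation of that word's other occurrences, then classifies the target as metaphorical or literal. It is not the authors' implementation: the class name, the simple mean pooling (standing in for the position-aware global memory network), and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): fuse a label-enhanced sentence-level
# view of the target word with a discourse-level view built from the same word's
# other occurrences, then classify the target as metaphorical vs. literal.
import torch
import torch.nn as nn

class HierarchicalMetaphorClassifier(nn.Module):
    def __init__(self, dim=768, num_labels=2):
        super().__init__()
        # Embeddings for the metaphor labels of the context words; in practice the
        # target position would receive a neutral/mask label so its own gold label
        # is never leaked into its representation.
        self.label_emb = nn.Embedding(num_labels, dim)
        self.sent_proj = nn.Linear(dim, dim)
        self.disc_proj = nn.Linear(dim, dim)
        self.classifier = nn.Linear(2 * dim, num_labels)

    def forward(self, token_vecs, context_labels, target_idx, discourse_vecs):
        # Sentence level: enrich each token vector with its metaphor-label embedding,
        # then take the target word's enriched vector.
        enriched = token_vecs + self.label_emb(context_labels)
        sent_repr = self.sent_proj(enriched[:, target_idx])        # (batch, dim)
        # Discourse level: pool occurrences of the same word across the discourse
        # (a crude stand-in for a position-aware global memory network).
        disc_repr = self.disc_proj(discourse_vecs.mean(dim=1))     # (batch, dim)
        # Combine both views and classify the target word.
        return self.classifier(torch.cat([sent_repr, disc_repr], dim=-1))

# Toy usage with random tensors standing in for contextual encoder outputs.
model = HierarchicalMetaphorClassifier()
token_vecs = torch.randn(1, 10, 768)            # one sentence, 10 tokens
context_labels = torch.randint(0, 2, (1, 10))   # metaphor labels of context words
discourse_vecs = torch.randn(1, 4, 768)         # 4 occurrences of the target word
print(model(token_vecs, context_labels, 0, discourse_vecs).shape)  # (1, 2)
```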
2020
ALBERT-BiLSTM for Sequential Metaphor Detection
Shuqun Li | Jingjie Zeng | Jinhui Zhang | Tao Peng | Liang Yang | Hongfei Lin
Proceedings of the Second Workshop on Figurative Language Processing
In our daily life, metaphor is a common way of expression. To understand the meaning of a metaphor, we must recognize the metaphorical words, which play an important role. For the metaphor detection task, we design a sequence labeling model based on ALBERT-LSTM-softmax. With this model, we carry out extensive experiments and compare the results of different processing choices, such as different input sentences and tokens, and decoding with CRF versus softmax. We then adopt several tricks to further improve the results. Finally, our model achieves a 0.707 F1-score on the all-POS subtask and a 0.728 F1-score on the verb subtask of the TOEFL dataset.
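For readers unfamiliar with this setup, the following is a minimal PyTorch/Transformers sketch of an ALBERT + BiLSTM + softmax token tagger of the kind the abstract describes. The class name, hidden size, label set, and example sentence are illustrative assumptions rather than the authors' code.

```python
# Hypothetical minimal sketch: ALBERT encoder + BiLSTM + per-token classifier for
# metaphor detection framed as sequence labeling (labels: literal vs. metaphor).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AlbertBiLSTMTagger(nn.Module):
    def __init__(self, model_name="albert-base-v2", hidden=256, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.lstm = nn.LSTM(self.encoder.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # Contextual subword representations from ALBERT.
        hidden_states = self.encoder(input_ids=input_ids,
                                     attention_mask=attention_mask).last_hidden_state
        # BiLSTM adds an extra layer of sequential modeling over the subwords.
        lstm_out, _ = self.lstm(hidden_states)
        # Per-token logits; softmax (or cross-entropy during training) yields labels.
        return self.classifier(lstm_out)

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
batch = tokenizer(["He attacked every weak point in my argument."],
                  return_tensors="pt")
logits = AlbertBiLSTMTagger()(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # (1, seq_len, 2): one literal/metaphor score pair per subword
```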
Co-authors
- Liang Yang 2
- Jingjie Zeng 2
- Hongfei Lin 2
- Weidong He 1
- Shiqi Zhang 1
- Jinhui Zhang 1
- Tao Peng 1