Mingmin Wu
2024
Refine, Align, and Aggregate: Multi-view Linguistic Features Enhancement for Aspect Sentiment Triplet Extraction
Guixin Su | Mingmin Wu | Zhongqiang Huang | Yongcheng Zhang | Tongguan Wang | Yuxue Hu | Ying Sha
Findings of the Association for Computational Linguistics: ACL 2024
Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets of aspect terms, their associated sentiments, and opinion terms. Previous works based on different modeling paradigms have achieved promising results. However, these methods struggle to comprehensively explore the specific relations between sentiment elements across multi-view linguistic features, whose prior indication effect facilitates sentiment triplet extraction; the features must be aligned and aggregated to capture their complementary higher-order interactions. In this paper, we propose Multi-view Linguistic Features Enhancement (MvLFE) to exploit this prior indication effect through a “Refine, Align, and Aggregate” learning process. Specifically, we first introduce a relational graph attention network to encode the word-pair relations represented by each linguistic feature and refine them to pay more attention to aspect-opinion pairs. Next, we employ multi-view contrastive learning to align the features at a fine-grained level in the contextual semantic space, maintaining semantic consistency. Finally, we utilize multi-semantic cross attention to capture and aggregate the complementary higher-order interactions between diverse linguistic features, enhancing the aspect-opinion relations. Experimental results on several benchmark datasets demonstrate the effectiveness and robustness of our model, which achieves state-of-the-art performance.
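The “Align” and “Aggregate” steps lend themselves to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of pairwise multi-view contrastive alignment (an InfoNCE-style loss treating the same token position across views as a positive pair) followed by cross attention that lets the contextual encoding attend over all views. The class `MultiViewAligner`, its projection heads, and the `temperature` value are illustrative assumptions, not the authors' MvLFE implementation.

```python
# Hypothetical sketch: multi-view contrastive alignment + cross-attention
# aggregation. Names and hyperparameters are assumptions, not MvLFE's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewAligner(nn.Module):
    """Projects per-view token features into a shared space, aligns views
    with a pairwise contrastive loss, and fuses them via cross attention."""
    def __init__(self, dim: int, n_views: int, n_heads: int = 4, temperature: float = 0.1):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_views))
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.temperature = temperature

    def contrastive_loss(self, za: torch.Tensor, zb: torch.Tensor) -> torch.Tensor:
        # za, zb: (N, D) projections of the same N tokens under two views;
        # matching rows are positives, all other in-batch rows are negatives.
        za, zb = F.normalize(za, dim=-1), F.normalize(zb, dim=-1)
        logits = za @ zb.t() / self.temperature
        labels = torch.arange(za.size(0), device=za.device)
        return F.cross_entropy(logits, labels)

    def forward(self, context: torch.Tensor, views: list):
        # context: (B, T, D) contextual encoding; views: n_views tensors of (B, T, D).
        zs = [p(v) for p, v in zip(self.proj, views)]
        align_loss = sum(
            self.contrastive_loss(zs[i].flatten(0, 1), zs[j].flatten(0, 1))
            for i in range(len(zs)) for j in range(i + 1, len(zs))
        )
        # Aggregate: query with the context, attend over all aligned views.
        memory = torch.cat(zs, dim=1)                      # (B, n_views*T, D)
        fused, _ = self.cross_attn(context, memory, memory)
        return fused, align_loss

# Usage on random tensors, just to show the shapes flowing through.
model = MultiViewAligner(dim=64, n_views=3)
ctx = torch.randn(2, 10, 64)
views = [torch.randn(2, 10, 64) for _ in range(3)]
fused, aux_loss = model(ctx, views)
print(fused.shape, aux_loss.item())   # torch.Size([2, 10, 64]) <scalar>
```

In a sketch like this, the contrastive term would be added to the main extraction loss as an auxiliary objective, so alignment and triplet extraction are trained jointly.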
Refining Idioms Semantics Comprehension via Contrastive Learning and Cross-Attention
Mingmin Wu | Guixin Su | Yongcheng Zhang | Zhongqiang Huang | Ying Sha
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Chinese idioms on social media demand a nuanced understanding for correct usage. The Chinese idiom cloze test poses a unique challenge for machine reading comprehension because the figurative meanings of idioms deviate from their literal interpretations, creating a semantic bias in models’ comprehension of idioms. Furthermore, since the figurative meanings of many idioms are similar, their presence as suboptimal options can interfere with selecting the optimal one. Despite some success on the Chinese idiom cloze test, existing deep learning methods still struggle to comprehensively grasp idiom semantics due to these issues. To tackle these challenges, we introduce a Refining Idioms Semantics Comprehension Framework (RISCF) to capture comprehensive idiom semantics. Specifically, we propose a semantic sense contrastive learning module that enhances the representation of idiom semantics, diminishing the semantic bias between the figurative and literal meanings of idioms. Meanwhile, we propose an interference-resistant cross-attention module that attenuates the interference of suboptimal options by modeling the interaction between the candidate idioms and the blank in the context. Experimental results on benchmark datasets demonstrate the effectiveness of our RISCF model, which significantly outperforms state-of-the-art methods.
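The interference-resistant cross-attention idea, reduced to its essentials: each candidate idiom attends over the context and is then scored against the representation of the blank position. The following is a hedged sketch; `CandidateBlankScorer`, the bilinear scoring head, and all tensor shapes are assumptions rather than the RISCF code.

```python
# Hypothetical sketch: candidates cross-attend over the context, then each
# is scored against the blank's representation. Not the RISCF implementation.
import torch
import torch.nn as nn

class CandidateBlankScorer(nn.Module):
    """Scores K candidate idioms for one blank: candidates are queries in a
    cross attention over the context, then matched to the blank token."""
    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, candidates: torch.Tensor, context: torch.Tensor,
                blank_idx: torch.Tensor) -> torch.Tensor:
        # candidates: (B, K, D) idiom embeddings; context: (B, T, D);
        # blank_idx: (B,) position of the blank token in each context.
        attended, _ = self.attn(candidates, context, context)        # (B, K, D)
        blank = context[torch.arange(context.size(0)), blank_idx]    # (B, D)
        blank = blank.unsqueeze(1).expand_as(attended)               # (B, K, D)
        return self.score(attended, blank).squeeze(-1)               # (B, K) logits

# Usage: 2 cloze instances, 7 candidate idioms each, 20-token contexts.
scorer = CandidateBlankScorer(dim=32)
logits = scorer(torch.randn(2, 7, 32), torch.randn(2, 20, 32), torch.tensor([5, 9]))
print(logits.shape)  # torch.Size([2, 7])
```

Letting every candidate attend to the full context before scoring is one plausible way to suppress distractors, since similar idioms receive different attended representations depending on how well they fit the blank's surroundings.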
Co-authors
- Guixin Su 2
- Zhongqiang Huang 2
- Yongcheng Zhang 2
- Ying Sha 2
- Tongguan Wang 1
- Yuxue Hu 1