Yongqi Luo
2025
CACA: Context-Aware Cross-Attention Network for Extractive Aspect Sentiment Quad Prediction
Bingfeng Chen | Haoran Xu | Yongqi Luo | Boyan Xu | Ruichu Cai | Zhifeng Hao
Proceedings of the 31st International Conference on Computational Linguistics
Aspect Sentiment Quad Prediction (ASQP) broadens the scope of aspect-based sentiment analysis by requiring the prediction of both explicit and implicit aspect and opinion terms. Existing leading generative ASQP approaches do not model the contextual relationships within the review sentence when predicting implicit terms. However, introducing contextual information into the pre-trained language model framework is non-trivial due to the inflexibility of the generative encoder-decoder architecture. To better utilize contextual information, we propose an extractive ASQP framework, CACA, which features a Context-Aware Cross-Attention Network. When implicit terms are present, the Context-Aware Cross-Attention Network enhances the alignment of aspects and opinions through alternating updates of explicit and implicit representations. Additionally, contrastive learning is introduced into the implicit representation learning process. Experimental results on three benchmarks demonstrate the effectiveness of CACA. Our implementation will be open-sourced at https://github.com/DMIRLAB-Group/CACA.
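The abstract gives no implementation details, but the alternating-update idea can be illustrated with a minimal sketch. The following PyTorch rendition is a hypothetical illustration, not the authors' released code: the module names, the number of refinement rounds (n_rounds), and the InfoNCE-style contrastive objective are all assumptions.

```python
# Hypothetical sketch of alternating cross-attention between explicit and
# implicit representations, plus a generic contrastive loss. Not the
# authors' implementation; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareCrossAttention(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 8, n_rounds: int = 2):
        super().__init__()
        # One cross-attention block per direction of the update.
        self.exp_to_imp = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.imp_to_exp = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.n_rounds = n_rounds

    def forward(self, explicit: torch.Tensor, implicit: torch.Tensor):
        # Alternately refine the implicit representation with explicit
        # context and vice versa, so aspect-opinion alignment is shared.
        for _ in range(self.n_rounds):
            implicit, _ = self.exp_to_imp(implicit, explicit, explicit)
            explicit, _ = self.imp_to_exp(explicit, implicit, implicit)
        return explicit, implicit

def contrastive_loss(anchor, positive, temperature: float = 0.07):
    # A standard InfoNCE objective over pooled implicit representations;
    # the abstract does not specify the exact contrastive loss used.
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```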
2024
S2GSL: Incorporating Segment to Syntactic Enhanced Graph Structure Learning for Aspect-based Sentiment Analysis
Bingfeng Chen | Qihan Ouyang | Yongqi Luo | Boyan Xu | Ruichu Cai | Zhifeng Hao
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Previous graph-based approaches to Aspect-based Sentiment Analysis (ABSA) have demonstrated impressive performance by utilizing graph neural networks and attention mechanisms to learn the structures of static dependency trees and dynamic latent trees. However, incorporating both semantic and syntactic information simultaneously within complex global structures can introduce irrelevant contexts and syntactic dependencies during graph structure learning, potentially resulting in inaccurate predictions. To address these issues, we propose S2GSL, which incorporates Segment to Syntactic enhanced Graph Structure Learning for ABSA. Specifically, S2GSL features a segment-aware semantic graph learning branch and a syntax-based latent graph learning branch, enabling the removal of irrelevant contexts and dependencies, respectively. We further propose a self-adaptive aggregation network that fuses the two graph learning branches, thereby achieving complementarity across diverse structures. Experimental results on four benchmarks demonstrate the effectiveness of our framework.
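Again as an illustration only: a minimal sketch of the two-branch design described above, assuming a mask-restricted attention branch for segments, a GCN-style layer over a dependency adjacency matrix for syntax, and a learned token-wise gate for the self-adaptive aggregation. All module names, shapes, and the gating fusion are hypothetical, not the authors' exact design.

```python
# Hypothetical two-branch sketch: segment-masked semantic attention,
# syntax-guided message passing, and gated fusion. Assumptions throughout;
# not the authors' code.
import torch
import torch.nn as nn

class S2GSLSketch(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        self.n_heads = n_heads
        # Semantic branch: attention restricted to within-segment tokens.
        self.sem_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Syntactic branch: one GCN-style layer over a dependency adjacency.
        self.syn_proj = nn.Linear(d_model, d_model)
        # Self-adaptive aggregation: token-wise gate between the branches.
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h, segment_mask, adj):
        # h: (B, L, d); segment_mask: (B, L, L) bool, True = blocked
        # (each token should attend to at least itself to avoid NaNs);
        # adj: (B, L, L) row-normalized dependency adjacency.
        mask = segment_mask.repeat_interleave(self.n_heads, dim=0)
        h_sem, _ = self.sem_attn(h, h, h, attn_mask=mask)
        h_syn = torch.relu(adj @ self.syn_proj(h))
        # Gate decides, per token and dimension, which branch dominates.
        g = torch.sigmoid(self.gate(torch.cat([h_sem, h_syn], dim=-1)))
        return g * h_sem + (1.0 - g) * h_syn
```

In this sketch the segment mask and the dependency adjacency are simply inputs; in practice they would come from a segment parser and a dependency parser, respectively.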
Co-authors
- Ruichu Cai 2
- Bingfeng Chen 2
- Zhifeng Hao 2
- Boyan Xu 2
- Qihan Ouyang 1