Xianquan Wang
2024
Dynamic Multi-granularity Attribution Network for Aspect-based Sentiment Analysis
Yanjiang Chen | Kai Zhang | Feng Hu | Xianquan Wang | Ruikang Li | Qi Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity of a specific aspect within a given sentence. Most existing methods predominantly leverage semantic or syntactic information based on attention scores, which are susceptible to interference from irrelevant contexts and often lack sentiment knowledge at a data-specific level. In this paper, we propose a novel Dynamic Multi-granularity Attribution Network (DMAN) from the perspective of attribution. Initially, we leverage Integrated Gradients to dynamically extract attribution scores for each token, which contain underlying reasoning knowledge for sentiment analysis. Subsequently, we aggregate attribution representations across multiple semantic granularities in natural language, fostering a deeper understanding of the semantics. Finally, we integrate attribution scores with syntactic information to more accurately capture the relationships between aspects and their relevant contexts during sentence understanding. Extensive experiments on five benchmark datasets demonstrate the effectiveness of our proposed method.
I-AM-G: Interest Augmented Multimodal Generator for Item Personalization
Xianquan Wang | Likang Wu | Shukang Yin | Zhi Li | Yanjiang Chen | Hufeng Hufeng | Yu Su | Qi Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
The emergence of personalized generation has made it possible to create texts or images that meet the unique needs of users. Recent advances mainly focus on style or scene transfer based on given keywords. However, in e-commerce and recommender systems, it remains an almost untouched area to explore users' historical interactions, automatically mine user interests with semantic associations, and create item representations that closely align with individual user interests. In this paper, we propose a brand new framework called **I**nterest-**A**ugmented **M**ultimodal **G**enerator (**I-AM-G**). The framework first extracts tags from the multimodal information of items that the user has interacted with, and the most frequently occurring ones are used to rewrite the text description of the item. Then, the framework uses a decoupled text-to-text and image-to-image retriever to search for the top-K similar item text and image embeddings from the item pool. Finally, the Attention module for user interests fuses the retrieved information in a cross-modal manner and further guides the personalized generation process in collaboration with the rewritten text. We conducted extensive and comprehensive experiments to demonstrate that our framework can effectively generate results aligned with user preferences, which potentially provides a new paradigm of **Rewrite and Retrieve** for personalized generation.
Co-authors
- Yanjiang Chen 2
- Qi Liu 2
- Kai Zhang 1
- Feng Hu 1
- Ruikang Li 1