Yancheng Wang
2025
AGCL: Aspect Graph Construction and Learning for Aspect-level Sentiment Classification
Zhongquan Jian | Daihang Wu | Shaopan Wang | Yancheng Wang | Junfeng Yao | Meihong Wang | Qingqiang Wu
Proceedings of the 31st International Conference on Computational Linguistics
Prior studies on Aspect-level Sentiment Classification (ALSC) emphasize modeling interrelationships among aspects and contexts but overlook the crucial role of aspects themselves as essential domain knowledge. To this end, we propose AGCL, a novel Aspect Graph Construction and Learning method, aimed at furnishing the model with finely tuned aspect information to bolster its task-understanding ability. AGCL’s pivotal innovations reside in Aspect Graph Construction (AGC) and Aspect Graph Learning (AGL), where AGC harnesses intrinsic aspect connections to construct the domain aspect graph, and AGL then iteratively updates the introduced aspect graph to enhance its domain expertise, making it more suitable for the ALSC task. Hence, this domain aspect graph can serve as a bridge connecting unseen aspects with seen aspects, thereby enhancing the model’s generalization capability. Experimental results on three widely used datasets demonstrate the significance of aspect information for ALSC and highlight AGL’s superiority in aspect learning, greatly surpassing state-of-the-art baselines. Code is available at https://github.com/jian-projects/agcl.
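The abstract does not specify how AGC derives "intrinsic aspect connections"; one plausible, minimal instance is a co-occurrence graph in which aspects mentioned in the same sentence are linked. The sketch below is an illustrative assumption, not the paper's actual construction (see the linked repository for that); `build_aspect_graph` and its input format are hypothetical.

```python
# Hedged sketch of one way to build a domain aspect graph: connect
# aspects that co-occur in the same sentence, weighting each edge by
# its co-occurrence count. This is an assumption for illustration,
# not AGCL's published construction.
from collections import defaultdict
from itertools import combinations

def build_aspect_graph(reviews):
    """reviews: a list of aspect-term lists, one per sentence.
    Returns a dict mapping (aspect_a, aspect_b) -> co-occurrence count."""
    graph = defaultdict(int)
    for aspects in reviews:
        # Deduplicate within a sentence, then link every aspect pair.
        for a, b in combinations(sorted(set(aspects)), 2):
            graph[(a, b)] += 1
    return dict(graph)

g = build_aspect_graph([["food", "service"],
                        ["food", "price"],
                        ["service", "food"]])
# Edges such as ("food", "service") accumulate weight across sentences.
```

A graph of this kind could then be refined iteratively (the AGL step) so that unseen aspects inherit signal from connected seen aspects.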
2024
RecMind: Large Language Model Powered Agent For Recommendation
Yancheng Wang | Ziyan Jiang | Zheng Chen | Fan Yang | Yingxue Zhou | Eunah Cho | Xing Fan | Yanbin Lu | Xiaojiang Huang | Yingzhen Yang
Findings of the Association for Computational Linguistics: NAACL 2024
While recommendation systems (RS) have advanced significantly through deep learning, current RS approaches usually train and fine-tune models on task-specific datasets, limiting their generalizability to new recommendation tasks and their ability to leverage external knowledge due to model scale and data size constraints. Thus, we designed an LLM-powered autonomous recommender agent, RecMind, which is capable of leveraging external knowledge and utilizing tools with careful planning to provide zero-shot personalized recommendations. We propose a Self-Inspiring algorithm to improve its planning ability: at each intermediate step, the LLM “self-inspires” to consider all previously explored states to plan for the next step. This mechanism greatly improves the model’s ability to comprehend and utilize historical information in planning for recommendation. We evaluate RecMind’s performance in various recommendation scenarios. Our experiments show that RecMind outperforms existing zero/few-shot LLM-based recommendation baseline methods on various tasks and achieves performance comparable to the fully trained recommendation model P5.
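The Self-Inspiring idea described above can be sketched as a planning loop whose prompt exposes every previously explored state, not just the current path. Everything below is an assumption for illustration: `call_llm` is a stub standing in for a real LLM call, and the prompt format is invented, not RecMind's actual implementation.

```python
# Minimal sketch of a Self-Inspiring planning loop: at each step the
# planner conditions on ALL previously explored states. The stub LLM
# and prompt layout are assumptions, not RecMind's real code.

def call_llm(prompt):
    """Stub standing in for an LLM call; returns a canned next step
    derived from how many prior states the prompt already lists."""
    return f"step-{prompt.count('State:') + 1}"

def self_inspiring_plan(task, max_steps=3):
    explored = []  # every state visited so far, across all branches
    for _ in range(max_steps):
        # The prompt includes the full history so the model can
        # "self-inspire" from earlier explored states.
        history = "\n".join(f"State: {s}" for s in explored)
        prompt = f"Task: {task}\n{history}\nNext step:"
        explored.append(call_llm(prompt))
    return explored

plan = self_inspiring_plan("recommend a movie")
```

The key design choice, per the abstract, is that the history passed to the model is the whole set of explored states rather than only the states on the current reasoning path.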