Shi-ang Qi
2024
PepRec: Progressive Enhancement of Prompting for Recommendation
Yakun Yu | Shi-ang Qi | Baochun Li | Di Niu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
With large language models (LLMs) achieving remarkable breakthroughs in natural language processing (NLP), researchers have recently explored the potential of LLMs for recommendation systems by converting input data into textual sentences through prompt templates. Although semantic knowledge from LLMs can enrich the content information of items, to date it remains hard for them to achieve performance comparable to traditional deep learning recommendation models, partly due to their limited ability to leverage collaborative filtering. In this paper, we propose a novel training-free prompting framework, PepRec, which aims to capture knowledge from both content-based filtering and collaborative filtering to boost recommendation performance with LLMs, while providing interpretation for the recommendation. Experiments on two real-world datasets from different domains show that PepRec significantly outperforms various traditional deep learning recommendation models and prompt-based recommendation systems.
2023
ConKI: Contrastive Knowledge Injection for Multimodal Sentiment Analysis
Yakun Yu | Mingjun Zhao | Shi-ang Qi | Feiran Sun | Baoxun Wang | Weidong Guo | Xiaoli Wang | Lei Yang | Di Niu
Findings of the Association for Computational Linguistics: ACL 2023
Multimodal sentiment analysis leverages multimodal signals to detect the sentiment of a speaker. Previous approaches concentrate on performing multimodal fusion and representation learning based on general knowledge obtained from pretrained models, neglecting the effect of domain-specific knowledge. In this paper, we propose Contrastive Knowledge Injection (ConKI) for multimodal sentiment analysis, where specific-knowledge representations for each modality can be learned together with general-knowledge representations via knowledge injection based on an adapter architecture. In addition, ConKI uses a hierarchical contrastive learning procedure performed between knowledge types within every single modality, across modalities within each sample, and across samples to facilitate the effective learning of the proposed representations, thereby improving multimodal sentiment predictions. Experiments on three popular multimodal sentiment analysis benchmarks show that ConKI outperforms all prior methods on a variety of performance metrics.
Co-authors
- Yakun Yu 2
- Di Niu 2
- Baochun Li 1
- Mingjun Zhao 1
- Feiran Sun 1