RecGPT: Generative Pre-training for Text-based Recommendation

Hoang Ngo, Dat Quoc Nguyen


Abstract
We present the first domain-adapted and fully trained large language model, RecGPT-7B, and its instruction-following variant, RecGPT-7B-Instruct, for text-based recommendation. Experimental results on rating prediction and sequential recommendation tasks show that our model, RecGPT-7B-Instruct, outperforms previous strong baselines. We are releasing our RecGPT models as well as their pre-training and fine-tuning datasets to facilitate future research and downstream applications in text-based recommendation. Hugging Face links to our RecGPT models and datasets are available at: https://github.com/VinAIResearch/RecGPT
Anthology ID: 2024.acl-short.29
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 302–313
URL: https://aclanthology.org/2024.acl-short.29
Cite (ACL): Hoang Ngo and Dat Quoc Nguyen. 2024. RecGPT: Generative Pre-training for Text-based Recommendation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302–313, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): RecGPT: Generative Pre-training for Text-based Recommendation (Ngo & Nguyen, ACL 2024)
PDF: https://aclanthology.org/2024.acl-short.29.pdf
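
Since the abstract states that the RecGPT checkpoints are released through Hugging Face, a minimal loading sketch using the transformers library follows. This is a sketch under stated assumptions, not the authors' documented usage: the model identifier "vinai/RecGPT-7B-Instruct" and the prompt text are illustrative guesses; consult the GitHub repository above for the exact model IDs and the expected prompt format.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face model ID; see https://github.com/VinAIResearch/RecGPT
# for the exact identifiers of the released checkpoints.
model_id = "vinai/RecGPT-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B weights in bf16 fit on a single ~16 GB GPU
    device_map="auto",           # requires the accelerate package
)

# Hypothetical instruction-style recommendation prompt; the real template
# used for RecGPT-7B-Instruct is documented in the repository above.
prompt = "A user rated 'Dune' 5/5 and 'Foundation' 4/5. Predict the user's rating for 'Hyperion'."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))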