Yueqing Liang
2025
Taxonomy-Guided Zero-Shot Recommendations with LLMs
Yueqing Liang | Liangwei Yang | Chen Wang | Xiongxiao Xu | Philip S. Yu | Kai Shu
Proceedings of the 31st International Conference on Computational Linguistics
With the emergence of large language models (LLMs) and their ability to perform a wide variety of tasks, their application to recommender systems (RecSys) has shown promise. However, deploying LLMs in RecSys poses significant challenges, such as limited prompt length, unstructured item information, and unconstrained generation of recommendations, leading to suboptimal performance. To address these issues, we propose a novel Taxonomy-guided Recommendation (TaxRec) framework that empowers LLMs with category information in a systematic way. Specifically, TaxRec features a two-step process: one-time taxonomy categorization and LLM-based recommendation. In the one-time taxonomy categorization phase, we organize and categorize items, ensuring that item information is clear and structured. In the LLM-based recommendation phase, we feed the structured items into LLM prompts, achieving efficient token utilization and controlled feature generation. This enables more accurate, contextually relevant, zero-shot recommendations without domain-specific fine-tuning. Experimental results demonstrate that TaxRec significantly enhances recommendation quality compared to traditional zero-shot approaches, showcasing its efficacy as a personal recommender built on LLMs. Code is available at: https://github.com/yueqingliang1/TaxRec.
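The abstract describes a two-step pipeline (one-time taxonomy categorization, then LLM-based recommendation). The snippet below is a minimal sketch of that shape, assuming a generic text-in/text-out `llm` callable; the prompt wording and the helper names `categorize_items` and `recommend` are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Illustrative sketch of a taxonomy-guided, zero-shot LLM recommendation pipeline.
# `llm` is assumed to be any callable that takes a prompt string and returns text.
from collections import defaultdict


def categorize_items(llm, item_titles):
    """Step 1 (one-time): ask the LLM to assign each raw item title to a category."""
    taxonomy = defaultdict(list)
    for title in item_titles:
        category = llm(
            f"Assign a single category to the item '{title}'. "
            f"Answer with the category name only."
        ).strip()
        taxonomy[category].append(title)
    return taxonomy  # {category: [item, ...]}


def recommend(llm, taxonomy, user_history, k=10):
    """Step 2: prompt the LLM with the structured (categorized) candidate items."""
    structured = "\n".join(
        f"{cat}: {', '.join(items)}" for cat, items in taxonomy.items()
    )
    prompt = (
        f"User history: {', '.join(user_history)}\n"
        f"Candidate items, grouped by category:\n{structured}\n"
        f"Recommend the top {k} items from the candidates above, one per line."
    )
    lines = [line.strip() for line in llm(prompt).splitlines() if line.strip()]
    return lines[:k]
```

Because the categorization step runs once per catalog rather than per request, the per-recommendation prompt only carries the compact, structured item list, which is how the abstract's "efficient token utilization" would be realized in this sketch.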
Piecing It All Together: Verifying Multi-Hop Multimodal Claims
Haoran Wang | Aman Rangapur | Xiongxiao Xu | Yueqing Liang | Haroon Gharwi | Carl Yang | Kai Shu
Proceedings of the 31st International Conference on Computational Linguistics
Existing claim verification datasets often do not require systems to perform complex reasoning or effectively interpret multimodal evidence. To address this, we introduce a new task: multi-hop multimodal claim verification. This task challenges models to reason over multiple pieces of evidence from diverse sources, including text, images, and tables, and determine whether the combined multimodal evidence supports or refutes a given claim. To study this task, we construct MMCV, a large-scale dataset comprising 15k multi-hop claims paired with multimodal evidence, generated and refined using large language models, with additional input from human feedback. We show that MMCV is challenging even for the latest state-of-the-art multimodal large language models, especially as the number of reasoning hops increases. Additionally, we establish a human performance benchmark on a subset of MMCV. We hope this dataset and its evaluation task will encourage future research in multimodal multi-hop claim verification.
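For readers unfamiliar with the task format, the sketch below shows one plausible shape of a multi-hop multimodal example: a claim, mixed-modality evidence pieces, a hop count, and a support/refute label. The field names and schema are assumptions for illustration and need not match the released MMCV data.

```python
# Hypothetical record layout for a multi-hop multimodal claim-verification example.
from dataclasses import dataclass
from typing import List, Literal


@dataclass
class Evidence:
    modality: Literal["text", "image", "table"]
    content: str  # passage text, image path/URL, or serialized table


@dataclass
class ClaimExample:
    claim: str
    evidence: List[Evidence]  # one piece per reasoning hop, possibly mixed modalities
    num_hops: int
    label: Literal["supported", "refuted"]


example = ClaimExample(
    claim="The stadium pictured hosted the final of the tournament won by the team in the table.",
    evidence=[
        Evidence("image", "images/stadium_001.jpg"),
        Evidence("table", "tables/tournament_results.csv"),
        Evidence("text", "A news sentence naming the venue and date of the final."),
    ],
    num_hops=3,
    label="supported",
)
```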