Kaize Shi
2025
LLaMA-E: Empowering E-commerce Authoring with Object-Interleaved Instruction Following
Kaize Shi | Xueyao Sun | Dingxian Wang | Yinlin Fu | Guandong Xu | Qing Li
Proceedings of the 31st International Conference on Computational Linguistics
E-commerce authoring entails creating engaging, diverse, and targeted content to enhance preference elicitation and the retrieval experience. While Large Language Models (LLMs) have revolutionized content generation, they often fall short in e-commerce applications due to their limited memorization of domain-specific features. This paper proposes LLaMA-E, a set of unified e-commerce authoring models that address the contextual preferences of customers, sellers, and platforms, the essential objects in e-commerce operation. We design an instruction set derived from the tasks of ad generation, query-enhanced product title rewriting, product classification, purchase intent speculation, and general e-commerce Q&A. The instruction formulation ensures interleaved coverage of the presented and required object features, allowing the alignment of base models to parameterize e-commerce knowledge comprehensively. The proposed LLaMA-E models achieve state-of-the-art evaluation performance and exhibit an advantage in zero-shot practical applications. To our knowledge, this is the first LLM tailored to empower authoring applications with comprehensive scenario understanding by integrating features focused on the participating objects.
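To illustrate the object-interleaved instruction formulation the abstract describes, below is a minimal Python sketch of how a training record might interleave seller-side (presented) and customer-side (required) features across the five tasks. The field names, task phrasings, and prompt layout are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of an object-interleaved instruction record;
# the schema below is assumed for illustration, not taken from the paper.

TASKS = [
    "ad generation",
    "query-enhanced product title rewriting",
    "product classification",
    "purchase intent speculation",
    "general e-commerce Q&A",
]

def build_instruction(task: str, seller: dict, customer: dict) -> dict:
    """Interleave seller-side (presented) and customer-side (required)
    object features into one instruction-following record."""
    prompt = (
        f"Task: {task}\n"
        f"Product: {seller['title']} | {seller['attributes']}\n"
        f"Customer intent: {customer['query']}\n"
        "Write the requested content for the e-commerce platform."
    )
    return {"instruction": prompt, "output": ""}  # output filled during annotation

example = build_instruction(
    TASKS[0],
    seller={"title": "wireless earbuds", "attributes": "noise-cancelling, 24h battery"},
    customer={"query": "earbuds for running"},
)
print(example["instruction"])
```

Covering each task with features from all three objects (customer, seller, platform) is what would let a base model parameterize the domain knowledge comprehensively during alignment.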
2023
AMR-TST: Abstract Meaning Representation-based Text Style Transfer
Kaize Shi | Xueyao Sun | Li He | Dingxian Wang | Qing Li | Guandong Xu
Findings of the Association for Computational Linguistics: ACL 2023
Abstract Meaning Representation (AMR) is a semantic representation that can enhance natural language generation (NLG) by providing a logical semantic input. In this paper, we propose AMR-TST, an AMR-based text style transfer (TST) technique. AMR-TST converts the source text to an AMR graph and generates the transferred text from the AMR graph modified by a TST policy named style rewriting. Our method combines the explainability of explicit TST methods with the diversity of implicit ones. Experiments show that the proposed method achieves state-of-the-art results compared with baseline models in both automatic and human evaluations. Qualitative evaluation of the generated transferred text shows that AMR-TST has significant advantages in preserving semantic features and reducing hallucinations. To the best of our knowledge, this work is the first to apply an AMR method focusing on node-level features to the TST task.
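To make the text-to-AMR-to-text pipeline concrete, here is a minimal Python sketch of the node-level style-rewriting step using the `penman` library for AMR graph handling. The style lexicon and the rewriting rule are toy assumptions standing in for the paper's learned policy, and the parsing/generation models that bracket this step are stubbed out.

```python
# Minimal sketch of the AMR-TST pipeline described in the abstract:
# text -> AMR graph -> node-level style rewriting -> text.
# The lexicon and rule below are toy assumptions, not the paper's policy.
import penman

# Assumed toy lexicon: swap style-bearing concept nodes (e.g., sentiment).
STYLE_LEXICON = {"good-02": "bad-02", "bad-02": "good-02"}

def style_rewrite(amr_str: str) -> str:
    """Apply node-level style rewriting: replace style-bearing
    concept nodes in the AMR graph using the lexicon."""
    graph = penman.decode(amr_str)
    rewritten = [
        (s, role, STYLE_LEXICON.get(t, t)) if role == ":instance" else (s, role, t)
        for s, role, t in graph.triples
    ]
    return penman.encode(penman.Graph(rewritten))

# AMR for "the movie is good"; in practice a text-to-AMR parser produces this.
source_amr = "(g / good-02 :ARG1 (m / movie))"
print(style_rewrite(source_amr))  # -> (g / bad-02 :ARG1 (m / movie))
# An AMR-to-text generator would then verbalize the rewritten graph.
```

Operating on concept nodes rather than surface tokens is what gives the approach its explainability: the rewrite is an inspectable graph edit, while the generator supplies diverse surface realizations.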
Co-authors
- Qing Li 2
- Xueyao Sun 2
- Dingxian Wang 2
- Guandong Xu 2
- Yinlin Fu 1
- Li He 1