Xiangyang Luo
2024
Efficient Sparse Attention needs Adaptive Token Release
Chaoran Zhang | Lixin Zou | Dan Luo | Xiangyang Luo | Zihao Li | Min Tang | Chenliang Li
Findings of the Association for Computational Linguistics: ACL 2024
Prefix-diffusion: A Lightweight Diffusion Model for Diverse Image Captioning
Guisheng Liu | Yi Li | Zhengcong Fei | Haiyan Fu | Xiangyang Luo | Yanqing Guo
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
While impressive performance has been achieved in image captioning, the limited diversity of the generated captions and the large parameter scale remain major barriers to the real-world application of these systems. In this work, we propose a lightweight image captioning network combined with continuous diffusion, called Prefix-diffusion. To achieve diversity, we design an efficient method that injects prefix image embeddings into the denoising process of the diffusion model. To reduce trainable parameters, we employ a pre-trained model to extract image features and further design an extra mapping network. Prefix-diffusion is able to generate diverse captions with relatively few parameters, while maintaining the fluency and relevance of the captions thanks to the generative capabilities of the diffusion model. Our work paves the way for scaling up diffusion models for image captioning, and achieves promising performance compared with recent approaches.
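The conditioning idea in the abstract can be illustrated in a minimal sketch, assuming a single linear layer as the mapping network and a 4-token prefix: frozen image features are projected into the diffusion embedding space and prepended to the noised caption embeddings, so the denoiser can attend to them. The shapes, mapping, and function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch (not the paper's code): condition a diffusion
# denoiser on an image by prepending mapped image embeddings as a prefix.

def map_image_features(img_feat, W):
    # Hypothetical mapping network: one linear projection from the frozen
    # image encoder's feature space into the diffusion embedding space.
    return img_feat @ W

def build_denoiser_input(img_feat, noised_caption_emb, W):
    prefix = map_image_features(img_feat, W)           # (prefix_len, d)
    # Prefix tokens come first; the denoiser sees them alongside the
    # noised caption tokens at every denoising step.
    return np.concatenate([prefix, noised_caption_emb], axis=0)

rng = np.random.default_rng(0)
img_feat = rng.normal(size=(4, 512))    # 4 prefix tokens from the image encoder
W = rng.normal(size=(512, 256))         # assumed embedding dimension 256
caption = rng.normal(size=(20, 256))    # noised caption embeddings (20 tokens)
x = build_denoiser_input(img_feat, caption, W)
print(x.shape)  # (24, 256): prefix tokens followed by caption tokens
```

Only the mapping network's weights `W` would be trained; the image encoder stays frozen, which is what keeps the trainable parameter count small.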
2022
Multi-Attribute Controlled Text Generation with Contrastive-Generator and External-Discriminator
Guisheng Liu | Yi Li | Yanqing Guo | Xiangyang Luo | Bo Wang
Proceedings of the 29th International Conference on Computational Linguistics
Though existing research has achieved impressive results in controlled text generation, it focuses mainly on single-attribute control. However, in applications like automatic comments, the topic and sentiment need to be controlled simultaneously. In this work, we propose a new framework for multi-attribute controlled text generation. To achieve this, we design a contrastive-generator that can effectively generate texts with multiple attributes. To strengthen the alignment of the generated text with the desired attributes, we adopt an external-discriminator to distinguish whether the generated text holds the desired attributes. Moreover, we propose top-n weighted decoding to further improve the relevance of texts to attributes. Automated and human evaluations show that our framework achieves remarkable controllability in multi-attribute generation while keeping the text fluent and diverse. It also yields promising performance on zero-shot generation.
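The abstract does not spell out top-n weighted decoding, but one plausible reading can be sketched: restrict decoding to the language model's top-n next-token candidates and re-weight their probabilities by a hypothetical attribute scorer before sampling. The scorer, the exponential weighting, and `alpha` below are assumptions for illustration, not the authors' method.

```python
import math

# Illustrative sketch (not the paper's implementation) of re-weighting
# the top-n next-token candidates by an attribute score before sampling.

def top_n_weighted_decode(lm_probs, attribute_score, n=5, alpha=1.0):
    # lm_probs: dict token -> probability from the base language model.
    # attribute_score: callable token -> score in [0, 1] for the attribute.
    top = sorted(lm_probs.items(), key=lambda kv: kv[1], reverse=True)[:n]
    # Boost candidates the attribute scorer favors, then renormalize.
    weighted = {tok: p * math.exp(alpha * attribute_score(tok)) for tok, p in top}
    z = sum(weighted.values())
    return {tok: w / z for tok, w in weighted.items()}

lm = {"great": 0.30, "terrible": 0.25, "movie": 0.20, "the": 0.15, "a": 0.10}
positive = lambda t: 1.0 if t in {"great", "good"} else 0.0  # toy sentiment scorer
dist = top_n_weighted_decode(lm, positive, n=3, alpha=2.0)
best = max(dist, key=dist.get)
print(best)  # "great": the positive candidate dominates after re-weighting
```

Restricting the re-weighting to the top-n candidates keeps the output close to the base model's fluent choices while nudging it toward the desired attribute.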