Shanlin Zhou
2025
KIA: Knowledge-Guided Implicit Vision-Language Alignment for Chest X-Ray Report Generation
Heng Yin | Shanlin Zhou | Pandong Wang | Zirui Wu | Yongtao Hao
Proceedings of the 31st International Conference on Computational Linguistics
Report generation (RG) faces challenges in understanding complex medical images and establishing cross-modal semantic alignment in radiology image-report pairs. Previous methods often overlook fine-grained cross-modal interaction, leading to an insufficient understanding of detailed information. Recently, various large multimodal models have been proposed for image-text tasks. However, such models still underperform on rare domain tasks like understanding complex medical images. To address these limitations, we develop a new framework of Knowledge-guided Implicit vision-language Alignment for radiology report generation, named KIA. To better understand medical reports and images and build alignment between them, we introduce multi-task implicit alignment, forming a comprehensive understanding of medical images and reports. Additionally, to further meet medical refinement requirements, we design novel masking strategies guided by medical knowledge to enhance pathological observation and anatomical landm
2023
ToViLaG: Your Visual-Language Generative Model is Also An Evildoer
Xinpeng Wang | Xiaoyuan Yi | Han Jiang | Shanlin Zhou | Zhihua Wei | Xing Xie
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Recent large-scale Visual-Language Generative Models (VLGMs) have achieved unprecedented improvement in multimodal image/text generation. However, these models might also generate toxic content, e.g., offensive text and pornographic images, raising significant ethical risks. Despite exhaustive studies on toxic degeneration of language models, this problem remains largely unexplored within the context of visual-language generation. This work delves into the propensity for toxicity generation and susceptibility to toxic data across various VLGMs. For this purpose, we built ToViLaG, a dataset comprising 32K co-toxic/mono-toxic text-image pairs and 1K innocuous but evocative texts that tend to stimulate toxicity. Furthermore, we propose WInToRe, a novel toxicity metric tailored to visual-language generation, which theoretically reflects different aspects of toxicity considering both input and output. On this basis, we benchmarked the toxicity of a diverse spectrum of VLGMs and discovered that some models do more evil than expected while others are more vulnerable to infection, underscoring the necessity of detoxifying VLGMs. Therefore, we develop an innovative bottleneck-based detoxification method. Our method reduces toxicity while maintaining comparable generation quality, providing a promising initial solution to this line of research.
2022
CHAE: Fine-Grained Controllable Story Generation with Characters, Actions and Emotions
Xinpeng Wang | Han Jiang | Zhihua Wei | Shanlin Zhou
Proceedings of the 29th International Conference on Computational Linguistics
Story generation has emerged as an interesting yet challenging NLP task in recent years. Some existing studies aim at generating fluent and coherent stories from keywords and outlines, while others attempt to control the global features of the story, such as emotion, style, and topic. However, these works focus on coarse-grained control of the story, neglecting control over its details, which is also crucial for the task. To fill the gap, this paper proposes a model for fine-grained control over the story, which allows the generation of customized stories with characters, corresponding actions, and emotions arbitrarily assigned. Extensive experimental results on both automatic and human evaluations show the superiority of our method. It has strong controllability to generate stories according to fine-grained personalized guidance, unveiling the effectiveness of our methodology. Our code is available at https://github.com/victorup/CHAE.
Co-authors
- Han Jiang 2
- Xinpeng Wang 2
- Zhihua Wei 2
- Yongtao Hao 1
- Pandong Wang 1