Jianxin Wang
2024
MARE: Multi-Aspect Rationale Extractor on Unsupervised Rationale Extraction
Han Jiang | Junwen Duan | Zhe Qu | Jianxin Wang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Unsupervised rationale extraction aims to extract text snippets that support model predictions without explicit rationale annotation. Researchers have made many efforts to solve this task. Previous works often encode each aspect independently, which may limit their ability to capture meaningful internal correlations between aspects. While there has been significant work on mitigating spurious correlations, our approach focuses on leveraging the beneficial internal correlations to improve multi-aspect rationale extraction. In this paper, we propose a Multi-Aspect Rationale Extractor (MARE) to explain and predict multiple aspects simultaneously. Concretely, we propose a Multi-Aspect Multi-Head Attention (MAMHA) mechanism based on hard deletion to encode multiple text chunks simultaneously. Furthermore, multiple special tokens are prepended to the text, each corresponding to one aspect. Finally, multi-task training is deployed to reduce the training overhead. Experimental results on two unsupervised rationale extraction benchmarks show that MARE achieves state-of-the-art performance. Ablation studies further demonstrate the effectiveness of our method. Our code is available at https://github.com/CSU-NLP-Group/MARE.
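The core input construction can be illustrated with a minimal sketch: one special token per aspect is prepended so a single encoder pass can serve all aspects, each aspect's prediction reading from its own token. The token format and aspect names below are hypothetical illustrations, not MARE's actual vocabulary.

```python
def build_multi_aspect_input(text, aspects):
    """Prepend one hypothetical special token per aspect to the input text.

    In a MARE-style setup, each special token's final hidden state would
    attend only to the (non-deleted) tokens selected for its aspect and
    drive that aspect's prediction; this sketch shows the input side only.
    """
    specials = [f"[ASP_{a.upper()}]" for a in aspects]
    return " ".join(specials) + " " + text


# Example with beer-review-style aspects (illustrative):
aspects = ["appearance", "aroma", "palate"]
inp = build_multi_aspect_input("pours a hazy gold with a floral nose", aspects)
```

This is one way all aspects share a single forward pass, which is what makes the multi-task training inexpensive relative to running one extractor per aspect.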
Multi-modal Concept Alignment Pre-training for Generative Medical Visual Question Answering
Quan Yan | Junwen Duan | Jianxin Wang
Findings of the Association for Computational Linguistics: ACL 2024
Medical Visual Question Answering (Med-VQA) seeks to accurately respond to queries regarding medical images, a task particularly challenging for open-ended questions. This study unveils the Multi-modal Concept Alignment Pre-training (MMCAP) approach for generative Med-VQA, leveraging a knowledge graph sourced from medical image-caption datasets and the Unified Medical Language System. MMCAP advances the fusion of visual and textual medical knowledge via a graph attention network and a transformer decoder. Additionally, it incorporates a Type Conditional Prompt in the fine-tuning phase, markedly boosting the accuracy and relevance of answers to open-ended questions. Our tests on benchmark datasets illustrate MMCAP’s superiority over existing methods, demonstrating its high efficiency in data-limited settings and effective knowledge-image alignment capability.
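The Type Conditional Prompt idea can be sketched in miniature: condition the answer decoder on the question type by prepending a type marker to the question. The marker format below is a hypothetical illustration, not MMCAP's actual prompt tokens.

```python
def add_type_prompt(question, q_type):
    """Prepend a hypothetical type marker so the decoder is conditioned on
    the question type (e.g. modality, organ, abnormality) during fine-tuning."""
    return f"[TYPE={q_type.upper()}] {question}"


prompted = add_type_prompt("what organ is shown in this image?", "organ")
```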
2023
CDA: A Contrastive Data Augmentation Method for Alzheimer’s Disease Detection
Junwen Duan | Fangyuan Wei | Jin Liu | Hongdong Li | Tianming Liu | Jianxin Wang
Findings of the Association for Computational Linguistics: ACL 2023
Alzheimer’s Disease (AD) is a neurodegenerative disorder that significantly impacts a patient’s ability to communicate and organize language. Traditional methods for detecting AD, such as physical screening or neurological testing, can be challenging and time-consuming. Recent research has explored the use of deep learning techniques to distinguish AD patients from non-AD patients by analysing spontaneous speech. These models, however, are limited by the availability of data. To address this, we propose a novel contrastive data augmentation method, which simulates the cognitive impairment of a patient by randomly deleting a proportion of text from the transcript to create negative samples. The corrupted samples are expected to score worse than the originals by a margin. Experimental results on the benchmark ADReSS Challenge dataset demonstrate that our model achieves the best performance among language-based models.
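The negative-sampling step described above can be sketched as follows: randomly delete a proportion of tokens from a transcript to produce a corrupted sample, which a margin-based objective (e.g. a hinge loss of the form max(0, margin − (s_orig − s_corr))) would then push to score worse than the original. The deletion ratio and function below are illustrative assumptions, not the paper's exact configuration.

```python
import random


def corrupt_transcript(text, delete_ratio=0.2, seed=None):
    """Create a corrupted negative sample by randomly deleting a proportion
    of tokens, loosely simulating degraded language organisation.

    delete_ratio is a hypothetical hyperparameter; each token is dropped
    independently with that probability.
    """
    rng = random.Random(seed)
    tokens = text.split()
    kept = [t for t in tokens if rng.random() >= delete_ratio]
    return " ".join(kept)


# Example: a reproducible corruption of a short transcript
negative = corrupt_transcript("the boy is reaching for the cookie jar", seed=0)
```

Random deletion is cheap and label-free, which suits the low-data regime the abstract highlights: every real transcript yields arbitrarily many contrastive negatives.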
Co-authors
- Junwen Duan 3
- Han Jiang 1
- Zhe Qu 1
- Fangyuan Wei 1
- Jin Liu 1