Min Wang
2021
Unimodal and Crossmodal Refinement Network for Multimodal Sequence Fusion
Xiaobao Guo | Adams Kong | Huan Zhou | Xianfeng Wang | Min Wang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Effective unimodal representation and complementary crossmodal representation fusion are both important in multimodal representation learning. Prior works often modulate one modality's features with another's directly, thus underutilizing both unimodal and crossmodal representation refinement and creating a performance bottleneck. In this paper, a Unimodal and Crossmodal Refinement Network (UCRN) is proposed to enhance both unimodal and crossmodal representations. Specifically, to improve unimodal representations, a unimodal refinement module is designed to refine modality-specific learning by iteratively updating the distribution with transformer-based attention layers. Self-quality improvement layers then generate the desired weighted representations progressively. Subsequently, the unimodal representations are projected into a common latent space, regularized by a multimodal Jensen-Shannon divergence loss for better crossmodal refinement. Lastly, a crossmodal refinement module is employed to integrate all information. Through hierarchical exploration of unimodal, bimodal, and trimodal interactions, UCRN is highly robust against missing modalities and noisy data. Experimental results on the MOSI and MOSEI datasets show that the proposed UCRN outperforms recent state-of-the-art techniques, and its robustness is highly desirable in real multimodal sequence fusion scenarios. Code will be shared publicly.
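The regularizer the abstract describes, projecting each unimodal representation into a common latent space constrained by a multimodal Jensen-Shannon divergence, can be sketched as below. This is a minimal PyTorch illustration, not the paper's released implementation; the linear projection layers, feature dimensions, and softmax normalization of representations into distributions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSpaceProjector(nn.Module):
    """Project each modality's representation into a shared latent space
    (hypothetical sketch; the paper's projection may differ)."""
    def __init__(self, input_dims, latent_dim):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, latent_dim) for d in input_dims)

    def forward(self, feats):
        return [p(f) for p, f in zip(self.proj, feats)]

def multimodal_js_loss(reps):
    """JS divergence among modalities: mean KL from each softmax-normalized
    representation to the mixture of all of them."""
    probs = [F.softmax(r, dim=-1) for r in reps]
    mixture = torch.stack(probs).mean(dim=0)
    return sum(F.kl_div(mixture.log(), p, reduction="batchmean")
               for p in probs) / len(probs)

# Usage with three unimodal summaries (text, audio, vision); dims illustrative.
t, a, v = torch.randn(8, 300), torch.randn(8, 74), torch.randn(8, 35)
projector = CommonSpaceProjector([300, 74, 35], latent_dim=128)
loss = multimodal_js_loss(projector([t, a, v]))
```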
2018
Yuan at SemEval-2018 Task 1: Tweets Emotion Intensity Prediction using Ensemble Recurrent Neural Network
Min Wang | Xiaobing Zhou
Proceedings of the 12th International Workshop on Semantic Evaluation
We apply LSTM and BiLSTM models to emotion intensity prediction. We participate only in the third subtask of Task 1: Affect in Tweets. Our system ranks 6th among all teams.
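For illustration, here is a minimal BiLSTM regressor in the spirit of this system description. The vocabulary size, dimensions, mean-pooling readout, and sigmoid output are assumptions, not the submission's actual configuration.

```python
import torch
import torch.nn as nn

class BiLSTMIntensity(nn.Module):
    """Minimal BiLSTM regressor for tweet emotion intensity (sketch;
    all hyperparameters are illustrative assumptions)."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):  # tokens: (batch, seq_len) word ids
        h, _ = self.lstm(self.embed(tokens))
        # Mean-pool over time, then squash to an intensity score in [0, 1].
        return torch.sigmoid(self.out(h.mean(dim=1))).squeeze(-1)

# Usage: score a batch of 4 tweets, each 30 tokens long.
model = BiLSTMIntensity()
scores = model(torch.randint(0, 20000, (4, 30)))
```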
2017
YNUDLG at IJCNLP-2017 Task 5: A CNN-LSTM Model with Attention for Multi-choice Question Answering in Examinations
Min Wang | Qingxun Liu | Peng Ding | Yongbin Li | Xiaobing Zhou
Proceedings of the IJCNLP 2017, Shared Tasks
In this paper, we first apply convolutional neural networks (CNNs) to learn joint representations of question-answer pairs, then feed these joint representations into a long short-term memory (LSTM) network with attention to model the answer sequence of each question and label the matching quality of each answer. We also incorporate external knowledge by training Word2Vec on Flashcards data, which yields more compact embeddings. Experimental results show that our method achieves better or comparable performance compared with the baseline system. The proposed approach achieves accuracies of 0.39 and 0.42 on the English validation and test sets, respectively.
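A hedged sketch of the described pipeline follows: a CNN encodes each question-answer pair, and an LSTM with a simple attention layer scores the sequence of candidate answers. All dimensions, the max-pooling, and the attention form are illustrative assumptions rather than the authors' exact model.

```python
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    """CNN-LSTM with attention for multi-choice QA (hypothetical sketch)."""
    def __init__(self, embed_dim=100, channels=64, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.score = nn.Linear(hidden, 1)

    def forward(self, qa_pairs):
        # qa_pairs: (batch, num_answers, seq_len, embed_dim) of embedded QA text.
        b, n, t, d = qa_pairs.shape
        x = qa_pairs.view(b * n, t, d).transpose(1, 2)      # (B*N, D, T) for Conv1d
        joint = torch.relu(self.conv(x)).max(dim=2).values  # max-pool over time
        h, _ = self.lstm(joint.view(b, n, -1))              # over the answer sequence
        w = torch.softmax(self.attn(h), dim=1)              # attention over answers
        return self.score(h * w).squeeze(-1)                # matching score per answer

# Usage: 2 questions, 4 candidate answers each, 20 tokens per QA pair.
model = CNNLSTMAttention()
scores = model(torch.randn(2, 4, 20, 100))
```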
Co-authors
- Xiaobing Zhou 2
- Qingxun Liu 1
- Peng Ding 1
- Yongbin Li 1
- Xiaobao Guo 1
- Adams Kong 1
- Huan Zhou 1
- Xianfeng Wang 1