Ming Zhao
2022
EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition
Zaijing Li | Fengxiao Tang | Ming Zhao | Yusen Zhu
Findings of the Association for Computational Linguistics: ACL 2022
Emotion recognition in conversation (ERC) aims to analyze the speaker’s state and identify their emotion in the conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. In order to effectively extract multi-modal information and the emotional tendency of the utterance, we propose a new structure named Emoformer, which extracts multi-modal emotion vectors from different modalities and fuses them with the sentence vector to form an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains the emotion classification results from a context analysis model. Through experiments on two benchmark datasets, our model shows better performance than the existing state-of-the-art models.
2021
On the Evaluation of Vision-and-Language Navigation Instructions
Ming Zhao | Peter Anderson | Vihan Jain | Su Wang | Alexander Ku | Jason Baldridge | Eugene Ie
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions. However, existing instruction generators have not been comprehensively evaluated, and the automatic evaluation metrics used to develop them have not been validated. Using human wayfinders, we show that these generators perform on par with or only slightly better than a template-based generator and far worse than human instructors. Furthermore, we discover that BLEU, ROUGE, METEOR and CIDEr are ineffective for evaluating grounded navigation instructions. To improve instruction evaluation, we propose an instruction-trajectory compatibility model that operates without reference instructions. Our model shows the highest correlation with human wayfinding outcomes when scoring individual instructions. For ranking instruction generation systems, if reference instructions are available we recommend using SPICE.
Co-authors
- Zaijing Li 1
- Fengxiao Tang 1
- Yusen Zhu 1
- Peter Anderson 1
- Vihan Jain 1