Weiguo Gao
2023
PAI at SemEval-2023 Task 2: A Universal System for Named Entity Recognition with External Entity Information
Long Ma | Kai Lu | Tianbo Che | Hailong Huang | Weiguo Gao | Xuan Li
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
The MultiCoNER II task aims to detect complex, ambiguous, and fine-grained named entities in low-context situations and in noisy scenarios, such as spelling mistakes and typos, across multiple languages. The task poses significant challenges due to the scarcity of contextual information, the high granularity of the entities (up to 33 classes), and the interference of noisy data. To address these issues, our team PAI proposes a universal Named Entity Recognition (NER) system that integrates external entity information to improve performance. Specifically, our system retrieves entities with their properties from a knowledge base (i.e., Wikipedia) for a given text, then concatenates the entity information with the input sentence and feeds it into Transformer-based models. Our system won 2 first places, 4 second places, and 1 third place out of 13 tracks. The code is publicly available at https://github.com/diqiuzhuanzhuan/semeval-2023.
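A minimal sketch of the retrieve-and-concatenate idea described in the abstract, assuming a hypothetical `retrieve_entities` knowledge-base lookup and a generic XLM-RoBERTa token classifier; the actual PAI system (see the linked repository) may differ in model choice, retrieval pipeline, and input formatting:

```python
# Sketch: augment an NER input with external entity information by feeding
# the sentence and the retrieved entity descriptions as a sentence pair.
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=33  # MultiCoNER II fine-grained classes
)

def retrieve_entities(sentence: str) -> list[str]:
    """Hypothetical knowledge-base lookup: returns descriptions of entities
    (e.g. Wikipedia titles with properties) matched against the sentence."""
    return ["Steve Jobs: co-founder of Apple Inc."]  # placeholder result

def build_input(sentence: str):
    # Concatenate retrieved entity information after the input sentence,
    # using the tokenizer's built-in sentence-pair separator.
    entity_info = " ; ".join(retrieve_entities(sentence))
    return tokenizer(sentence, entity_info, return_tensors="pt",
                     truncation=True, max_length=256)

logits = model(**build_input("steve jobs founded apple")).logits
print(logits.shape)  # (1, sequence_length, 33): per-token class scores
```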
2021
DialogueTRM: Exploring Multi-Modal Emotional Dynamics in a Conversation
Yuzhao Mao | Guang Liu | Xiaojie Wang | Weiguo Gao | Xuan Li
Findings of the Association for Computational Linguistics: EMNLP 2021
Emotion dynamics formulates principles that explain emotional fluctuation during conversations. Recent studies explore emotion dynamics through self and inter-personal dependencies but ignore the temporal and spatial dependencies that arise in multi-modal conversations. To address this issue, we extend the concept of emotion dynamics to multi-modal settings and propose a Dialogue Transformer that simultaneously models intra-modal and inter-modal emotion dynamics. Specifically, intra-modal emotion dynamics not only captures the temporal dependency but also satisfies the context preference within each single modality, while inter-modal emotion dynamics handles multi-grained spatial dependency across all modalities. Our models outperform the state of the art by a margin of 4%-16% on most metrics across three benchmark datasets.
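A rough PyTorch sketch of the intra-/inter-modal split the abstract describes: one temporal Transformer encoder per modality (intra-modal), then attention across the modality axis of each utterance (inter-modal). Dimensions, modality names, fusion, and the 7-class head are illustrative assumptions, not the paper's exact DialogueTRM architecture:

```python
import torch
import torch.nn as nn

class MultiModalEmotionEncoder(nn.Module):
    def __init__(self, dim=256, heads=4, modalities=("text", "audio", "vision")):
        super().__init__()
        layer = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        # Intra-modal: a temporal encoder per modality over the turn sequence.
        self.intra = nn.ModuleDict({m: nn.TransformerEncoder(layer(), 2)
                                    for m in modalities})
        # Inter-modal: attention across modalities within each utterance.
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 7)  # e.g. 7 emotion classes

    def forward(self, feats):
        # feats[m]: (batch, turns, dim) utterance features for modality m
        intra_out = [self.intra[m](x) for m, x in feats.items()]
        b, t, d = intra_out[0].shape
        # Stack modalities: (batch * turns, n_modalities, dim)
        stacked = torch.stack(intra_out, dim=2).reshape(b * t, -1, d)
        fused, _ = self.inter(stacked, stacked, stacked)
        # Pool over modalities, then classify each turn's emotion.
        return self.classifier(fused.mean(dim=1).reshape(b, t, d))

feats = {m: torch.randn(2, 5, 256) for m in ("text", "audio", "vision")}
print(MultiModalEmotionEncoder()(feats).shape)  # torch.Size([2, 5, 7])
```

The two-stage layout mirrors the abstract's framing: temporal dependency is resolved separately inside each modality before any cross-modal mixing takes place.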