2022
pdf
bib
abs
VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding
Dou Hu
|
Xiaolong Hou
|
Xiyang Du
|
Mengyuan Zhou
|
Lianxin Jiang
|
Yang Mo
|
Xiaofeng Shi
Findings of the Association for Computational Linguistics: EMNLP 2022
Pre-trained language models have been widely applied to standard benchmarks. Due to the flexibility of natural language, the resources available in a particular domain are often too limited to support learning precise representations. To address this issue, we propose a novel Transformer-based language model named VarMAE for domain-adaptive language understanding. Under the masked autoencoding objective, we design a context uncertainty learning module that encodes each token's context into a smooth latent distribution. The module can produce diverse and well-formed contextual representations. Experiments on science- and finance-domain NLU tasks demonstrate that VarMAE can be efficiently adapted to new domains with limited resources.
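To make the context uncertainty learning idea concrete, the sketch below shows one way such a module could be implemented: it maps each token's contextual hidden state to a Gaussian latent (mean and log-variance), samples via the reparameterization trick, and returns a KL term to be added to the masked-autoencoding loss. The Gaussian parameterization, layer sizes, and names are illustrative assumptions, not the paper's exact design.

    # Minimal sketch, assuming a Gaussian latent per token (Python / PyTorch).
    import torch
    import torch.nn as nn

    class ContextUncertaintyModule(nn.Module):
        """Encode each token's contextual hidden state into a smooth latent distribution."""
        def __init__(self, hidden_size: int, latent_size: int):
            super().__init__()
            self.to_mu = nn.Linear(hidden_size, latent_size)      # mean of the latent Gaussian
            self.to_logvar = nn.Linear(hidden_size, latent_size)  # log-variance of the latent Gaussian
            self.to_hidden = nn.Linear(latent_size, hidden_size)  # project back for the MLM head

        def forward(self, hidden_states: torch.Tensor):
            mu = self.to_mu(hidden_states)
            logvar = self.to_logvar(hidden_states)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)        # reparameterization trick
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # regularizes the latent space
            return self.to_hidden(z), kl   # kl is added (weighted) to the masked-autoencoding loss

    # usage: a batch of 2 sequences of length 8 with hidden size 768
    module = ContextUncertaintyModule(hidden_size=768, latent_size=64)
    representations, kl_loss = module(torch.randn(2, 8, 768))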
pdf
bib
abs
PAIC at SemEval-2022 Task 5: Multi-Modal Misogynous Detection in MEMES with Multi-Task Learning And Multi-model Fusion
Jin Zhi
|
Zhou Mengyuan
|
Mengfei Yuan
|
Dou Hu
|
Xiyang Du
|
Lianxin Jiang
|
Yang Mo
|
XiaoFeng Shi
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes our system for SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification (MAMI). Multimedia automatic misogyny recognition consists of identifying misogynous memes, taking advantage of both text and images as sources of information. The task is organized around two main subtasks: Task A is a binary classification task in which each meme should be identified as either misogynous or not misogynous; Task B is a multi-label classification task in which the types of misogyny should be identified among potentially overlapping categories such as stereotype, shaming, objectification, and violence. In this paper, we propose a system based on multi-task learning for multi-modal misogyny detection in memes. Our system combines image features with text features to train a multi-label classifier. The predictions are obtained by a simple weighted average of the outputs of different fusion models, and the Task A results are corrected using Task B. Our system achieves a test accuracy of 0.755 on Task A (ranking 3rd on the final leaderboard) and a test accuracy of 0.731 on Task B (ranking 1st on the final leaderboard).
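The two post-processing steps mentioned above, weighted averaging across fusion models and correcting Task A with Task B, can be sketched as follows; the weights and the 0.5 threshold are illustrative assumptions rather than the values used in our submission.

    # Minimal sketch of the prediction post-processing (Python / NumPy).
    import numpy as np

    def weighted_average(prob_list, weights):
        """Blend per-model probabilities of shape (n_models, n_samples, n_labels)."""
        weights = np.asarray(weights, dtype=float)
        return np.tensordot(weights / weights.sum(), np.asarray(prob_list), axes=1)

    def correct_task_a_with_task_b(task_a_prob, task_b_prob, threshold=0.5):
        """If any fine-grained Task B label fires, force the Task A prediction to misogynous."""
        task_a_pred = (task_a_prob >= threshold).astype(int)
        task_a_pred[(task_b_prob >= threshold).any(axis=1)] = 1
        return task_a_pred

    # usage with three fusion models, 4 memes, Task A (1 label) and Task B (4 labels)
    probs_a = [np.random.rand(4, 1) for _ in range(3)]
    probs_b = [np.random.rand(4, 4) for _ in range(3)]
    fused_a = weighted_average(probs_a, weights=[0.5, 0.3, 0.2])[:, 0]
    fused_b = weighted_average(probs_b, weights=[0.5, 0.3, 0.2])
    print(correct_task_a_with_task_b(fused_a, fused_b))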
pdf
bib
abs
stce at SemEval-2022 Task 6: Sarcasm Detection in English Tweets
Mengfei Yuan
|
Zhou Mengyuan
|
Lianxin Jiang
|
Yang Mo
|
Xiaofeng Shi
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes the systematic approach applied in “SemEval-2022 Task 6 (iSarcasmEval): Intended Sarcasm Detection in English and Arabic”. In particular, we describe in detail our proposed system for SubTask-A, determining whether a given English text is sarcastic or non-sarcastic. We start from the officially released training data and then experiment with different combinations of public datasets to improve model generalization. Additional experiments conducted on the task demonstrate that our strategies are effective in completing it. Different transformer-based language models, as well as some popular plug-and-play priors, are mixed into our system to enhance its robustness. Furthermore, statistical and lexical text features are mined to improve the accuracy of sarcasm detection. Our final submission achieves an F1-score of 0.6052 for the sarcastic class on the official test set (ranking 1st of the 43 teams in “SubTask-A-English” on the leaderboard).
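The statistical and lexical text features referred to above can be illustrated with a small feature extractor; the concrete feature set below (punctuation counts, hashtags, elongated words, all-caps ratio) is an assumption for illustration, not the exact list mined in our system.

    # Minimal sketch of lexical feature mining for a tweet (Python).
    import re

    def lexical_features(tweet: str) -> dict:
        tokens = tweet.split()
        n_tokens = max(len(tokens), 1)
        return {
            "n_tokens": len(tokens),
            "n_exclamations": tweet.count("!"),
            "n_question_marks": tweet.count("?"),
            "n_hashtags": sum(t.startswith("#") for t in tokens),
            "n_mentions": sum(t.startswith("@") for t in tokens),
            "all_caps_ratio": sum(t.isupper() and len(t) > 1 for t in tokens) / n_tokens,
            "n_elongated": len(re.findall(r"(\w)\1{2,}", tweet)),  # e.g. "sooo", "greaaat"
        }

    # such features can be concatenated with transformer outputs before classification
    print(lexical_features("Oh GREAT, another Monday!!! #blessed"))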
pdf
bib
abs
PALI at SemEval-2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts
Zhou Mengyuan
|
Dou Hu
|
Mengfei Yuan
|
Jin Zhi
|
Xiyang Du
|
Lianxin Jiang
|
Yang Mo
|
Xiaofeng Shi
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes our system for SemEval-2022 Task 7 (Roth et al.): Identifying Plausible Clarifications of Implicit and Underspecified Phrases. This task is a more complex cloze task than the usual one, which only requires an NLP system to find the best filler for a sentence: here the system must not only choose the best filler for each input instance, but also evaluate the quality of every possible filler and assign it a score based on the contextual semantic information. We propose an ensemble of state-of-the-art transformer-based language models (i.e., RoBERTa and DeBERTa) combined with several plug-and-play tricks, such as a Grouped Layerwise Learning Rate Decay (GLLRD) strategy, a contrastive learning loss, different pooling heads, and an external data preprocessing block applied before the input reaches the pretrained language models, which together improve performance significantly. The main contributions of our system are 1) revealing the performance discrepancy of different transformer-based pretrained models on the downstream task; 2) presenting an efficient learning-rate and parameter attenuation strategy for fine-tuning pretrained language models; 3) adding different contrastive learning losses to improve model performance; and 4) showing the usefulness of different pooling head structures. Our system achieves a test accuracy of 0.654 on Subtask 1 (ranking 4th on the leaderboard) and a test Spearman’s rank correlation coefficient of 0.785 on Subtask 2 (ranking 2nd on the leaderboard).
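The Grouped Layerwise Learning Rate Decay (GLLRD) strategy mentioned above can be sketched as building optimizer parameter groups in which blocks of adjacent encoder layers share a learning rate that decays with depth; the group size, base learning rate, and decay factor below are illustrative assumptions, not the submitted configuration.

    # Minimal sketch of GLLRD parameter groups for a HuggingFace-style encoder (Python / PyTorch).
    import re
    import torch

    def gllrd_param_groups(model, base_lr=2e-5, decay=0.9, layers_per_group=4, num_layers=24):
        """Adjacent encoder layers are grouped; each group gets a learning rate decayed by depth."""
        groups = []
        for name, param in model.named_parameters():
            if not param.requires_grad:
                continue
            match = re.search(r"\.layer\.(\d+)\.", name)  # HuggingFace-style layer naming
            if match:
                depth_from_top = (num_layers - 1 - int(match.group(1))) // layers_per_group
                lr = base_lr * decay ** depth_from_top    # groups farther from the top get smaller LRs
            elif "embeddings" in name:
                lr = base_lr * decay ** (num_layers // layers_per_group)  # lowest LR for embeddings
            else:
                lr = base_lr                              # task head keeps the base LR
            groups.append({"params": [param], "lr": lr})
        return groups

    # usage (assuming `model` is e.g. a 24-layer RoBERTa/DeBERTa encoder with a task head):
    # optimizer = torch.optim.AdamW(gllrd_param_groups(model), weight_decay=0.01)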
2021
pdf
bib
abs
RG PA at SemEval-2021 Task 1: A Contextual Attention-based Model with RoBERTa for Lexical Complexity Prediction
Gang Rao
|
Maochang Li
|
Xiaolong Hou
|
Lianxin Jiang
|
Yang Mo
|
Jianping Shen
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
In this paper we propose a contextual attention-based model with two-stage fine-tuning using RoBERTa. First, we perform first-stage fine-tuning of RoBERTa on the corpus so that the model can learn some prior domain knowledge. We then obtain the contextual embedding of context words from the token-level embeddings of the fine-tuned model. Finally, we use K-fold cross-validation to obtain K models and ensemble them to produce the final result. We attain 2nd place in the final evaluation phase of sub-task 2 with a Pearson correlation of 0.8575.
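The K-fold ensembling step can be sketched as follows; a simple ridge regressor stands in for the RoBERTa-based model purely to illustrate the train-K-models-and-average procedure.

    # Minimal sketch of K-fold training and prediction averaging (Python / scikit-learn).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import KFold

    def kfold_ensemble_predict(X_train, y_train, X_test, k=5, seed=42):
        test_preds = []
        for train_idx, _ in KFold(n_splits=k, shuffle=True, random_state=seed).split(X_train):
            model = Ridge().fit(X_train[train_idx], y_train[train_idx])  # one model per fold
            test_preds.append(model.predict(X_test))
        return np.mean(test_preds, axis=0)                               # average the K predictions

    # toy usage with random vectors standing in for contextual embeddings
    rng = np.random.default_rng(0)
    X_tr, y_tr, X_te = rng.normal(size=(100, 16)), rng.random(100), rng.normal(size=(10, 16))
    print(kfold_ensemble_predict(X_tr, y_tr, X_te))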
pdf
bib
abs
PALI at SemEval-2021 Task 2: Fine-Tune XLM-RoBERTa for Word in Context Disambiguation
Shuyi Xie
|
Jian Ma
|
Haiqin Yang
|
Lianxin Jiang
|
Yang Mo
|
Jianping Shen
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
This paper presents the PALI team’s winning system for SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation. We fine-tune the XLM-RoBERTa model to solve the task of word-in-context disambiguation, i.e., to determine whether the target word carries the same meaning in the two contexts. In our implementation, we first design a special input tag to emphasize the target word in the contexts. Second, we construct a new vector from the fine-tuned XLM-RoBERTa embeddings and feed it to a fully connected network that outputs the probability of whether the target word has the same meaning in both contexts. The new vector is obtained by concatenating the embedding of the [CLS] token with the embeddings of the target word in the two contexts. In training, we explore several tricks, such as the Ranger optimizer, data augmentation, and adversarial training, to improve the model’s predictions. Consequently, we attain first place in all four cross-lingual tasks.
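The feature construction described above can be sketched as a small head that concatenates the [CLS] embedding with the two target-word embeddings and passes the result through a fully connected network; the hidden sizes and layer layout are illustrative assumptions.

    # Minimal sketch of the concatenation head on top of XLM-RoBERTa hidden states (Python / PyTorch).
    import torch
    import torch.nn as nn

    class WiCHead(nn.Module):
        def __init__(self, hidden_size: int = 768):
            super().__init__()
            self.classifier = nn.Sequential(
                nn.Linear(hidden_size * 3, hidden_size),
                nn.Tanh(),
                nn.Linear(hidden_size, 1),  # probability that the two usages share a meaning
            )

        def forward(self, hidden_states, target_idx_1, target_idx_2):
            batch = torch.arange(hidden_states.size(0))
            cls_vec = hidden_states[:, 0]                # [CLS] token embedding
            tgt1 = hidden_states[batch, target_idx_1]    # target word in context 1
            tgt2 = hidden_states[batch, target_idx_2]    # target word in context 2
            features = torch.cat([cls_vec, tgt1, tgt2], dim=-1)
            return torch.sigmoid(self.classifier(features)).squeeze(-1)

    # usage with dummy encoder outputs: batch of 2, sequence length 32, hidden size 768
    head = WiCHead()
    probs = head(torch.randn(2, 32, 768), torch.tensor([5, 7]), torch.tensor([20, 18]))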
pdf
bib
abs
FPAI at SemEval-2021 Task 6: BERT-MRC for Propaganda Techniques Detection
Xiaolong Hou
|
Junsong Ren
|
Gang Rao
|
Lianxin Lian
|
Zhihao Ruan
|
Yang Mo
|
Jianping Shen
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
The objective of subtask 2 of SemEval-2021 Task 6 is to identify the techniques used in a text together with the span(s) covered by each technique. This paper describes the system and model we developed for the task. We first propose a pipeline system that identifies spans and then classifies the technique for each span in the input sequence, but it suffers severely from overlapping and nested spans. We therefore propose to formulate the task as a question answering task within an MRC framework, which achieves a better result than the pipeline method. Moreover, data augmentation and loss design techniques are explored to alleviate data sparsity and imbalance. Finally, we attain 3rd place in the final evaluation phase.
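The MRC reformulation can be sketched as turning each propaganda technique into a natural-language query paired with the input text, so that span identification reduces to extractive question answering; the query template and the partial technique list below are illustrative assumptions.

    # Minimal sketch of building MRC-style (query, context) examples (Python).
    TECHNIQUES = ["Loaded Language", "Name Calling/Labeling", "Smears", "Exaggeration/Minimisation"]

    def build_mrc_examples(text: str):
        """One (query, context) pair per technique; a QA model then predicts start/end spans."""
        examples = []
        for technique in TECHNIQUES:
            query = f"Find the spans of text that use the technique: {technique}."
            examples.append({"question": query, "context": text, "technique": technique})
        return examples

    for example in build_mrc_examples("They are destroying our country and everyone knows it."):
        print(example["question"])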
pdf
bib
abs
MagicPai at SemEval-2021 Task 7: Method for Detecting and Rating Humor Based on Multi-Task Adversarial Training
Jian Ma
|
Shuyi Xie
|
Haiqin Yang
|
Lianxin Jiang
|
Mengyuan Zhou
|
Xiaoyi Ruan
|
Yang Mo
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
This paper describes MagicPai’s system for SemEval-2021 Task 7, HaHackathon: Detecting and Rating Humor and Offense. The task aims to detect whether a text is humorous and how humorous it is. There are four subtasks in the competition. In this paper, we mainly present our solution for Tasks 1a and 1b: a multi-task learning model based on adversarial examples. More specifically, we first vectorize the cleaned dataset and add perturbations to obtain more robust embedding representations. We then correct the loss via the confidence level. Finally, we perform interactive joint learning on multiple tasks to capture the relationship between whether a text is humorous and how humorous it is. The final results show the effectiveness of our system.
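One common way to add perturbations at the embedding layer for adversarial training is FGM-style gradient ascent on the embeddings; the sketch below uses FGM as an assumed stand-in, since the abstract does not name the exact perturbation method.

    # Minimal sketch of FGM-style adversarial perturbation of the embedding layer (Python / PyTorch).
    import torch

    class FGM:
        def __init__(self, model, epsilon=1.0, emb_name="embedding"):
            self.model, self.epsilon, self.emb_name, self.backup = model, epsilon, emb_name, {}

        def attack(self):
            for name, param in self.model.named_parameters():
                if param.requires_grad and self.emb_name in name and param.grad is not None:
                    self.backup[name] = param.data.clone()
                    norm = torch.norm(param.grad)
                    if norm != 0:
                        param.data.add_(self.epsilon * param.grad / norm)  # step along the gradient

        def restore(self):
            for name, param in self.model.named_parameters():
                if name in self.backup:
                    param.data = self.backup[name]
            self.backup = {}

    # typical use inside a training step:
    #   loss.backward()                                  # gradients on clean inputs
    #   fgm.attack(); model(batch).loss.backward()       # gradients on perturbed embeddings
    #   fgm.restore(); optimizer.step(); optimizer.zero_grad()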
pdf
bib
abs
Sattiy at SemEval-2021 Task 9: An Ensemble Solution for Statement Verification and Evidence Finding with Tables
Xiaoyi Ruan
|
Meizhi Jin
|
Jian Ma
|
Haiqin Yang
|
Lianxin Jiang
|
Yang Mo
|
Mengyuan Zhou
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Question answering from semi-structured tables can be seen as a semantic parsing task and is significant and practical for pushing the boundary of natural language understanding. Existing research mainly focuses on understanding content from unstructured evidence, e.g., news, natural language sentences, and documents. Verification from structured evidence, such as tables, charts, and databases, is still less explored. This paper describes the sattiy team’s system for SemEval-2021 Task 9: Statement Verification and Evidence Finding with Tables (SEM-TAB-FACT)(CITATION). This competition aims to verify statements and find supporting evidence in tables from scientific articles, and to promote proper interpretation of the surrounding article. In this paper we exploit ensembles of pre-trained language models over tables, TaPas and TaBERT, for Task A and adjust the results for Task B based on a set of extracted rules. Finally, on the leaderboard, we attain F1 scores of 0.8496 and 0.7732 in Task A for the 2-way and 3-way evaluation, respectively, and an F1 score of 0.4856 in Task B.
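The Task A ensembling step can be sketched as a weighted average of the per-statement class probabilities produced by TaPas and TaBERT; the blend weights below are illustrative assumptions.

    # Minimal sketch of ensembling two table-model outputs for statement verification (Python / NumPy).
    import numpy as np

    LABELS = ["refuted", "entailed", "unknown"]  # 3-way evaluation

    def ensemble_verdicts(tapas_probs, tabert_probs, w_tapas=0.6, w_tabert=0.4):
        """Weighted average of per-statement class probabilities, each of shape (n_statements, 3)."""
        blended = w_tapas * np.asarray(tapas_probs) + w_tabert * np.asarray(tabert_probs)
        return [LABELS[i] for i in blended.argmax(axis=1)]

    print(ensemble_verdicts([[0.1, 0.7, 0.2]], [[0.2, 0.6, 0.2]]))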
2020
pdf
bib
abs
FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection
Chenyang Guo
|
Xiaolong Hou
|
Junsong Ren
|
Lianxin Jiang
|
Yang Mo
|
Haiqin Yang
|
Jianping Shen
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes the model we apply in SemEval-2020 Task 10. We formalize the task of emphasis selection as a simplified query-based machine reading comprehension (MRC) task, i.e., answering a fixed question: “Find candidates for emphasis”. We propose a subword puzzle encoding mechanism and a subword fusion layer to align and fuse subwords. By introducing the semantic prior knowledge of the informative query and several other techniques, we attain 7th place during the evaluation phase and first place during the training phase.
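The query-based MRC framing can be sketched by pairing every sentence with the same fixed question and letting a token-level head score each sentence token as an emphasis candidate; the special-token layout below mimics RoBERTa-style pair encoding and is an illustrative assumption.

    # Minimal sketch of building the query-enhanced input for emphasis selection (Python).
    QUERY = "Find candidates for emphasis"

    def build_mrc_input(sentence: str):
        """Return the query-prefixed token sequence and a mask selecting the sentence tokens."""
        query_tokens = QUERY.split()
        sentence_tokens = sentence.split()
        tokens = ["<s>"] + query_tokens + ["</s>", "</s>"] + sentence_tokens + ["</s>"]
        # only the sentence tokens are candidates for emphasis
        candidate_mask = [False] * (len(query_tokens) + 3) + [True] * len(sentence_tokens) + [False]
        return tokens, candidate_mask

    # a token-level head would then predict an emphasis score for each masked-in token
    tokens, mask = build_mrc_input("Never give up on your dreams")
    print([t for t, m in zip(tokens, mask) if m])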
2019
pdf
bib
A Hybrid Approach of Deep Semantic Matching and Deep Rank for Context Aware Question Answer System
Shu-Yi Xie
|
Chia-Hao Chang
|
Zhi Zhang
|
Yang Mo
|
Lian-Xin Jiang
|
Yu-Sheng Huang
|
Jian-Ping Shen
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)
pdf
bib
A Real-World Human-Machine Interaction Platform in Insurance Industry
Wei Tan
|
Chia-Hao Chang
|
Yang Mo
|
Lian-Xin Jiang
|
Gen Li
|
Xiao-Long Hou
|
Chu Chen
|
Yu-Sheng Huang
|
Meng-Yuan Huang
|
Jian-Ping Shen
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)