Shaojun Wang


2022

PINGAN Omini-Sinitic at SemEval-2022 Task 4: Multi-prompt Training for Patronizing and Condescending Language Detection
Ye Wang | Yanmeng Wang | Baishun Ling | Zexiang Liao | Shaojun Wang | Jing Xiao
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

This paper describes the second-placed system for subtask 2 and the ninth-placed system for subtask 1 in SemEval 2022 Task 4: Patronizing and Condescending Language Detection. We propose an ensemble of prompt-based training and a label attention mechanism for the multi-label classification task. Transfer learning is introduced to transfer knowledge from binary classification to multi-label classification. The experimental results demonstrate the effectiveness of the proposed method, and an ablation study shows the validity of each technique.
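
As a rough illustration of the label attention idea mentioned in the abstract, the sketch below gives each label a learned query vector that attends over encoder token states to produce per-label logits. The names, dimensions, and pooling scheme are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a label attention head for multi-label classification.
import torch
import torch.nn as nn

class LabelAttention(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        # One learned query vector per label.
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_size))
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden) from a pre-trained encoder.
        # Attention of each label query over all tokens.
        scores = torch.einsum("lh,bsh->bls", self.label_queries, token_states)
        weights = scores.softmax(dim=-1)                  # (batch, labels, seq)
        # Label-specific pooled representations.
        pooled = torch.einsum("bls,bsh->blh", weights, token_states)
        return self.classifier(pooled).squeeze(-1)        # (batch, labels) logits

# Usage: logits = LabelAttention(768, 7)(encoder_output),
# trained with nn.BCEWithLogitsLoss for the multi-label setting.
```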

Learning to Adapt to Low-Resource Paraphrase Generation
Zhigen Li | Yanmeng Wang | Rizhao Fan | Ye Wang | Jianfeng Li | Shaojun Wang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Paraphrase generation is a longstanding NLP task that has achieved great success with the aid of large corpora. However, transferring a paraphrasing model to another domain encounters the problem of domain shift, especially when data is sparse. At the same time, widely used large pre-trained language models (PLMs) face overfitting when trained on scarce labeled data. To mitigate these two issues, we propose LAPA, an effective adapter for PLMs optimized by meta-learning. LAPA is trained in three stages on three types of related resources: 1. pre-training PLMs on unsupervised corpora, 2. inserting an adapter layer and meta-training on labeled source-domain data, and 3. fine-tuning the adapters on a small amount of labeled target-domain data. This method enables paraphrase generation models to first learn basic language knowledge, then learn the paraphrasing task itself, and finally adapt to the target task. Our experimental results demonstrate that LAPA achieves state-of-the-art performance in supervised, unsupervised, and low-resource settings on three benchmark datasets. With only 2% of trainable parameters and 1% of the labeled data of the target task, our approach achieves performance competitive with previous work.
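
The adapter component at the heart of LAPA's second and third stages can be pictured as a standard bottleneck adapter with a residual connection. The placement, sizes, and freezing scheme below are assumptions for illustration rather than the paper's exact design.

```python
# A minimal bottleneck-adapter sketch, assuming the common down-project /
# nonlinearity / up-project structure with a residual connection.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual keeps the frozen PLM's representation intact.
        return x + self.up(self.act(self.down(x)))

# During meta-training / target fine-tuning, one would freeze the PLM and
# update only the adapters, e.g.:
#   for p in plm.parameters(): p.requires_grad = False
#   for p in adapter.parameters(): p.requires_grad = True
```

Keeping only the adapters trainable is what yields the "2% of trainable parameters" regime the abstract refers to.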

2021

Enhancing Dual-Encoders with Question and Answer Cross-Embeddings for Answer Retrieval
Yanmeng Wang | Jun Bai | Ye Wang | Jianfei Zhang | Wenge Rong | Zongcheng Ji | Shaojun Wang | Jing Xiao
Findings of the Association for Computational Linguistics: EMNLP 2021

Dual-Encoders is a promising mechanism for answer retrieval in question answering (QA) systems. Most conventional Dual-Encoders learn the semantic representations of questions and answers merely through the matching score. Researchers have proposed introducing QA interaction features into the scoring function, but at the cost of low efficiency at inference. To keep questions and answers independently encoded at inference time, a variational auto-encoder has further been introduced to reconstruct answers (questions) from question (answer) embeddings as an auxiliary task that enhances QA interaction during representation learning. However, the needs of text generation and answer retrieval differ, which makes training difficult. In this work, we propose a framework that enhances the Dual-Encoders model with question-answer cross-embeddings and a novel Geometry Alignment Mechanism (GAM) to align the geometry of embeddings from the Dual-Encoders with that from the Cross-Encoders. Extensive experimental results show that our framework significantly improves the Dual-Encoders model and outperforms the state-of-the-art method on multiple answer retrieval datasets.
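
One plausible reading of geometry alignment is a distillation-style objective that pulls the dual-encoder similarity geometry toward cross-encoder scores. The sketch below implements such an objective under that assumption; the function names and the KL formulation are illustrative, not the paper's exact GAM.

```python
# Hedged sketch: dual-encoder scoring plus an alignment loss that matches the
# dual-encoder similarity distribution to a cross-encoder teacher.
import torch
import torch.nn.functional as F

def dual_scores(q_emb: torch.Tensor, a_emb: torch.Tensor) -> torch.Tensor:
    # q_emb, a_emb: (batch, dim). In-batch question-answer similarity matrix.
    return q_emb @ a_emb.T

def alignment_loss(dual_sim: torch.Tensor,
                   cross_scores: torch.Tensor,
                   temperature: float = 1.0) -> torch.Tensor:
    # Match the dual-encoder's distribution over candidate answers to the
    # distribution induced by cross-encoder scores (KL divergence).
    student = F.log_softmax(dual_sim / temperature, dim=-1)
    teacher = F.softmax(cross_scores / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")
```

At inference only `dual_scores` is needed, so answers can be pre-encoded and retrieval stays efficient.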

PINGAN Omini-Sinitic at SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning
Ye Wang | Yanmeng Wang | Haijun Zhu | Bo Zeng | Zhenghong Hao | Shaojun Wang | Jing Xiao
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)

This paper describes the winning system for subtask 2 and the second-placed system for subtask 1 in SemEval 2021 Task 4: Reading Comprehension of Abstract Meaning. We propose to use a pre-trained ELECTRA discriminator to choose the best abstract word from five candidates. An upper attention and auto-denoising mechanism is introduced to process the long sequences. The experimental results demonstrate that this contribution greatly facilitates contextual language modeling in the reading comprehension task. An ablation study is also conducted to show the validity of our proposed methods.
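
A minimal sketch of scoring candidates with an ELECTRA discriminator follows: each candidate fills the blank in the passage, and the fill whose tokens look least "replaced" to the discriminator wins. The checkpoint name, the `@placeholder` convention, and the mean-logit scoring rule are illustrative assumptions, not the system's exact recipe.

```python
# Score five candidate fills with an ELECTRA discriminator (Hugging Face
# Transformers). Higher per-token logits mean "this token looks replaced".
import torch
from transformers import ElectraTokenizerFast, ElectraForPreTraining

tok = ElectraTokenizerFast.from_pretrained("google/electra-base-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator")

def best_candidate(passage_with_blank: str, candidates: list) -> str:
    scores = []
    for cand in candidates:
        text = passage_with_blank.replace("@placeholder", cand)
        inputs = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits  # (1, seq_len) replaced-token logits
        scores.append(logits.mean().item())  # lower mean = more natural fill
    return candidates[scores.index(min(scores))]
```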

PHMOSpell: Phonological and Morphological Knowledge Guided Chinese Spelling Check
Li Huang | Junjie Li | Weiwei Jiang | Zhiyu Zhang | Minchuan Chen | Shaojun Wang | Jing Xiao
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Chinese Spelling Check (CSC) is a challenging task due to the complex characteristics of Chinese characters. Statistics reveal that most Chinese spelling errors are phonological or visual errors. However, previous methods rarely utilize the phonological and morphological knowledge of Chinese characters, or they rely heavily on external resources to model character similarities. To address these issues, we propose a novel end-to-end trainable model called PHMOSpell, which improves CSC performance with multi-modal information. Specifically, we derive pinyin and glyph representations for Chinese characters from the audio and visual modalities respectively, and integrate them into a pre-trained language model through a well-designed adaptive gating mechanism. To verify its effectiveness, we conduct comprehensive experiments and ablation tests. Experimental results on three shared benchmarks demonstrate that our model consistently outperforms previous state-of-the-art models.
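
The adaptive gating idea can be pictured as a learned per-dimension mixture between PLM hidden states and a modality feature (pinyin or glyph). The gate form below is an assumption for illustration, not the paper's exact mechanism.

```python
# Sketch of an adaptive gate fusing a modality feature into PLM hidden states.
import torch
import torch.nn as nn

class AdaptiveGate(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, h: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # h: PLM hidden states; feat: pinyin or glyph features, same shape.
        g = torch.sigmoid(self.gate(torch.cat([h, feat], dim=-1)))
        return g * h + (1 - g) * feat  # per-dimension soft mix

# Presumably applied once with pinyin embeddings and once with glyph
# embeddings before the final character-prediction layer.
```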

2020

Contextualized Emotion Recognition in Conversation as Sequence Tagging
Yan Wang | Jiayu Zhang | Jun Ma | Shaojun Wang | Jing Xiao
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

Emotion recognition in conversation (ERC) is an important topic for developing empathetic machines in a variety of areas, including social opinion mining and health care. In this paper, we propose modeling the ERC task as sequence tagging, where a Conditional Random Field (CRF) layer is leveraged to learn the emotional consistency of the conversation. We employ LSTM-based encoders that capture the self- and inter-speaker dependencies of interlocutors to generate contextualized utterance representations, which are fed into the CRF layer. To capture long-range global context, we use a multi-layer Transformer encoder to enhance the LSTM-based encoder. Experiments show that our method benefits from modeling emotional consistency and outperforms the current state-of-the-art methods on multiple emotion classification datasets.
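
The sequence-tagging formulation is straightforward to sketch: utterance embeddings pass through a contextual encoder, and a CRF scores whole emotion-label sequences. The sketch below uses the third-party pytorch-crf package and simplifies the encoder to a single Bi-LSTM; speaker-dependency modeling and the Transformer layers are omitted.

```python
# ERC as sequence tagging over utterances: Bi-LSTM context encoder + CRF.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class ERCTagger(nn.Module):
    def __init__(self, utt_dim: int, hidden: int, num_emotions: int):
        super().__init__()
        # Bi-LSTM over the sequence of utterance embeddings in a conversation.
        self.context = nn.LSTM(utt_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_emotions)
        self.crf = CRF(num_emotions, batch_first=True)

    def forward(self, utt_embs: torch.Tensor, labels=None):
        states, _ = self.context(utt_embs)      # (batch, num_utts, 2*hidden)
        emissions = self.emit(states)
        if labels is not None:
            return -self.crf(emissions, labels)  # negative log-likelihood loss
        return self.crf.decode(emissions)        # best emotion-label sequence
```

The CRF transition matrix is what lets the model learn that consecutive utterances tend to keep consistent emotions.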

2015

A Simple Discriminative Training Method for Machine Translation with Large-Scale Features
Tian Xia | Shaodan Zhai | Zhongliang Li | Shaojun Wang
Actes de la 22e conférence sur le Traitement Automatique des Langues Naturelles. Articles courts

Margin infused relaxed algorithms (MIRAs) dominate model tuning in statistical machine translation with large-scale features, but they are also notorious for the complexity of their implementation. We introduce a new method that treats an N-best list as a permutation and minimizes the Plackett-Luce loss with respect to ground-truth permutations. Experiments with large-scale features demonstrate that the new method is more robust than MERT; while it only ties with MIRAs, it has the comparative advantage of being easier to implement.
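
The Plackett-Luce objective over an N-best list has a compact form: with hypotheses sorted by the gold metric, the loss is the negative log-probability of picking them in that order, one softmax over each remaining suffix. The sketch below is a generic rendering of that loss, not the paper's code.

```python
# Plackett-Luce negative log-likelihood over a sorted N-best list.
import torch

def plackett_luce_nll(scores: torch.Tensor) -> torch.Tensor:
    # scores: model scores of the N-best hypotheses, pre-sorted so that
    # index 0 is the best hypothesis under the gold metric (e.g., BLEU).
    n = scores.size(0)
    nll = torch.zeros((), dtype=scores.dtype)
    for i in range(n):
        # P(pick item i next | items i..n-1 remain) = softmax over the suffix.
        nll = nll + torch.logsumexp(scores[i:], dim=0) - scores[i]
    return nll

# With a linear MT model, scores = feature_matrix @ weights, and the loss is
# minimized with any gradient-based optimizer -- simpler machinery than MIRA.
```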

2013

Improving Alignment of System Combination by Using Multi-objective Optimization
Tian Xia | Zongcheng Ji | Shaodan Zhai | Yidong Chen | Qun Liu | Shaojun Wang
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

A Corpus Level MIRA Tuning Strategy for Machine Translation
Ming Tan | Tian Xia | Shaojun Wang | Bowen Zhou
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

2012

A Scalable Distributed Syntactic, Semantic, and Lexical Language Model
Ming Tan | Wenli Zhou | Lei Zheng | Shaojun Wang
Computational Linguistics, Volume 38, Issue 3 - September 2012

2011

A Large Scale Distributed Syntactic, Semantic and Lexical Language Model for Machine Translation
Ming Tan | Wenli Zhou | Lei Zheng | Shaojun Wang
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2006

Semi-Supervised Conditional Random Fields for Improved Sequence Segmentation and Labeling
Feng Jiao | Shaojun Wang | Chi-Hoon Lee | Russell Greiner | Dale Schuurmans
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

2003

Text Classification in Asian Languages without Word Segmentation
Fuchun Peng | Xiangji Huang | Dale Schuurmans | Shaojun Wang
Proceedings of the Sixth International Workshop on Information Retrieval with Asian Languages

Language Independent Authorship Attribution with Character Level N-Grams
Fuchun Peng | Dale Schuurmans | Vlado Keselj | Shaojun Wang
10th Conference of the European Chapter of the Association for Computational Linguistics

Language and Task Independent Text Categorization with Simple Language Models
Fuchun Peng | Dale Schuurmans | Shaojun Wang
Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics