2023
Fusion or Defusion? Flexible Vision-and-Language Pre-Training
Rongyi Sun | Ziran Li | Yifeng Ding | Qifan Wang | Jingang Wang | Haitao Zheng | Wei Wu | Yunsen Xian
Findings of the Association for Computational Linguistics: ACL 2023
Existing approaches in the vision-and-language pre-training (VLP) paradigm mainly deploy either fusion-based encoders or dual encoders, failing to achieve both effectiveness and efficiency on downstream multimodal tasks. In this paper, we build a flexible VLP model by incorporating cross-modal fusion into a dual-encoder architecture, where the introduced fusion modules can be easily decoupled from the dual encoder so as to switch the model to a fusion-free one. To better absorb cross-modal features from the fusion modules, we design a cross-modal knowledge transfer strategy, along with other comprehensive pre-training tasks, to guide the training process, which further strengthens both fusion-based and fusion-free representation learning. Extensive experiments on various downstream vision-language tasks show that our proposed model achieves both effectiveness and efficiency, demonstrating superior performance compared with other strong VLP models.
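As a rough illustration of the decoupling idea described in the abstract, the PyTorch sketch below (all module names, sizes, and the use_fusion flag are assumptions for exposition, not the paper's implementation) shows a dual encoder whose cross-attention fusion module can be skipped at inference time to fall back to a fusion-free dual-encoder mode:

```python
import torch
import torch.nn as nn

class FlexibleDualEncoder(nn.Module):
    """Minimal sketch: a dual encoder whose cross-modal fusion
    module can be detached for fusion-free inference."""

    def __init__(self, dim=768, num_heads=12):
        super().__init__()
        # Placeholder unimodal encoders (stand-ins for the real
        # image/text transformers).
        self.image_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, num_heads, batch_first=True), 2)
        self.text_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, num_heads, batch_first=True), 2)
        # Cross-attention fusion; simply unused when use_fusion=False.
        self.fusion = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, image_feats, text_feats, use_fusion=True):
        v = self.image_encoder(image_feats)   # (B, Nv, D)
        t = self.text_encoder(text_feats)     # (B, Nt, D)
        if not use_fusion:
            # Fusion-free mode: cheap dual-encoder representations.
            return v, t
        # Fusion mode: text tokens attend to visual tokens.
        fused, _ = self.fusion(query=t, key=v, value=v)
        return v, fused
```

Under this reading, pre-training would supervise both paths, with the cross-modal knowledge transfer strategy distilling the fused representations into the fusion-free ones.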
2022
The Past Mistake is the Future Wisdom: Error-driven Contrastive Probability Optimization for Chinese Spell Checking
Yinghui Li | Qingyu Zhou | Yangning Li | Zhongli Li | Ruiyang Liu | Rongyi Sun | Zizhen Wang | Chao Li | Yunbo Cao | Hai-Tao Zheng
Findings of the Association for Computational Linguistics: ACL 2022
Chinese Spell Checking (CSC) aims to detect and correct Chinese spelling errors, which are mainly caused by phonological or visual similarity. Recently, pre-trained language models (PLMs) have promoted the progress of the CSC task. However, there is a gap between the knowledge learned by PLMs and the goal of the CSC task: PLMs focus on the semantics of the text and tend to correct erroneous characters to semantically proper or commonly used ones, which are not necessarily the ground-truth corrections. To address this issue, we propose an Error-driven COntrastive Probability Optimization (ECOPO) framework for the CSC task. ECOPO refines the knowledge representations of PLMs and guides the model to avoid predicting these common characters in an error-driven way. Notably, ECOPO is model-agnostic and can be combined with existing CSC methods to achieve better performance. Extensive experiments and detailed analyses on the SIGHAN datasets demonstrate that ECOPO is simple yet effective.
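A minimal sketch of what such an error-driven contrastive objective could look like (the function name, top-k negative selection, and margin are illustrative assumptions; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def error_driven_contrastive_loss(logits, gold_ids, k=5, margin=0.1):
    """Sketch of an error-driven contrastive objective: push the
    gold character's probability above the model's own top-k
    (wrong) candidates at each position.

    logits:   (B, L, V) character logits from the PLM
    gold_ids: (B, L) ground-truth character ids
    """
    probs = F.softmax(logits, dim=-1)
    gold_p = probs.gather(-1, gold_ids.unsqueeze(-1))   # (B, L, 1)
    topk_p, topk_ids = probs.topk(k, dim=-1)            # (B, L, k)
    # Contrast only against candidates the model prefers but
    # that are not the ground truth (the "past mistakes").
    neg_mask = (topk_ids != gold_ids.unsqueeze(-1)).float()
    # Hinge: penalize negatives whose probability exceeds the
    # gold probability minus a margin.
    violation = F.relu(topk_p - gold_p + margin) * neg_mask
    return violation.sum() / neg_mask.sum().clamp(min=1.0)
```

Combined with a standard cross-entropy term, a loss of this shape penalizes exactly the semantically plausible characters the PLM itself prefers over the ground truth.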
Linguistic Rules-Based Corpus Generation for Native Chinese Grammatical Error Correction
Shirong Ma | Yinghui Li | Rongyi Sun | Qingyu Zhou | Shulin Huang | Ding Zhang | Yangning Li | Ruiyang Liu | Zhongli Li | Yunbo Cao | Haitao Zheng | Ying Shen
Findings of the Association for Computational Linguistics: EMNLP 2022
Chinese Grammatical Error Correction (CGEC) is both a challenging NLP task and a common application in daily life. Recently, many data-driven approaches have been proposed to advance CGEC research. However, the field faces two major limitations: first, the lack of high-quality annotated training corpora prevents the performance of existing CGEC models from improving significantly; second, the grammatical errors in widely used test sets were not made by native Chinese speakers, resulting in a significant gap between CGEC models and real-world applications. In this paper, we propose a linguistic rules-based approach to construct large-scale CGEC training corpora with automatically generated grammatical errors. Additionally, we present a challenging CGEC benchmark derived entirely from errors made by native Chinese speakers in real-world scenarios. Extensive experiments and detailed analyses demonstrate not only that the training data constructed by our method effectively improves the performance of CGEC models, but also that our benchmark is an excellent resource for the further development of the CGEC field.
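A toy sketch of what rule-based error injection over clean sentences might look like (the rule set and helper names are hypothetical stand-ins; the paper's linguistic rules are far richer and language-specific):

```python
import random

# Illustrative error injectors operating on word lists, e.g. the
# output of a Chinese word segmenter such as jieba.

def redundant_component(words):
    """Duplicate a word to simulate a redundancy error."""
    i = random.randrange(len(words))
    return words[:i + 1] + [words[i]] + words[i + 1:]

def missing_component(words):
    """Drop a word to simulate a missing-component error."""
    i = random.randrange(len(words))
    return words[:i] + words[i + 1:]

def word_order(words):
    """Swap two adjacent words to simulate a word-order error."""
    if len(words) < 2:
        return words
    i = random.randrange(len(words) - 1)
    return words[:i] + [words[i + 1], words[i]] + words[i + 2:]

RULES = [redundant_component, missing_component, word_order]

def corrupt(sentence_words):
    """Apply one randomly chosen rule, yielding an
    (erroneous, correct) training pair for a CGEC model."""
    rule = random.choice(RULES)
    return rule(list(sentence_words)), list(sentence_words)
```

Running such injectors over a large clean corpus yields parallel erroneous-correct pairs at scale, which is the general mechanism the abstract describes.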