Wenguang Wang
2022
Parsing Natural Language into Propositional and First-Order Logic with Dual Reinforcement Learning
Xuantao Lu | Jingping Liu | Zhouhong Gu | Hanwen Tong | Chenhao Xie | Junyang Huang | Yanghua Xiao | Wenguang Wang
Proceedings of the 29th International Conference on Computational Linguistics
Semantic parsing converts natural language utterances into structured logical expressions. We consider two such formal representations: Propositional Logic (PL) and First-order Logic (FOL). The paucity of labeled data is a major challenge in this field. In previous works, dual reinforcement learning has been proposed to reduce dependence on labeled data, but this method has two limitations: 1) the reward must be set manually and does not generalize to all kinds of logical expressions; 2) training easily collapses when models are trained with only the dual reinforcement learning reward. In this paper, we propose a scoring model that automatically learns a model-based reward, together with an effective curriculum-learning-based training strategy that stabilizes the training process. In addition to these technical contributions, we construct a Chinese-PL/FOL dataset to compensate for the paucity of labeled data in this field. Experimental results show that the proposed method outperforms competitors on several datasets. Furthermore, introducing the PL/FOL expressions generated by our model further improves the performance of existing Natural Language Inference (NLI) models.
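As a rough illustration of the dual-reinforcement-learning update described in this abstract, the Python sketch below replaces a hand-crafted reward with a learned scoring model. Everything here (module shapes, the single-symbol policy step, the scorer architecture) is a toy assumption for illustration, not the authors' implementation; the curriculum-learning schedule is omitted.

```python
# Minimal sketch: a REINFORCE-style update where a learned scoring model,
# rather than a manually designed rule, supplies the reward.
# All modules below are toy stand-ins (assumptions), not the paper's models.
import torch
import torch.nn as nn

HIDDEN, VOCAB = 32, 100

parser = nn.Linear(HIDDEN, VOCAB)                           # stand-in NL->logic parser head
scorer = nn.Sequential(nn.Linear(HIDDEN, 1), nn.Sigmoid())  # learned, model-based reward
opt = torch.optim.Adam(parser.parameters(), lr=1e-3)

state = torch.randn(1, HIDDEN)  # toy encoding of an input utterance

# Sample an output symbol from the parser's policy.
logits = parser(state)
dist = torch.distributions.Categorical(logits=logits)
token = dist.sample()

# Model-based reward: the scorer judges the prediction instead of a
# hand-set validity rule (scored on the encoding here for brevity).
reward = scorer(state).squeeze()

# Policy-gradient step: raise the log-probability of high-reward samples.
loss = -(reward.detach() * dist.log_prob(token)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

In the full dual setup, a second (logic-to-NL) model would contribute a reconstruction reward, and the curriculum would order training examples from easy to hard to keep this update from collapsing.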
2020
Improving Grammatical Error Correction with Data Augmentation by Editing Latent Representation
Zhaohong Wan | Xiaojun Wan | Wenguang Wang
Proceedings of the 28th International Conference on Computational Linguistics
The incorporation of data augmentation methods into the grammatical error correction (GEC) task has attracted much attention. However, existing data augmentation methods mainly apply noise to tokens, which limits the diversity of the generated errors. In view of this, we propose a new data augmentation method that applies noise to the latent representation of a sentence. By editing the latent representations of grammatical sentences, we can generate synthetic samples with various error types. Combined with pre-defined rules, our method greatly improves the performance and robustness of existing grammatical error correction models. We evaluate our method on public GEC benchmarks and achieve state-of-the-art performance on the CoNLL-2014 and FCE benchmarks.
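The core idea of this abstract, perturbing a sentence's latent representation rather than its surface tokens, can be sketched as follows; the autoencoder, dimensions, and noise scale are illustrative assumptions, not the paper's architecture.

```python
# Toy sketch: add noise in latent space so the decoder produces structurally
# varied errors, then pair the corrupted output with the clean original.
# The linear autoencoder here is a placeholder (an assumption).
import torch
import torch.nn as nn

HIDDEN, LATENT = 64, 16

encoder = nn.Linear(HIDDEN, LATENT)
decoder = nn.Linear(LATENT, HIDDEN)

grammatical = torch.randn(8, HIDDEN)  # toy encodings of grammatical sentences

z = encoder(grammatical)
z_noisy = z + 0.1 * torch.randn_like(z)  # edit the latent representation
synthetic_errorful = decoder(z_noisy)    # decode to an errorful variant

# (synthetic_errorful, grammatical) pairs then augment GEC training data.
```

Perturbing the latent code rather than individual tokens is what yields error types beyond simple token-level substitutions or deletions.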