Yawei Sun
2022
AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension
Xiao Li | Gong Cheng | Ziheng Chen | Yawei Sun | Yuzhong Qu
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent machine reading comprehension datasets such as ReClor and LogiQA require performing logical reasoning over text. Conventional neural models are insufficient for logical reasoning, while symbolic reasoners cannot be directly applied to text. To meet the challenge, we present a neural-symbolic approach which, to predict an answer, passes messages over a graph representing logical relations between text units. It incorporates an adaptive logic graph network (AdaLoGN) which adaptively infers logical relations to extend the graph and, essentially, realizes mutual and iterative reinforcement between neural and symbolic reasoning. We also implement a novel subgraph-to-node message passing mechanism to enhance context-option interaction for answering multiple-choice questions. Our approach shows promising results on ReClor and LogiQA.
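The abstract describes a loop in which symbolic inference rules extend the logic graph and neural message passing then operates over the extended graph. Below is a minimal runnable sketch of that loop; the rule set (contraposition, transitivity), the cosine-similarity gate, and the mean-aggregation update are illustrative assumptions, not the paper’s actual architecture.

```python
# Toy neural-symbolic loop: symbolic rules propose new logical edges, a
# neural relevance gate decides which proposals to keep, and message passing
# updates node embeddings over the extended graph. All choices here are
# illustrative stand-ins, not AdaLoGN's actual design.
import numpy as np

IMPLIES = "implies"

def contraposition(edges):
    """From (a implies b), propose (not-b implies not-a)."""
    return {("not:" + b, "not:" + a, IMPLIES)
            for (a, b, r) in edges if r == IMPLIES} - edges

def transitivity(edges):
    """From (a implies b) and (b implies c), propose (a implies c)."""
    succ = {}
    for (a, b, r) in edges:
        if r == IMPLIES:
            succ.setdefault(a, set()).add(b)
    return {(a, c, IMPLIES)
            for a, targets in succ.items()
            for b in targets
            for c in succ.get(b, ())} - edges

def gate(h, a, b):
    """Toy neural relevance gate: cosine similarity of endpoint embeddings."""
    va, vb = h[a], h[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

def extend(h, edges, threshold=0.0):
    """Adaptive extension: add a proposed edge only if the gate accepts it."""
    for a, b, r in contraposition(edges) | transitivity(edges):
        for n in (a, b):                  # negated node: negate base embedding
            if n not in h:
                h[n] = -h[n.split("not:", 1)[1]]
        if gate(h, a, b) > threshold:
            edges.add((a, b, r))

def message_passing(h, edges):
    """One round of mean aggregation from in-neighbours (toy GNN update)."""
    return {n: 0.5 * v + 0.5 * np.mean([h[a] for (a, b, _) in edges
                                        if b == n] or [v], axis=0)
            for n, v in h.items()}

# Tiny context graph p -> q -> r with random text-unit embeddings.
rng = np.random.default_rng(0)
h = {n: rng.normal(size=8) for n in ("p", "q", "r")}
edges = {("p", "q", IMPLIES), ("q", "r", IMPLIES)}
for _ in range(2):       # mutual reinforcement: symbolic infer, neural update
    extend(h, edges)
    h = message_passing(h, edges)
print(sorted(edges))     # original edges plus the gated inferences
```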
2020
TableGPT: Few-shot Table-to-Text Generation with Table Structure Reconstruction and Content Matching
Heng Gong | Yawei Sun | Xiaocheng Feng | Bing Qin | Wei Bi | Xiaojiang Liu | Ting Liu
Proceedings of the 28th International Conference on Computational Linguistics
Although neural table-to-text models have achieved remarkable progress with the help of large-scale datasets, they suffer from insufficient learning when training data is limited. Recently, pretrained language models have shown potential for few-shot learning thanks to linguistic knowledge learned from pretraining on large-scale corpora. However, applying a powerful pretrained language model to few-shot table-to-text generation faces three challenges: (1) the gap between the task’s structured input and the natural-language input the model was pretrained on; (2) the lack of modeling of table structure; and (3) maintaining text fidelity, i.e., avoiding expressions that contradict the table. To address these problems, we propose TableGPT for table-to-text generation. First, a table transformation module uses templates to rewrite the structured table as natural-language input for GPT-2. In addition, we exploit multi-task learning with two auxiliary tasks: table structure reconstruction, which preserves the table’s structural information by reconstructing it from GPT-2’s representations, and content matching, which improves the text’s fidelity by aligning the table with the information in the generated text. In experiments on Humans, Songs, and Books, three few-shot table-to-text datasets in different domains, our model outperforms existing systems in most few-shot settings.
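Two of the pieces the abstract names are easy to illustrate concretely: the template-based table transformation that rewrites attribute-value pairs as natural-language input for GPT-2, and the multi-task objective that adds the two auxiliary losses. The sketch below is a hypothetical rendering; the template wording, loss weights, and function names are assumptions, not TableGPT’s implementation.

```python
# Illustrative sketch of the two mechanisms named in the abstract. The
# template and the loss weights are assumptions for demonstration only.

def linearize_table(rows):
    """Rewrite a table (attribute-value pairs) as a natural-language prompt."""
    return "; ".join(f"{attr} is {value}" for attr, value in rows if value) + "."

def total_loss(l_gen, l_struct, l_match, w_struct=0.5, w_match=0.5):
    """Multi-task objective: generation loss plus the structure reconstruction
    and content matching auxiliary losses (the weights are illustrative)."""
    return l_gen + w_struct * l_struct + w_match * l_match

table = [("name", "Ada Lovelace"),
         ("occupation", "mathematician"),
         ("birth date", "10 December 1815")]
print(linearize_table(table))
# name is Ada Lovelace; occupation is mathematician; birth date is 10 December 1815.
```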
2018
Reading Comprehension with Graph-based Temporal-Causal Reasoning
Yawei Sun | Gong Cheng | Yuzhong Qu
Proceedings of the 27th International Conference on Computational Linguistics
Complex questions in reading comprehension tasks require integrating information from multiple sentences. In this work, to answer such questions involving temporal and causal relations, we generate event graphs from text based on dependencies, and rank answers by aligning event graphs. In particular, the alignments are constrained by graph-based reasoning to ensure temporal and causal agreement. Our focused approach self-adaptively complements existing solutions; it is automatically triggered only when applicable. Experiments on RACE and MCTest show that state-of-the-art methods are notably improved by using our approach as an add-on.
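The ranking idea in the abstract, aligning a candidate answer’s event graph with the passage’s under temporal and causal agreement constraints, can be sketched as a small scoring function. Everything below (the graph encoding, the similarity function, the unit edge bonus) is a stand-in for illustration, not the paper’s method.

```python
# Toy alignment scorer: an aligned edge only contributes when its
# temporal/causal relation type agrees between the two event graphs.
from itertools import permutations

def align_score(answer_graph, passage_graph, node_sim):
    """Best score over injective alignments of answer events to passage
    events; relation-agreeing edges earn a bonus."""
    a_nodes = sorted(answer_graph["nodes"])
    p_nodes = sorted(passage_graph["nodes"])
    best = 0.0
    for image in permutations(p_nodes, len(a_nodes)):
        mapping = dict(zip(a_nodes, image))
        score = sum(node_sim(a, mapping[a]) for a in a_nodes)
        score += sum(1.0 for (u, v, rel) in answer_graph["edges"]
                     if (mapping[u], mapping[v], rel) in passage_graph["edges"])
        best = max(best, score)
    return best

passage = {"nodes": {"rain", "flood", "evacuate"},
           "edges": {("rain", "flood", "cause"),
                     ("flood", "evacuate", "before")}}
answer = {"nodes": {"rain", "flood"},
          "edges": {("rain", "flood", "cause")}}
sim = lambda a, b: 1.0 if a == b else 0.0
print(align_score(answer, passage, sim))  # 3.0: two event matches + one agreeing edge
```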