Zhenwen Li
2022
Exploring the Secrets Behind the Learning Difficulty of Meaning Representations for Semantic Parsing
Zhenwen Li
|
Jiaqi Guo
|
Qian Liu
|
Jian-Guang Lou
|
Tao Xie
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Previous research has shown that the design of Meaning Representation (MR) greatly influences the final model performance of a neural semantic parser. Therefore, designing a good MR is a long-term goal for semantic parsing. However, it is still an art as there is no quantitative indicator that can tell us which MR among a set of candidates may have the best final model performance. In practice, in order to select an MR design, researchers often have to go through the whole training-testing process for all design candidates, which is costly. In this paper, we propose a data-aware metric called ISS (denoting incremental structural stability) of MRs, and demonstrate that ISS is highly correlated with the final performance. The finding shows that ISS can be used as an indicator for MR design to avoid the costly training-testing process.
2020
Composing Elementary Discourse Units in Abstractive Summarization
Zhenwen Li
|
Wenhao Wu
|
Sujian Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In this paper, we argue that the elementary discourse unit (EDU) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization. To well handle the problem of composing EDUs into an informative and fluent summary, we propose a novel summarization method that first designs an EDU selection model to extract and group informative EDUs and then an EDU fusion model to fuse the EDUs in each group into one sentence. We also design a reinforcement learning mechanism that uses EDU fusion results to reward the EDU selection action, boosting the final summarization performance. Experiments on CNN/Daily Mail demonstrate the effectiveness of our model.
Benchmarking Meaning Representations in Neural Semantic Parsing
Jiaqi Guo
|
Qian Liu
|
Jian-Guang Lou
|
Zhenwen Li
|
Xueqing Liu
|
Tao Xie
|
Ting Liu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Meaning representation is an important component of semantic parsing. Although researchers have designed many meaning representations, recent work focuses on only a few of them. Thus, the impact of meaning representation on semantic parsing is less understood. Furthermore, existing work’s performance is often not comprehensively evaluated due to the lack of readily-available execution engines. Upon identifying these gaps, we propose Unimer, a new unified benchmark on meaning representations, by integrating existing semantic parsing datasets, completing the missing logical forms, and implementing the missing execution engines. The resulting unified benchmark contains the complete enumeration of logical forms and execution engines over three datasets × four meaning representations. A thorough experimental study on Unimer reveals that neural semantic parsing approaches exhibit notably different performance when they are trained to generate different meaning representations. Also, program aliases and grammar rules heavily impact the performance of different meaning representations. Our benchmark, execution engines, and implementation can be found at: https://github.com/JasperGuo/Unimer.