On The Ingredients of an Effective Zero-shot Semantic Parser

Pengcheng Yin, John Wieting, Avirup Sil, Graham Neubig


Abstract
Semantic parsers map natural language utterances into meaning representations (e.g., programs). Such models are typically bottlenecked by the paucity of training data, since annotating utterance-program pairs is laborious. Recent studies have performed zero-shot learning by synthesizing training examples of canonical utterances and programs from a grammar, and further paraphrasing these utterances to improve linguistic diversity. However, such synthetic examples cannot fully capture patterns in real data. In this paper we analyze zero-shot parsers through the lenses of the language and logical gaps (Herzig and Berant, 2019), which quantify the discrepancy between the linguistic and programmatic patterns of the canonical examples and those of real-world user-issued queries. We propose to bridge these gaps with improved grammars, stronger paraphrasers, and efficient learning methods that use the canonical examples most likely to reflect real user intents. Our model achieves strong performance on two semantic parsing benchmarks (Scholar, Geo) with zero labeled data.
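The abstract sketches a general recipe: synthesize (canonical utterance, program) pairs from a grammar, then paraphrase the utterances to add linguistic diversity. The Python snippet below is a minimal toy illustration of that recipe only; the grammar, lexicon, templates, and `paraphrase` stub are hypothetical and are not the paper's actual rules or models.

```python
# Toy sketch of grammar-based data synthesis for a zero-shot semantic parser.
# The lexicon, templates, and paraphrase stub are illustrative placeholders,
# NOT the grammar or models used in the paper.
import itertools

# Typed lexicon: each entry is (canonical phrase, program fragment, type).
LEXICON = {
    "FIELD": [("year", "paper.year", "NUM"), ("venue", "paper.venue", "STR")],
    "VALUE": [("2015", "2015", "NUM"), ("ACL", "'ACL'", "STR")],
}

# Synchronous template pairing a canonical utterance with a program skeleton.
TEMPLATES = [
    ("papers whose {FIELD} is {VALUE}", "filter(papers, {FIELD} == {VALUE})"),
]

def synthesize():
    """Enumerate (canonical utterance, program) pairs licensed by the toy grammar."""
    for utt_tpl, prog_tpl in TEMPLATES:
        for (f_utt, f_prog, f_ty), (v_utt, v_prog, v_ty) in itertools.product(
                LEXICON["FIELD"], LEXICON["VALUE"]):
            if f_ty != v_ty:  # type constraint keeps combinations well-formed
                continue
            yield (utt_tpl.format(FIELD=f_utt, VALUE=v_utt),
                   prog_tpl.format(FIELD=f_prog, VALUE=v_prog))

def paraphrase(utterance):
    """Stand-in for a learned paraphraser that adds linguistic diversity."""
    return [utterance.replace("papers whose", "show me papers where")]

if __name__ == "__main__":
    for canonical, program in synthesize():
        for natural in [canonical] + paraphrase(canonical):
            print(f"{natural!r:45} -> {program}")
```

A real pipeline would derive templates from a full synchronous grammar over the target meaning representation and replace the string-replacement stub with a trained paraphrase model; the paper's analysis concerns how well such synthesized examples cover real user language and program patterns.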
Anthology ID: 2022.acl-long.103
Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: May
Year: 2022
Address: Dublin, Ireland
Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1455–1474
URL: https://aclanthology.org/2022.acl-long.103
DOI: 10.18653/v1/2022.acl-long.103
Cite (ACL): Pengcheng Yin, John Wieting, Avirup Sil, and Graham Neubig. 2022. On The Ingredients of an Effective Zero-shot Semantic Parser. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1455–1474, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal): On The Ingredients of an Effective Zero-shot Semantic Parser (Yin et al., ACL 2022)
PDF: https://aclanthology.org/2022.acl-long.103.pdf