Constrained Language Models Yield Few-Shot Semantic Parsers

Richard Shin, Christopher Lin, Sam Thomson, Charles Chen, Subhro Roy, Emmanouil Antonios Platanios, Adam Pauls, Dan Klein, Jason Eisner, Benjamin Van Durme
Abstract
We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. Our results demonstrate that with only a small amount of data and very little code needed for conversion into English-like representations, our blueprint for rapidly bootstrapping semantic parsers leads to surprisingly effective performance on multiple community tasks, greatly exceeding baseline methods also trained on the same limited data.
Anthology ID:
2021.emnlp-main.608
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7699–7715
URL:
https://aclanthology.org/2021.emnlp-main.608
DOI:
10.18653/v1/2021.emnlp-main.608
PDF:
https://aclanthology.org/2021.emnlp-main.608.pdf
Code
 microsoft/semantic_parsing_with_constrained_lm
Data
BREAK