Improving Top-K Decoding for Non-Autoregressive Semantic Parsing via Intent Conditioning
Geunseob Oh | Rahul Goel | Chris Hidey | Shachi Paul | Aditya Gupta | Pararth Shah | Rushin Shah
Proceedings of the 29th International Conference on Computational Linguistics (2022)
Semantic parsing (SP) is a core component of modern virtual assistants like Google Assistant and Amazon Alexa. While sequence-to-sequence auto-regressive (AR) approaches are common for conversational SP, recent studies employ non-autoregressive (NAR) decoders, which reduce inference latency while maintaining competitive parsing quality. However, a major drawback of NAR decoders is the difficulty of generating top-k (i.e., k-best) outputs with approaches such as beam search. To address this challenge, we propose a novel NAR semantic parser that introduces intent conditioning on the decoder. Inspired by traditional intent and slot-tagging parsers, we decouple the prediction of the top-level intent from the rest of the parse. Because the top-level intent largely governs the syntax and semantics of a parse, intent conditioning allows the model to better control beam search, improving the quality and diversity of top-k outputs. We also introduce a hybrid teacher-forcing approach to avoid a mismatch between training and inference. We evaluate the proposed NAR parser on the conversational SP datasets TOP and TOPv2. Like existing NAR models, it maintains O(1) decoding time complexity while generating more diverse outputs and improving top-3 exact match (EM) by 2.4 points. Compared with AR models, our model speeds up beam search inference by 6.7 times on CPU with competitive top-k EM.
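To make the decoding scheme concrete, below is a minimal Python sketch of intent-conditioned top-k NAR decoding as the abstract describes it: the top-level intent is predicted first, the beam branches on the k most probable intents, and the rest of each parse is filled in by a single parallel decoder pass. The `encoder`, `intent_head`, and `nar_decoder` interfaces, and the toy stubs at the end, are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import math

def topk_intent_conditioned_decode(encoder, intent_head, nar_decoder, utterance, k=3):
    """Return up to k candidate parses, each conditioned on one top-level intent.

    Hypothetical interfaces (not from the paper):
      encoder(utterance)            -> opaque utterance encoding
      intent_head(encoding)         -> dict {intent: probability} over top-level intents
      nar_decoder(encoding, intent) -> (tokens, log_prob) for the rest of the parse,
                                       produced in one parallel pass (O(1) decoding steps)
    """
    encoding = encoder(utterance)
    intent_probs = intent_head(encoding)

    # Branch on the top-level intent first: since the intent largely governs the
    # syntax and semantics of the parse, each branch yields a distinct candidate.
    top_intents = sorted(intent_probs, key=intent_probs.get, reverse=True)[:k]

    candidates = []
    for intent in top_intents:
        tokens, decoder_log_prob = nar_decoder(encoding, intent)
        # Joint score: log p(intent) + log p(parse body | intent).
        score = math.log(intent_probs[intent]) + decoder_log_prob
        candidates.append((score, intent, tokens))

    candidates.sort(key=lambda c: c[0], reverse=True)
    return [(intent, tokens) for _, intent, tokens in candidates]

# Toy stubs for illustration only; a real system would use learned models.
enc = lambda utt: utt
head = lambda e: {"IN:CREATE_ALARM": 0.6, "IN:GET_TIME": 0.3, "IN:GET_WEATHER": 0.1}
dec = lambda e, intent: (["[", intent, "[", "SL:DATE_TIME", "7", "am", "]", "]"], -1.2)

print(topk_intent_conditioned_decode(enc, head, dec, "wake me at 7 am"))
```

Note that each candidate costs only one decoder pass, so widening the beam over intents multiplies parallelizable decoder calls rather than adding sequential steps, which is how the O(1) decoding time complexity mentioned in the abstract is preserved.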