Multi-task Learning of Spoken Language Understanding by Integrating N-Best Hypotheses with Hierarchical Attention

Mingda Li, Xinyue Liu, Weitong Ruan, Luca Soldaini, Wael Hamza, Chengwei Su


Abstract
Currently, in spoken language understanding (SLU) systems, the automatic speech recognition (ASR) module produces multiple interpretations (or hypotheses) for the input audio signal, and the natural language understanding (NLU) module takes the one with the highest confidence score for domain or intent classification. However, the interpretations can be noisy, and relying on a single interpretation can cause information loss. To address this problem, much prior work attempts to rerank the interpretations for a better choice, while some recent works achieve better performance by integrating all the hypotheses during prediction. In this paper, we follow the hypothesis-integration approach but strengthen training by involving additional tasks, some of which are not existing NLU tasks but are relevant, via multi-task learning or transfer learning. Moreover, we propose the Hierarchical Attention Mechanism (HAM) to further improve performance using acoustic-model features such as confidence scores, which are ignored by current hypothesis-integration models. The experimental results show that, compared to the standard estimation with one hypothesis, multi-task learning with HAM improves domain and intent classification by 19% and 37% relative, much higher than the improvements from current integration or reranking methods. To illustrate the cause of these improvements, we decode the hidden representations of some utterance examples and compare the generated texts with the hypotheses and transcripts. The comparison shows that our model can recover the transcription by integrating fragmented information across hypotheses and identifying frequent error patterns of the ASR module, and can even rewrite the query for better understanding, which reflects the knowledge-sharing characteristic of multi-task learning.
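The two-level attention described above (attention over tokens within each ASR hypothesis, then attention over the hypotheses themselves, biased by confidence scores) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the single-vector attention queries, and the log-confidence bias on the hypothesis-level scores are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_attention(hyps, conf_scores, w_word, w_hyp):
    """Collapse n-best ASR hypotheses into one utterance vector.

    hyps:        (n_hyp, n_tok, d) token embeddings per hypothesis
    conf_scores: (n_hyp,) positive acoustic-model confidence scores
    w_word:      (d,) word-level attention query (assumed learnable)
    w_hyp:       (d,) hypothesis-level attention query (assumed learnable)
    Returns a (d,) representation for downstream domain/intent classifiers.
    """
    # Word-level attention: summarize each hypothesis into one vector.
    word_scores = hyps @ w_word                            # (n_hyp, n_tok)
    word_alpha = softmax(word_scores, axis=1)              # weights over tokens
    hyp_vecs = (word_alpha[..., None] * hyps).sum(axis=1)  # (n_hyp, d)

    # Hypothesis-level attention, biased by ASR confidence
    # (log-confidence bias is one plausible way to inject the scores).
    hyp_scores = hyp_vecs @ w_hyp + np.log(conf_scores)    # (n_hyp,)
    hyp_beta = softmax(hyp_scores)                         # weights over hypotheses
    return (hyp_beta[:, None] * hyp_vecs).sum(axis=0)      # (d,)

# Example: 3 hypotheses, 5 tokens each, 8-dim embeddings.
rng = np.random.default_rng(0)
utt_vec = hierarchical_attention(
    rng.normal(size=(3, 5, 8)),
    np.array([0.6, 0.3, 0.1]),
    rng.normal(size=8),
    rng.normal(size=8),
)
```

The key design point is that the confidence scores enter only at the hypothesis level, so a low-confidence hypothesis is down-weighted as a whole while its individual tokens can still contribute through the shared word-level attention.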
Anthology ID:
2020.coling-industry.11
Volume:
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
Month:
December
Year:
2020
Address:
Online
Editors:
Ann Clifton, Courtney Napoles
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
113–123
URL:
https://aclanthology.org/2020.coling-industry.11
DOI:
10.18653/v1/2020.coling-industry.11
Cite (ACL):
Mingda Li, Xinyue Liu, Weitong Ruan, Luca Soldaini, Wael Hamza, and Chengwei Su. 2020. Multi-task Learning of Spoken Language Understanding by Integrating N-Best Hypotheses with Hierarchical Attention. In Proceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 113–123, Online. International Committee on Computational Linguistics.
Cite (Informal):
Multi-task Learning of Spoken Language Understanding by Integrating N-Best Hypotheses with Hierarchical Attention (Li et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-industry.11.pdf