2022
Controlled Data Generation via Insertion Operations for NLU
Manoj Kumar | Yuval Merhav | Haidar Khan | Rahul Gupta | Anna Rumshisky | Wael Hamza
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Use of synthetic data is rapidly emerging as a realistic alternative to manually annotating live traffic for industry-scale model building. Manual data annotation is slow, expensive, and at odds with customer privacy expectations. Further, commercial natural language applications must support continuously evolving features as well as newly added experiences. To address these requirements, we propose a targeted synthetic data generation technique that inserts tokens into a given semantic signature. The generated data are used as additional training samples for intent classification and named entity recognition. We evaluate on a real-world voice assistant dataset, and using only 33% of the available training set we achieve the same accuracy as training with all available data. In addition, we analyze the effects of data generation across varied real-world applications and propose heuristics that further improve task performance.
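
A minimal sketch of the insertion idea, assuming a semantic signature is an ordered list of slot tokens that must all survive, in order, in the generated utterance. The toy carrier vocabulary and the random insertion policy below stand in for the learned insertion model evaluated in the paper; the slot names are hypothetical.

    import random

    # Assumed form of a "semantic signature": the ordered slot tokens that
    # must appear in every generated utterance.
    SIGNATURE = ["[ArtistName]", "[SongName]"]

    # Toy carrier vocabulary; the paper's learned model proposes insertions,
    # which this uniform random choice merely stands in for.
    CARRIERS = ["play", "the", "song", "by", "please", "put", "on", "some"]

    def generate(signature, vocab, n_insertions=4, seed=None):
        """Insert carrier tokens into random gaps, preserving slot order."""
        rng = random.Random(seed)
        tokens = list(signature)
        for _ in range(n_insertions):
            pos = rng.randrange(len(tokens) + 1)  # any gap, including both ends
            tokens.insert(pos, rng.choice(vocab))
        return " ".join(tokens)

    # Generate a few synthetic utterances from one signature.
    for i in range(3):
        print(generate(SIGNATURE, CARRIERS, seed=i))

Because the slot tokens are never moved or removed, each synthetic utterance can be labeled automatically for intent classification and named entity recognition.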
Unsupervised training data re-weighting for natural language understanding with local distribution approximation
Jose Garrido Ramas | Dieu-thu Le | Bei Chen | Manoj Kumar | Kay Rottmann
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
One of the major challenges of training Natural Language Understanding (NLU) production models lies in the discrepancy between the distributions of the offline training data and the online live data, due to, e.g., biased sampling schemes, cyclic seasonality shifts, annotated training data coming from a variety of different sources, and a changing pool of users. Consequently, a model trained on the offline data is biased. This problem is especially common in task-oriented conversational systems, where topics of interest and user characteristics change over time. In this paper we propose an unsupervised approach to mitigate the offline training data sampling bias in multiple NLU tasks. We show that a local distribution approximation in the pre-trained embedding space enables the estimation of importance weights for training samples, guiding re-sampling for effective bias mitigation. We illustrate our novel approach using multiple NLU datasets and show improvements obtained without additional annotation, making this a general approach for mitigating the effects of sampling bias.
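
One plausible instantiation of the local distribution approximation, sketched under the assumption that offline training utterances and live utterances are both mapped into a shared pre-trained embedding space: the radius of each point's k-NN ball approximates local density, and the ratio of radii yields an importance weight per training sample. The distance-ratio heuristic is an illustrative assumption, not the paper's exact estimator.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_importance_weights(train_emb, live_emb, k=10):
        """Estimate w(x) ~ p_live(x) / p_train(x) for each training sample.

        k-NN density estimation gives p(x) proportional to k / (n * r_k(x)^d),
        so the density ratio grows with r_train / r_live; we use that ratio
        directly as a simple, stability-friendly heuristic."""
        nn_train = NearestNeighbors(n_neighbors=k + 1).fit(train_emb)
        nn_live = NearestNeighbors(n_neighbors=k).fit(live_emb)
        # Distance to the k-th training neighbor (index 0 is the point itself).
        r_train = nn_train.kneighbors(train_emb)[0][:, k]
        # Distance to the k-th live neighbor.
        r_live = nn_live.kneighbors(train_emb)[0][:, k - 1]
        # Large r_train, small r_live => region under-represented offline.
        w = r_train / np.maximum(r_live, 1e-12)
        return w / w.sum()  # normalize into a re-sampling distribution

    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(500, 16))  # offline embeddings
    live = rng.normal(0.5, 1.0, size=(500, 16))   # shifted live-traffic embeddings
    weights = knn_importance_weights(train, live)
    resampled_idx = rng.choice(len(train), size=len(train), p=weights)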
Improving Large-Scale Conversational Assistants using Model Interpretation based Training Sample Selection
Stefan Schroedl | Manoj Kumar | Kiana Hajebi | Morteza Ziyadi | Sriram Venkatapathy | Anil Ramakrishna | Rahul Gupta | Pradeep Natarajan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
This paper presents an approach to identify samples from live traffic where the customer implicitly communicated satisfaction with Alexa’s responses, by leveraging interpretations of model behavior. Such customer signals are noisy, and adding a large number of samples from live traffic to the training set makes re-training infeasible. Our work addresses these challenges by identifying a small number of samples that grow the training set by only ~0.05% while producing statistically significant improvements in both offline and online tests.
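
A sketch of how such a selection budget could be applied, assuming each candidate live-traffic sample already carries a scalar score from some model-interpretation method (the concrete scoring function is a placeholder, not taken from the paper): keep only the top-scoring candidates, capped so the training set grows by roughly 0.05%.

    import numpy as np

    def select_samples(scores, train_size, growth=0.0005):
        """Return indices of the highest-scoring candidates, capped so that
        the training set grows by about `growth` (~0.05% by default)."""
        budget = max(1, int(train_size * growth))
        order = np.argsort(scores)[::-1]  # highest interpretation score first
        return order[:budget]

    rng = np.random.default_rng(0)
    train_size = 10_000_000            # illustrative industry-scale training set
    scores = rng.random(50_000)        # hypothetical per-sample interpretation scores
    chosen = select_samples(scores, train_size)
    print(f"selected {chosen.size} of {scores.size} candidates")  # 5000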
Chasing the Tail with Domain Generalization: A Case Study on Frequency-Enriched Datasets
Manoj Kumar | Anna Rumshisky | Rahul Gupta
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Natural language understanding (NLU) tasks are typically defined by creating an annotated dataset in which each utterance is encountered once. Such data does not resemble real-world natural language interactions, in which certain utterances are encountered frequently and others rarely. For deployed NLU systems this is a critical problem, since the underlying machine learning (ML) models are often fine-tuned on typical NLU data and then applied to real-world data with a very different distribution. Such systems need to maintain interpretation consistency for both high-frequency and low-frequency utterances. We propose an alternative strategy that explicitly uses utterance frequency in training data to learn models that are more robust to unknown distributions. We present a methodology to simulate utterance usage in two public NLU corpora and create new corpora with head, body and tail segments. We evaluate several methods for joint intent classification and named entity recognition (IC-NER), and use two domain generalization approaches that we adapt to NER. The proposed approaches demonstrate up to 7.02% relative improvement in semantic accuracy over baselines on the tail data. We provide insights as to why the proposed approaches work and show that the reasons for the observed improvements do not align with those reported in previous work.
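
A sketch of how utterance usage could be simulated to carve a corpus into head, body and tail segments, assuming Zipfian usage counts and 50%/40%/10% cumulative-frequency cut points; both choices are illustrative assumptions rather than the paper's published methodology.

    import numpy as np

    def frequency_enrich(utterances, s=1.2, seed=0):
        """Assign each utterance a Zipfian usage frequency and split the
        corpus into head/body/tail by cumulative frequency mass."""
        rng = np.random.default_rng(seed)
        n = len(utterances)
        ranks = rng.permutation(n) + 1            # random usage rank per utterance
        freqs = ranks.astype(float) ** (-s)       # Zipf's law: f(r) ~ r^(-s)
        freqs /= freqs.sum()
        order = np.argsort(freqs)[::-1]           # most frequent first
        cum = np.cumsum(freqs[order])
        head = [utterances[i] for i in order[cum <= 0.5]]
        body = [utterances[i] for i in order[(cum > 0.5) & (cum <= 0.9)]]
        tail = [utterances[i] for i in order[cum > 0.9]]
        return head, body, tail

    utts = [f"utterance_{i}" for i in range(1000)]
    head, body, tail = frequency_enrich(utts)
    print(len(head), len(body), len(tail))  # a few frequent, many rare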
IISERB Brains at SemEval-2022 Task 6: A Deep-learning Framework to Identify Intended Sarcasm in English
Tanuj Shekhawat | Manoj Kumar | Udaybhan Rathore | Aditya Joshi | Jasabanta Patro
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
This paper describes the system architectures and the models submitted by our team “IISERB Brains” to the SemEval 2022 Task 6 competition. We participated in all three sub-tasks for the English dataset. On the leaderboard, we ranked 19th out of 43 teams on sub-task A, 8th out of 22 teams on sub-task B, and 13th out of 16 teams on sub-task C. Apart from the submitted results and models, we also report the other models and results obtained in our experiments after the organizers released the gold labels for the evaluation data. All of our code and links to additional resources are available on GitHub for reproducibility.