Abhyuday Jagannatha

Also published as: Abhyuday N Jagannatha


Calibrating Structured Output Predictors for Natural Language Processing
Abhyuday Jagannatha | Hong Yu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the applications are to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to directly adapt binary or multi-class calibration methods. In this study, we propose a general calibration scheme for output entities of interest in neural network based structured prediction models. Our proposed method can be used with any binary class calibration scheme and a neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirements. We show that our method outperforms current calibration techniques for Named Entity Recognition, Part-of-speech tagging and Question Answering systems. We also observe an improvement in model performance from our decoding step across several tasks and benchmark datasets. Our method improves the calibration and model performance on out-of-domain test scenarios as well.
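As a rough illustration of the kind of binary-class calibration scheme the abstract says can be plugged in, the sketch below fits a simple histogram-binning calibrator to entity-level confidence scores. This is a generic textbook technique, not the paper's method; the scores, labels, and bin count are all synthetic assumptions.

```python
# Hypothetical sketch: histogram-binning calibration of entity-level
# confidence scores. Not the paper's algorithm; a generic binary
# calibration scheme of the sort the abstract says can be plugged in.

def histogram_binning(scores, labels, n_bins=5):
    """Map each score bin to the empirical accuracy observed in that bin."""
    bin_totals = [0] * n_bins
    bin_correct = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)
        bin_totals[b] += 1
        bin_correct[b] += y
    # Fall back to the bin midpoint when a bin received no examples.
    return [
        bin_correct[b] / bin_totals[b] if bin_totals[b] else (b + 0.5) / n_bins
        for b in range(n_bins)
    ]

def calibrate(score, bins):
    """Replace a raw confidence with its bin's empirical accuracy."""
    n_bins = len(bins)
    return bins[min(int(score * n_bins), n_bins - 1)]

# Synthetic raw confidences for predicted entities; 1 = prediction was correct.
scores = [0.95, 0.90, 0.92, 0.55, 0.60, 0.58, 0.15, 0.20]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
bins = histogram_binning(scores, labels)
print(round(calibrate(0.93, bins), 2))  # a 0.93 raw score maps to ~0.67
```

A raw confidence of 0.93 is mapped down to roughly 0.67 because only two of the three held-out predictions in that score range were correct, which is the calibration effect the abstract targets.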


Active Learning for New Domains in Natural Language Understanding
Stanislav Peshterliev | John Kearney | Abhyuday Jagannatha | Imre Kiss | Spyros Matsoukas
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers)

We explore active learning (AL) for improving the accuracy of new domains in a natural language understanding (NLU) system. We propose an algorithm called Majority-CRF that uses an ensemble of classification models to guide the selection of relevant utterances, as well as a sequence labeling model to help prioritize informative examples. Experiments with three domains show that Majority-CRF achieves 6.6%-9% relative error rate reduction compared to random sampling with the same annotation budget, and statistically significant improvements compared to other AL approaches. Additionally, case studies with human-in-the-loop AL on six new domains show 4.6%-9% improvement on an existing NLU system.
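The committee-based selection step described above can be sketched as follows. This is a hedged, generic query-by-committee illustration, not the Majority-CRF algorithm itself: the utterances, the per-classifier votes, and the disagreement score are all synthetic assumptions.

```python
# Hypothetical sketch in the spirit of ensemble-guided selection: rank
# unlabeled utterances by how much a committee of binary in-domain
# classifiers disagrees, and send the most contested ones to annotators.
# The committee votes below are synthetic stand-ins, not real model output.

from collections import Counter

def disagreement(votes):
    """Fraction of committee members voting against the majority."""
    majority = Counter(votes).most_common(1)[0][1]
    return 1 - majority / len(votes)

def select_for_annotation(pool, committee_votes, budget):
    """Pick the `budget` utterances the committee disagrees on most."""
    ranked = sorted(pool, key=lambda u: disagreement(committee_votes[u]),
                    reverse=True)
    return ranked[:budget]

pool = ["play jazz", "order a pizza", "what time is it"]
committee_votes = {
    "play jazz": [1, 1, 1],        # unanimous: uninformative to label
    "order a pizza": [1, 0, 1],    # split vote: worth annotating
    "what time is it": [0, 1, 0],  # split vote: worth annotating
}
print(select_for_annotation(pool, committee_votes, 2))
```

Unanimous utterances add little information, so the split-vote examples are selected first; spending the annotation budget on contested examples is what drives the error-rate reductions the abstract reports over random sampling.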


Bidirectional RNN for Medical Event Detection in Electronic Health Records
Abhyuday N Jagannatha | Hong Yu
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Structured prediction models for RNN based sequence labeling in clinical text
Abhyuday Jagannatha | Hong Yu
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing


Mining and Ranking Biomedical Synonym Candidates from Wikipedia
Abhyuday Jagannatha | Jinying Chen | Hong Yu
Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis