Samridhi Choudhary
2020
Extreme Model Compression for On-device Natural Language Understanding
Kanthashree Mysore Sathyendra | Samridhi Choudhary | Leah Nicolich-Henkin
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
In this paper, we propose and experiment with techniques for extreme compression of neural natural language understanding (NLU) models, making them suitable for execution on resource-constrained devices. We propose a task-aware, end-to-end compression approach that performs word-embedding compression jointly with NLU task learning. We show our results on a large-scale, commercial NLU system trained on a varied set of intents with huge vocabulary sizes. Our approach outperforms a range of baselines and achieves a compression rate of 97.4% with less than 3.7% degradation in predictive performance. Our analysis indicates that the signal from the downstream task is important for effective compression with minimal degradation in performance.
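A small sketch can illustrate the joint compression-plus-task-training idea described in the abstract. This is not the paper's implementation: the low-rank factorization scheme, the LSTM intent classifier, and every size below are assumptions chosen only to show how a compressed embedding receives gradient signal from the downstream task loss.

# Minimal sketch of task-aware embedding compression: the dense V x D embedding
# table is replaced by a low-rank factorization (V x r codes times an r x D basis)
# and trained jointly with the intent-classification loss. All sizes and the
# model structure are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class CompressedEmbeddingNLU(nn.Module):
    def __init__(self, vocab_size, embed_dim, rank, hidden_dim, num_intents):
        super().__init__()
        # Factorized embedding: much smaller than vocab_size x embed_dim when rank << embed_dim.
        self.codes = nn.Embedding(vocab_size, rank)
        self.basis = nn.Linear(rank, embed_dim, bias=False)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_intents)

    def forward(self, token_ids):
        embedded = self.basis(self.codes(token_ids))   # (batch, seq, embed_dim)
        _, (h_n, _) = self.encoder(embedded)           # final hidden state
        return self.classifier(h_n[-1])                # intent logits

# Joint training: the factorized embedding is updated by the task loss,
# which is the "task-aware" part of the idea.
model = CompressedEmbeddingNLU(vocab_size=50000, embed_dim=300,
                               rank=16, hidden_dim=128, num_intents=20)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 50000, (8, 12))   # dummy batch of token ids
labels = torch.randint(0, 20, (8,))         # dummy intent labels
loss = loss_fn(model(tokens), labels)
loss.backward()
optimizer.step()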
2019
Deep Neural Model Inspection and Comparison via Functional Neuron Pathways
James Fiacco | Samridhi Choudhary | Carolyn Rose
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
We introduce a general method for the interpretation and comparison of neural models. The method is used to factor a complex neural model into its functional components, which consist of sets of co-firing neurons that cut across layers of the network architecture and which we call neural pathways. The function of these pathways can be understood by identifying correlated task-level and linguistic heuristics, so that this knowledge acts as a lens for approximating what the network has learned to apply to its intended task. As a case study for investigating the utility of these pathways, we present an examination of pathways identified in models trained for two standard tasks, namely Named Entity Recognition and Recognizing Textual Entailment.
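A toy illustration of the pathway idea follows. The factorization method (NMF), the activation shapes, and the placeholder heuristic are assumptions made for this sketch, not the paper's procedure: activations from several layers are stacked into one matrix, factored into a few components of co-firing neurons, and each component's per-example activity is correlated with a linguistic heuristic.

# Sketch: group neurons across layers into "pathways" and check which pathways
# track a task-level or linguistic heuristic. Shapes and method are assumptions.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Pretend activations recorded from two layers on 500 examples.
layer1 = np.abs(rng.normal(size=(500, 128)))
layer2 = np.abs(rng.normal(size=(500, 64)))
activations = np.hstack([layer1, layer2])      # 500 examples x 192 neurons

# Factor into k components: W gives per-example pathway activity,
# H gives each neuron's membership weight in each pathway.
k = 10
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(activations)             # (500, k)
H = nmf.components_                            # (k, 192)

# Correlate each pathway's activity with a binary heuristic
# (a random placeholder here, e.g. "contains a named entity").
heuristic = rng.integers(0, 2, size=500)
for p in range(k):
    corr = np.corrcoef(W[:, p], heuristic)[0, 1]
    print(f"pathway {p}: correlation with heuristic = {corr:+.3f}")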
2017
Linguistic Markers of Influence in Informal Interactions
Shrimai Prabhumoye | Samridhi Choudhary | Evangelia Spiliopoulou | Christopher Bogart | Carolyn Rose | Alan W Black
Proceedings of the Second Workshop on NLP and Computational Social Science
There has been a long-standing interest in understanding ‘Social Influence’ both in the Social Sciences and in Computational Linguistics. In this paper, we present a novel approach to studying and measuring interpersonal influence in daily interactions. Motivated by the basic principles of influence, we attempt to identify indicative linguistic features of the posts in an online knitting community. We present the scheme used to operationalize and label the posts as influential or non-influential. Experiments with the identified features improve influence classification accuracy by 3.15%. Our results illustrate the important correlation between the structure of the language and its potential to influence others.
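A minimal sketch of feature-based influence classification is given below. The toy posts, labels, and n-gram features are placeholders; the paper relies on a hand-designed set of linguistic markers rather than the generic features shown here.

# Sketch: vectorize posts with simple n-gram features and fit a linear
# classifier for influential vs. non-influential. Data and features are toys.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "You should definitely try a smaller needle size for that yarn.",
    "I finished my first scarf today!",
    "Blocking the piece before seaming makes a huge difference.",
    "Just sharing a photo of my weekend project.",
]
labels = [1, 0, 1, 0]   # 1 = influential, 0 = non-influential (toy labels)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(posts, labels)
print(model.predict(["Try casting on fewer stitches next time."]))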