Pravalika Avvaru
2019
Topic Spotting using Hierarchical Networks with Self Attention
Pooja Chitkara | Ashutosh Modi | Pravalika Avvaru | Sepehr Janghorbani | Mubbasir Kapadia
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
The success of deep learning techniques has renewed interest in the development of dialogue systems. However, current systems struggle to hold consistent long-term conversations with users and fail to build rapport. Topic spotting, the task of automatically inferring the topic of a conversation, has been shown to be helpful in making dialogue systems more engaging and efficient. We propose a hierarchical model with self-attention for topic spotting. Experiments on the Switchboard corpus show the superior performance of our model over previously proposed techniques for topic spotting and deep models for text classification. Additionally, in contrast to offline processing of dialogue, we also analyze the performance of our model in a more realistic setting, i.e., an online setting where the topic is identified in real time as the dialogue progresses. Results show that our model is able to generalize even with limited information in the online setting.
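The abstract only outlines the architecture, so the following is a minimal PyTorch sketch of a hierarchical encoder with additive self-attention pooling at both the word and utterance level. It is not the paper's implementation: the class names, the choice of GRU encoders, the attention form, and all hyperparameters (including the topic count) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionPool(nn.Module):
    """Additive self-attention that pools a sequence of hidden states
    into one weighted-average vector."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h):                              # h: (batch, seq_len, hidden_dim)
        weights = torch.softmax(self.score(h), dim=1)  # attention over the sequence
        return (weights * h).sum(dim=1)                # (batch, hidden_dim)

class HierarchicalTopicSpotter(nn.Module):
    """Two-level encoder: words -> utterance vectors -> dialogue vector -> topic."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_topics):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.word_pool = SelfAttentionPool(2 * hidden_dim)
        self.utt_rnn = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.utt_pool = SelfAttentionPool(2 * hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_topics)

    def forward(self, dialog):                         # dialog: (batch, n_utts, n_words)
        b, u, w = dialog.shape
        words = self.embed(dialog.view(b * u, w))      # encode each utterance separately
        h, _ = self.word_rnn(words)
        utt_vecs = self.word_pool(h).view(b, u, -1)    # (batch, n_utts, 2*hidden_dim)
        h, _ = self.utt_rnn(utt_vecs)                  # encode the utterance sequence
        return self.classifier(self.utt_pool(h))      # topic logits

# Illustrative usage: 2 dialogues, 5 utterances each, 12 word ids per utterance.
# vocab_size and num_topics are placeholders, not the paper's values.
model = HierarchicalTopicSpotter(vocab_size=10000, emb_dim=100,
                                 hidden_dim=128, num_topics=66)
logits = model(torch.randint(1, 10000, (2, 5, 12)))    # -> shape (2, 66)
```

Note that nothing in this sketch fixes the number of utterances, so the online setting described in the abstract reduces to running the model on the dialogue prefix observed so far.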
2018
Retrieval-Based Neural Code Generation
Shirley Anugrah Hayati | Raphael Olivier | Pravalika Avvaru | Pengcheng Yin | Anthony Tomasic | Graham Neubig
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
In models to generate program source code from natural language, representing this code in a tree structure has been a common approach. However, existing methods often fail to generate complex code correctly due to a lack of ability to memorize large and complex structures. We introduce RECODE, a method based on subtree retrieval that makes it possible to explicitly reference existing code examples within a neural code generation model. First, we retrieve sentences that are similar to input sentences using a dynamic-programming-based sentence similarity scoring method. Next, we extract n-grams of action sequences that build the associated abstract syntax tree. Finally, we increase the probability of actions that cause the retrieved n-gram action subtree to be in the predicted code. We show that our approach improves performance on two code generation tasks by up to +2.6 BLEU.
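The three steps named in the abstract (dynamic-programming-based similarity retrieval, n-gram extraction over the AST-building action sequence, and boosting retrieved actions at decoding time) can be illustrated with a short sketch. This is not the RECODE implementation: the word-level edit distance, the `"nl"`/`"actions"` example fields, the n-gram size, and the bonus weight are stand-in assumptions, and the decoder interface is hypothetical.

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming
    (a stand-in for the paper's DP-based similarity score)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def retrieve(query, examples, k=3):
    """Return the k training examples whose natural-language sentence is
    closest to the query. Each example is assumed to be a dict with an
    "nl" sentence and an "actions" sequence that builds its AST."""
    return sorted(examples,
                  key=lambda ex: edit_distance(query.split(),
                                               ex["nl"].split()))[:k]

def ngrams(actions, n=4):
    """All n-grams of an action sequence (n=4 is an arbitrary choice)."""
    return {tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)}

def rescore(action_logprobs, prefix, retrieved_ngrams, bonus=0.5, n=4):
    """At one decoding step, add a bonus to each candidate action that
    would complete an n-gram seen in a retrieved example's actions."""
    context = tuple(prefix[-(n - 1):])
    return {a: lp + (bonus if context + (a,) in retrieved_ngrams else 0.0)
            for a, lp in action_logprobs.items()}
```

In this sketch, `retrieve` and `ngrams` run once per input sentence, while `rescore` would be applied inside the decoder's beam search at every step, nudging it toward subtrees that appeared in similar retrieved examples.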
Co-authors
- Pooja Chitkara 1
- Ashutosh Modi 1
- Sepehr Janghorbani 1
- Mubbasir Kapadia 1
- Shirley Anugrah Hayati 1
- Raphael Olivier 1
- Pengcheng Yin 1
- Anthony Tomasic 1
- Graham Neubig 1