2018
Neural Machine Translation for Low Resource Languages using Bilingual Lexicon Induced from Comparable Corpora
Sree Harsha Ramesh | Krishna Prasad Sankaranarayanan
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop
Resources for non-English languages are scarce, and this paper addresses that problem in the context of machine translation by automatically extracting parallel sentence pairs from multilingual articles available on the Internet. We use an end-to-end Siamese bidirectional recurrent neural network to generate parallel sentences from comparable multilingual articles in Wikipedia. We then show that the harvested dataset improves BLEU scores on both NMT and phrase-based SMT systems for the low-resource language pairs English–Hindi and English–Tamil, compared to training exclusively on the limited bilingual corpora collected for these pairs.
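The core idea the abstract describes — scoring candidate sentence pairs with a shared ("Siamese") encoder — can be sketched as follows. This is a hypothetical toy illustration, not the paper's code: a real system would use trained bidirectional LSTMs, while this uses a single random-weight tanh RNN cell whose weights are shared between the two inputs.

```python
import numpy as np

# Toy Siamese scoring sketch (illustrative assumption, not the paper's model):
# both sentences pass through the SAME encoder, and their cosine similarity
# is used to decide whether they are parallel.

rng = np.random.default_rng(0)
vocab, dim = 50, 8
E = rng.normal(size=(vocab, dim))        # word embedding table
W = rng.normal(size=(dim, dim)) * 0.1    # shared recurrent weights
U = rng.normal(size=(dim, dim)) * 0.1    # shared input projection

def encode(token_ids):
    """Run a simple tanh RNN over the tokens; return the final hidden state."""
    h = np.zeros(dim)
    for t in token_ids:
        h = np.tanh(E[t] @ U + h @ W)
    return h

def similarity(sent_a, sent_b):
    """Cosine similarity between the two shared-encoder representations."""
    ha, hb = encode(sent_a), encode(sent_b)
    return float(ha @ hb / (np.linalg.norm(ha) * np.linalg.norm(hb)))

# Identical inputs through a shared encoder are maximally similar.
print(round(similarity([1, 2, 3], [1, 2, 3]), 3))  # 1.0
```

Because the encoder weights are shared, identical inputs always score 1.0; in the trained setting, parallel sentence pairs are pushed toward high similarity and non-parallel pairs toward low similarity.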
Embedded-State Latent Conditional Random Fields for Sequence Labeling
Dung Thai | Sree Harsha Ramesh | Shikhar Murty | Luke Vilnis | Andrew McCallum
Proceedings of the 22nd Conference on Computational Natural Language Learning
Complex textual information extraction tasks are often posed as sequence labeling or shallow parsing, where fields are extracted using local labels made consistent through probabilistic inference in a graphical model with constrained transitions. Recently, it has become common to locally parametrize these models using rich features extracted by recurrent neural networks (such as LSTM), while enforcing consistent outputs through a simple linear-chain model, representing Markovian dependencies between successive labels. However, the simple graphical model structure belies the often complex non-local constraints between output labels. For example, many fields, such as a first name, can only occur a fixed number of times, or in the presence of other fields. While RNNs have provided increasingly powerful context-aware local features for sequence tagging, they have yet to be integrated with a global graphical model of similar expressivity in the output distribution. Our model goes beyond the linear chain CRF to incorporate multiple hidden states per output label, but parametrizes them parsimoniously with low-rank log-potential scoring matrices, effectively learning an embedding space for hidden states. This augmented latent space of inference variables complements the rich feature representation of the RNN, and allows exact global inference obeying complex, learned non-local output constraints. We experiment with several datasets and show that the model outperforms baseline CRF+RNN models when global output constraints are necessary at inference-time, and explore the interpretable latent structure.
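The abstract's key parametrization — multiple latent states per output label, with transition log-potentials factored through a low-rank embedding space — can be sketched numerically. All sizes and names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch of the low-rank latent transition idea: each output
# label owns several latent states, and the transition log-potential matrix
# over latent states is the product of two small embedding factors, so its
# rank is bounded by the embedding dimension rather than the state count.

num_labels = 4          # output labels (e.g. BIO-style tags)
states_per_label = 3    # latent states per label
rank = 5                # embedding dimension << total latent states

total_states = num_labels * states_per_label
rng = np.random.default_rng(0)

# One "outgoing" and one "incoming" embedding per latent state.
U = rng.normal(size=(total_states, rank))
V = rng.normal(size=(total_states, rank))

# Full latent transition log-potentials, rank at most `rank`.
log_potentials = U @ V.T

def label_transition_scores(lp):
    """Marginalize latent states back to label-level transition scores
    via log-sum-exp over each label pair's block of latent states."""
    lp = lp.reshape(num_labels, states_per_label, num_labels, states_per_label)
    m = lp.max()
    return m + np.log(np.exp(lp - m).sum(axis=(1, 3)))

scores = label_transition_scores(log_potentials)
print(scores.shape)  # (4, 4)
```

The low-rank factorization is what keeps the augmented latent space parsimonious: the number of transition parameters grows linearly in the number of latent states rather than quadratically.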
2016
Towards Building A Domain Agnostic Natural Language Interface to Real-World Relational Databases
Sree Harsha Ramesh | Jayant Jain | Sarath K S | Krishna R Sundaresan
Proceedings of the 13th International Conference on Natural Language Processing