Prashant Gupta
2019
Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment
Hemant Pugaliya | Karan Saxena | Shefali Garg | Sheetal Shalini | Prashant Gupta | Eric Nyberg | Teruko Mitamura
Proceedings of the 18th BioNLP Workshop and Shared Task
Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, models pre-trained on large related datasets have been able to perform well on many downstream tasks simply by fine-tuning on domain-specific datasets (similar to transfer learning). However, using such powerful models on non-trivial tasks, such as ranking and large-document classification, remains a challenge due to the input-size limitations of parallel architectures and extremely small datasets (insufficient for fine-tuning). In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman’s Rho and Mean Reciprocal Rank of 0.338 and 0.9622, respectively, on the ACL BioNLP workshop MEDIQA Question Answering shared task.
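The two-stage pipeline the abstract describes — a shared feature extractor feeding a filtering head and a re-ranking head — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the "feature extractor" here is a hypothetical lexical-overlap stand-in for a pre-trained encoder such as BERT, and all function names and thresholds are assumptions.

```python
def extract_features(question: str, answer: str) -> dict:
    """Shared features for both task heads. A toy lexical-overlap
    stand-in for a pre-trained deep feature extractor (hypothetical)."""
    q_tokens = set(question.lower().split())
    a_tokens = set(answer.lower().split())
    overlap = len(q_tokens & a_tokens)
    return {
        "overlap": overlap,
        "coverage": overlap / max(len(q_tokens), 1),
        "answer_len": len(a_tokens),
    }

def filter_head(feats: dict, threshold: float = 0.2) -> bool:
    """Task 1: keep only answers judged relevant to the question."""
    return feats["coverage"] >= threshold

def rank_head(feats: dict) -> float:
    """Task 2: score the surviving answers for re-ranking."""
    return feats["coverage"] + 0.01 * feats["overlap"]

def filter_and_rerank(question: str, answers: list) -> list:
    """End-to-end: filter candidate answers, then re-rank by score."""
    scored = []
    for ans in answers:
        feats = extract_features(question, ans)
        if filter_head(feats):
            scored.append((rank_head(feats), ans))
    return [ans for _, ans in sorted(scored, reverse=True)]

question = "what causes iron deficiency anemia"
answers = [
    "iron deficiency anemia is caused by low iron intake or blood loss",
    "the weather tomorrow will be sunny",
    "anemia has many causes including iron deficiency",
]
ranked = filter_and_rerank(question, answers)
# The off-topic answer is filtered out; the rest are ordered by score.
```

In the paper's setting, both heads would be trained jointly in a multi-task objective over the shared encoder; the sketch only fixes the heads to hand-written rules to keep the control flow visible.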
2009
Cross-document Event Extraction and Tracking: Task, Evaluation, Techniques and Challenges
Heng Ji | Ralph Grishman | Zheng Chen | Prashant Gupta
Proceedings of the International Conference RANLP-2009
Predicting Unknown Time Arguments based on Cross-Event Propagation
Prashant Gupta | Heng Ji
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers
Co-authors
- Heng Ji 2
- Ralph Grishman 1
- Zheng Chen 1
- Hemant Pugaliya 1
- Karan Saxena 1