Souradip Chakraborty
2020
BioMedBERT: A Pre-trained Biomedical Language Model for QA and IR
Souradip Chakraborty | Ekaba Bisong | Shweta Bhatt | Thomas Wagner | Riley Elliott | Francesco Mosconi
Proceedings of the 28th International Conference on Computational Linguistics
The SARS-CoV-2 (COVID-19) pandemic spotlighted the importance of moving quickly with biomedical research. However, as the number of biomedical research papers continues to increase, the task of finding relevant articles to answer pressing questions has become a significant challenge. In this work, we propose a textual data mining tool that supports literature search to accelerate the work of researchers in the biomedical domain. We achieve this by building a neural-based deep contextual understanding model for Question-Answering (QA) and Information Retrieval (IR) tasks. We also leverage the new BREATHE dataset, one of the largest available datasets of biomedical research literature, containing abstracts and full-text articles from ten different biomedical literature sources, on which we pre-train our BioMedBERT model. Our work achieves state-of-the-art results on the QA fine-tuning task on the BioASQ 5b, 6b and 7b datasets. In addition, we observe more relevant results when BioMedBERT embeddings are used with Elasticsearch for the Information Retrieval task on the intelligently formulated BioASQ dataset. We believe our diverse dataset and our unique model architecture are what led us to achieve state-of-the-art results on the QA and IR tasks.
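The retrieval setup described in the abstract (transformer embeddings scored inside Elasticsearch) can be sketched roughly as below. This is a minimal illustration only, assuming a generic BERT-base-sized encoder checkpoint (the path is a placeholder, not the authors' released model), 768-dimensional vectors, an Elasticsearch 8.x cluster with its Python client, and a hypothetical index name; the paper's actual pipeline may differ.

```python
# Minimal sketch (not the authors' released code) of dense retrieval with
# BERT-style embeddings scored by Elasticsearch. Assumptions: a local
# BERT-base-sized checkpoint, 768-dim vectors, an Elasticsearch 8.x cluster,
# and a hypothetical index name "biomed-abstracts".
import torch
from transformers import AutoTokenizer, AutoModel
from elasticsearch import Elasticsearch

MODEL_PATH = "path/to/biomedbert-checkpoint"  # placeholder, not a released model id
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModel.from_pretrained(MODEL_PATH).eval()

def embed(text: str) -> list[float]:
    """Mean-pool the encoder's last hidden states into one fixed-size vector."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).tolist()

es = Elasticsearch("http://localhost:9200")
es.indices.create(
    index="biomed-abstracts",
    mappings={"properties": {
        "text": {"type": "text"},
        "vector": {"type": "dense_vector", "dims": 768},
    }},
)

def index_abstract(doc_id: str, text: str) -> None:
    """Store the raw text together with its embedding."""
    es.index(index="biomed-abstracts", id=doc_id,
             document={"text": text, "vector": embed(text)})

def retrieve(query: str, k: int = 5):
    """Rank documents by cosine similarity between the query and stored vectors."""
    return es.search(
        index="biomed-abstracts",
        size=k,
        query={"script_score": {
            "query": {"match_all": {}},
            "script": {
                "source": "cosineSimilarity(params.q, 'vector') + 1.0",
                "params": {"q": embed(query)},
            },
        }},
    )
```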
Transformers at SemEval-2020 Task 11: Propaganda Fragment Detection Using Diversified BERT Architectures Based Ensemble Learning
Ekansh Verma | Vinodh Motupalli | Souradip Chakraborty
Proceedings of the Fourteenth Workshop on Semantic Evaluation
In this paper, we present our approach to the 'Detection of Propaganda Techniques in News Articles' task, part of the 2020 edition of the International Workshop on Semantic Evaluation. The specific objective of this task is to identify and extract the text segments in which propaganda techniques are used. We propose a multi-system deep learning framework that can be used to identify the presence of propaganda fragments in a news article, and we examine the diverse enhancements to the BERT architecture that form part of the final solution. Our final model achieved an F1-score of 0.48 on the test dataset.
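A rough sketch of fragment detection framed as token classification with a simple logit-averaging ensemble is shown below. The checkpoint paths, the binary outside/propaganda label scheme, the 0.5 threshold, and a tokenizer shared across ensemble members are illustrative assumptions; the paper's ensemble of diversified BERT architectures is more elaborate than this.

```python
# Hedged sketch of fragment detection as token classification with a
# logit-averaging ensemble. Checkpoint paths, the binary outside/propaganda
# label scheme, the 0.5 threshold, and a tokenizer shared by all ensemble
# members are illustrative assumptions, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

CHECKPOINTS = [                     # placeholders for fine-tuned BERT variants
    "path/to/bert-variant-1",
    "path/to/bert-variant-2",
    "path/to/bert-variant-3",
]
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINTS[0])  # assumes a shared vocab
models = [AutoModelForTokenClassification.from_pretrained(c, num_labels=2).eval()
          for c in CHECKPOINTS]

def predict_fragments(article: str) -> list[tuple[int, int]]:
    """Return character spans whose averaged token probability favours 'propaganda'."""
    enc = tokenizer(article, truncation=True, return_offsets_mapping=True,
                    return_tensors="pt")
    offsets = enc.pop("offset_mapping")[0]          # (seq_len, 2) character offsets
    with torch.no_grad():
        # Average the per-token class probabilities over all ensemble members.
        probs = torch.stack([m(**enc).logits.softmax(-1) for m in models]).mean(0)[0]
    spans, start, prev_end = [], None, 0
    for (s, e), p in zip(offsets.tolist(), probs.tolist()):
        is_prop = e > s and p[1] > 0.5              # (0, 0) offsets mark special tokens
        if is_prop and start is None:
            start = s                               # open a new fragment
        elif not is_prop and start is not None:
            spans.append((start, prev_end))         # close the running fragment
            start = None
        if e > s:
            prev_end = e
    if start is not None:
        spans.append((start, prev_end))
    return spans
```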