Razvan-Gabriel Dumitru
Also published as: Razvan Gabriel Dumitru
2024
Retrieval Augmented Generation of Subjective Explanations for Socioeconomic Scenarios
Razvan-Gabriel Dumitru | Maria Alexeeva | Keith Alcock | Nargiza Ludgate | Cheonkam Jeong | Zara Fatima Abdurahaman | Prateek Puri | Brian Kirchhoff | Santadarshan Sadhu | Mihai Surdeanu
Proceedings of the Sixth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS 2024)
We introduce a novel retrieval augmented generation approach that explicitly models causality and subjectivity. We use it to generate explanations for socioeconomic scenarios that capture the beliefs of local populations. Through intrinsic and extrinsic evaluation, we show that our explanations, contextualized using causal and subjective information retrieved from local news sources, are rated higher than those produced by other large language models, both in how well they mimic the beliefs of the real population and in the quality of the explanations. We also discuss the role subjectivity plays in the evaluation of this natural language generation task.
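The paper's pipeline is not reproduced on this page, but the general retrieval-augmented pattern it builds on can be sketched as follows. The corpus snippets, the TF-IDF retriever, and the prompt template below are illustrative assumptions, not the authors' implementation, which uses specialized causal and subjective retrieval.

```python
# Minimal retrieval-augmented prompting sketch (illustrative only;
# not the paper's causal/subjective retrieval pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for passages mined from local news sources (hypothetical).
corpus = [
    "Local farmers report that drought pushed food prices up this season.",
    "Residents believe new road construction will bring jobs to the region.",
    "Officials say school enrollment fell after fees were introduced.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    vectorizer = TfidfVectorizer().fit(corpus + [query])
    doc_vecs = vectorizer.transform(corpus)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

scenario = "Why might families in this district keep children out of school?"
context = "\n".join(retrieve(scenario))
# The retrieved passages would be passed to an LLM as grounding context.
prompt = f"Context from local news:\n{context}\n\nScenario: {scenario}\nExplanation:"
print(prompt)
```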
ELLEN: Extremely Lightly Supervised Learning for Efficient Named Entity Recognition
Haris Riaz | Razvan Gabriel Dumitru | Mihai Surdeanu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In this work, we revisit the problem of semi-supervised named entity recognition (NER), focusing on extremely light supervision, consisting of a lexicon containing only 10 examples per class. We introduce ELLEN, a simple, fully modular, neuro-symbolic method that blends fine-tuned language models with linguistic rules. These rules incorporate insights such as “One Sense Per Discourse”, the use of a Masked Language Model as an unsupervised named entity recognizer, part-of-speech tags for identifying and eliminating unlabeled entities as false negatives, and other intuitions about classifier confidence scores in local and global contexts. ELLEN achieves very strong performance on the CoNLL-2003 dataset when using only the minimal supervision from this lexicon. It also outperforms most existing (and considerably more complex) semi-supervised NER methods under the same supervision settings commonly used in the literature (i.e., 5% of the training data). Further, we evaluate our CoNLL-2003 model in a zero-shot scenario on WNUT-17, where we find that it outperforms GPT-3.5 and achieves comparable performance to GPT-4. In a zero-shot setting, ELLEN also achieves over 75% of the performance of a strong, fully supervised model trained on gold data. Our code is publicly available.
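The released ELLEN code is the authoritative reference; purely as a rough illustration of one of the rules named in the abstract, a “One Sense Per Discourse” propagation pass can be sketched in a few lines. The data structures and confidence threshold here are hypothetical, not taken from the paper.

```python
# Sketch of a "One Sense Per Discourse" propagation step (illustrative;
# ELLEN's released code is the authoritative implementation).
# Idea: if one mention of a string is labeled confidently within a
# document, propagate that label to its other, unlabeled mentions.

# (mention text, predicted label, confidence); None = unlabeled.
mentions = [
    ("Arizona", "LOC", 0.95),
    ("Surdeanu", "PER", 0.88),
    ("Arizona", None, 0.0),
    ("Arizona", None, 0.0),
]

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff

def one_sense_per_discourse(mentions):
    # Collect high-confidence labels as per-document anchors.
    anchor = {}
    for text, label, conf in mentions:
        if label is not None and conf >= CONFIDENCE_THRESHOLD:
            anchor[text] = label
    # Unlabeled mentions of an anchored string inherit its label.
    return [
        (text, anchor.get(text, label), conf)
        for text, label, conf in mentions
    ]

print(one_sense_per_discourse(mentions))
# Both unlabeled "Arizona" mentions inherit the LOC label.
```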
Co-authors
- Mihai Surdeanu 2
- Maria Alexeeva 1
- Keith Alcock 1
- Nargiza Ludgate 1
- Cheonkam Jeong 1
- Zara Fatima Abdurahaman 1
- Prateek Puri 1
- Brian Kirchhoff 1
- Santadarshan Sadhu 1
- Haris Riaz 1