Dorota Glowacka


2021

Statistically Significant Detection of Semantic Shifts using Contextual Word Embeddings
Yang Liu | Alan Medlar | Dorota Glowacka
Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems

Detecting lexical semantic change in smaller data sets, e.g. in historical linguistics and digital humanities, is challenging due to a lack of statistical power. This issue is exacerbated by non-contextual embedding models that produce one embedding per word and, therefore, mask the variability present in the data. In this article, we propose an approach to estimate semantic shift by combining contextual word embeddings with permutation-based statistical tests. We use the false discovery rate procedure to address the large number of hypothesis tests being conducted simultaneously. We demonstrate the performance of this approach in simulation, where it achieves consistently high precision by suppressing false positives. We additionally analyze real-world data from SemEval-2020 Task 1 and the Liverpool FC subreddit corpus. We show that by taking sample variation into account, we can improve the robustness of individual semantic shift estimates without degrading overall performance.
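The core of the method described in the abstract is a two-sample permutation test over a word's per-token contextual embeddings from two corpora, followed by Benjamini–Hochberg false discovery rate control across the vocabulary. A minimal sketch of that pipeline, assuming a distance-between-mean-embeddings test statistic (the paper's exact statistic and helper names are illustrative assumptions, not the authors' implementation), might look like this:

```python
import numpy as np

def permutation_pvalue(emb_a, emb_b, n_perm=10_000, seed=0):
    """Two-sample permutation test for semantic shift.

    emb_a, emb_b: arrays of shape (n_a, d) and (n_b, d) holding one
    word's contextual token embeddings from two corpora. The test
    statistic here is the Euclidean distance between group means
    (an assumed stand-in for the paper's statistic).
    """
    rng = np.random.default_rng(seed)
    observed = np.linalg.norm(emb_a.mean(axis=0) - emb_b.mean(axis=0))
    pooled = np.vstack([emb_a, emb_b])
    n_a = len(emb_a)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        stat = np.linalg.norm(pooled[idx[:n_a]].mean(axis=0)
                              - pooled[idx[n_a:]].mean(axis=0))
        hits += stat >= observed
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of hypotheses rejected under BH FDR control."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

In this sketch, one p-value per word would come from permutation_pvalue, and the resulting vector would be passed through benjamini_hochberg to flag which words show a statistically significant shift while controlling the expected proportion of false positives.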

2019

A Framework for Annotating ‘Related Works’ to Support Feedback to Novice Writers
Arlene Casey | Bonnie Webber | Dorota Glowacka
Proceedings of the 13th Linguistic Annotation Workshop

The expectations of academic writing can be difficult for novice writers to assimilate, and recent years have seen several automated tools become available to support academic writing. Our work presents a framework for annotating features of the Related Work section of academic papers that supports feedback to writers.

Classifying Author Intention for Writer Feedback in Related Work
Arlene Casey | Bonnie Webber | Dorota Glowacka
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)

The ability to produce high-quality publishable material is critical to academic success, but many postgraduate students struggle to learn to do so. While recent years have seen an increase in tools designed to provide feedback on aspects of writing, one aspect that has so far been neglected is the Related Work section of academic research papers. To address this, we have trained a supervised classifier on a corpus of 94 Related Work sections and evaluated it against a manually annotated gold standard. The classifier uses novel features pertaining to citation types and co-reference, along with patterns identified by studying Related Work sections. We show that these novel features contribute to classifier performance, which compares favourably with that of similar systems that classify author intentions and provide feedback on academic writing.
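The abstract describes a feature-based supervised classifier over Related Work text. A toy illustration of that general setup, assuming hypothetical surface features and intention labels (the paper's actual citation-type and co-reference features would require parsing and co-reference resolution, and its label set is not reproduced here), might be:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def extract_features(sentence):
    """Hypothetical stand-ins for the paper's citation-type and
    co-reference features; real feature extraction would need NLP
    preprocessing rather than string matching."""
    tokens = sentence.lower().split()
    return {
        "has_parenthetical_citation": "(" in sentence and ")" in sentence,
        "first_person": any(t in ("we", "our") for t in tokens),
        "demonstrative": any(t in ("this", "these") for t in tokens),
        "length": len(tokens),
    }

# Illustrative training sentences and intention labels (not the corpus).
sentences = [
    "Smith (2010) proposed a graph-based approach.",
    "We extend this approach with discourse features.",
]
labels = ["describe_other_work", "state_own_contribution"]

clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
clf.fit([extract_features(s) for s in sentences], labels)
print(clf.predict([extract_features("Jones (2012) also used co-reference.")]))
```

The DictVectorizer/LogisticRegression pairing is just one conventional choice for sparse, mixed boolean-and-numeric features; the paper does not specify this stack.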