JuniperLiu at CoMeDi Shared Task: Models as Annotators in Lexical Semantics Disagreements

Zhu Liu, Zhen Hu, Ying Liu


Abstract
We present the results of our system for the CoMeDi Shared Task, which predicts majority votes (Subtask 1) and annotator disagreements (Subtask 2). Our approach combines model ensemble strategies with MLP-based and threshold-based methods trained on pretrained language models. Treating individual models as virtual annotators, we simulate the annotation process by designing aggregation measures that incorporate continuous relatedness scores and discrete classification labels to capture both majority and disagreement. Additionally, we employ anisotropy removal techniques to enhance performance. Experimental results demonstrate the effectiveness of our methods, particularly for Subtask 2. Notably, we find that the standard deviation of continuous relatedness scores across different model manipulations correlates more strongly with human disagreement annotations than metrics computed on aggregated discrete labels. The code will be published at https://github.com/RyanLiut/CoMeDi_Solution
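The disagreement measure described above can be illustrated with a minimal sketch. Here, each "virtual annotator" is a model variant that produces a cosine relatedness score for a word-use pair from contextual embeddings; the standard deviation of those scores serves as the disagreement signal. The function names and the use of plain cosine similarity are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def relatedness_scores(emb_a, emb_b):
    """Cosine relatedness between two sets of contextual embeddings.

    emb_a, emb_b: arrays of shape (n_models, dim), one row per
    model variant ("virtual annotator"). Returns one continuous
    relatedness score per model.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

def disagreement_signal(scores):
    # Standard deviation of continuous relatedness scores across
    # model manipulations -- the quantity the abstract reports as
    # correlating with human disagreement annotations.
    return float(np.std(scores))
```

For instance, if all model variants assign identical scores, the signal is zero; scores spread across the scale yield a large signal, mirroring high annotator disagreement.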
Anthology ID:
2025.comedi-1.10
Volume:
Proceedings of Context and Meaning: Navigating Disagreements in NLP Annotation
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Michael Roth, Dominik Schlechtweg
Venues:
CoMeDi | WS
Publisher:
International Committee on Computational Linguistics
Pages:
103–112
URL:
https://aclanthology.org/2025.comedi-1.10/
Cite (ACL):
Zhu Liu, Zhen Hu, and Ying Liu. 2025. JuniperLiu at CoMeDi Shared Task: Models as Annotators in Lexical Semantics Disagreements. In Proceedings of Context and Meaning: Navigating Disagreements in NLP Annotation, pages 103–112, Abu Dhabi, UAE. International Committee on Computational Linguistics.
Cite (Informal):
JuniperLiu at CoMeDi Shared Task: Models as Annotators in Lexical Semantics Disagreements (Liu et al., CoMeDi 2025)
PDF:
https://aclanthology.org/2025.comedi-1.10.pdf