2021
LIORI at SemEval-2021 Task 2: Span Prediction and Binary Classification approaches to Word-in-Context Disambiguation
Adis Davletov | Nikolay Arefyev | Denis Gordeev | Alexey Rey
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
This paper presents our approaches to SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation. The first approach reformulated the task as a question-answering (span-prediction) problem, while the second framed it as a binary classification problem. Our best system, an ensemble of XLM-R based binary classifiers trained with data augmentation, is among the 3 best-performing systems for Russian, French and Arabic in the multilingual subtask. In the post-evaluation period, we experimented with batch normalization, subword pooling and target word occurrence aggregation methods, obtaining further performance improvements.
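A minimal, hypothetical sketch of the binary-classification framing with XLM-R, assuming the HuggingFace transformers library; the authors' input formatting, data augmentation and ensembling are not given in the abstract and are omitted here.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sentence-pair classifier: does the target word carry the same sense in both contexts?
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

# Two example contexts for the target word "bank".
context_1 = "She sat on the bank of the river."
context_2 = "He deposited the cheque at the bank."

inputs = tokenizer(context_1, context_2, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The classification head is untrained here; probabilities become meaningful only after fine-tuning.
print(logits.softmax(dim=-1))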
LIORI at SemEval-2021 Task 8: Ask Transformer for measurements
Adis Davletov | Denis Gordeev | Nikolay Arefyev | Emil Davletov
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
This work describes our approach to the subtasks of SemEval-2021 Task 8: MeasEval: Counts and Measurements, which officially took first place in the competition. To solve all subtasks we use multi-task learning in a question-answering-like manner, with learnable scalar weights controlling each subtask's contribution to the final loss. We fine-tune LUKE to extract quantity spans, and we fine-tune RoBERTa to extract everything related to the found quantities, including the quantities themselves.
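The abstract does not state how the learnable scalar weights are parameterised; the sketch below illustrates one common option (a log-variance style weighting in PyTorch) purely for orientation.

import torch
import torch.nn as nn

class WeightedMultiTaskLoss(nn.Module):
    """Combines per-subtask losses with one learnable scalar per subtask."""
    def __init__(self, num_tasks):
        super().__init__()
        # Learnable log-weights, optimised jointly with the model parameters.
        self.log_weights = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for w, loss in zip(self.log_weights, task_losses):
            # exp(-w) scales the subtask loss; adding w keeps the weights from collapsing.
            total = total + torch.exp(-w) * loss + w
        return total

criterion = WeightedMultiTaskLoss(num_tasks=3)
print(criterion([torch.tensor(0.7), torch.tensor(1.2), torch.tensor(0.4)]))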
LIORI at the FinCausal 2021 Shared task: Transformer ensembles are not enough to win
Adis Davletov | Sergey Pletenev | Denis Gordeev
Proceedings of the 3rd Financial Narrative Processing Workshop
2020
Gorynych Transformer at SemEval-2020 Task 6: Multi-task Learning for Definition Extraction
Adis Davletov | Nikolay Arefyev | Alexander Shatilov | Denis Gordeev | Alexey Rey
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes our approach to the “DeftEval: Extracting Definitions from Free Text in Textbooks” competition held as part of SemEval-2020. The task was devoted to finding and labeling definitions in texts. DeftEval was split into three subtasks: sentence classification, sequence labeling and relation classification. Our solution ranked 5th in the first subtask and 23rd and 21st in the second and third subtasks, respectively. We applied simultaneous multi-task learning with Transformer-based models for subtasks 1 and 3 and a single BERT-based model for named entity recognition.
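A hypothetical sketch of the simultaneous multi-task setup for subtasks 1 and 3: a shared Transformer encoder with one classification head per subtask. The authors' actual architecture, label sets and training procedure are not described in the abstract, and the relation label count below is only a placeholder.

import torch.nn as nn
from transformers import AutoModel

class MultiTaskDeftModel(nn.Module):
    def __init__(self, model_name="bert-base-cased", num_relations=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.sentence_head = nn.Linear(hidden, 2)              # subtask 1: definition / no definition
        self.relation_head = nn.Linear(hidden, num_relations)  # subtask 3: placeholder label count

    def forward(self, input_ids, attention_mask):
        # Shared contextual encoding; the [CLS] vector feeds both task-specific heads.
        cls = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.sentence_head(cls), self.relation_head(cls)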
LIORI at the FinCausal 2020 Shared task
Denis Gordeev | Adis Davletov | Alexey Rey | Nikolay Arefiev
Proceedings of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation
In this paper, we describe the results of team LIORI at the FinCausal 2020 Shared Task, held as part of the 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation. The shared task consisted of two subtasks: classifying whether a sentence contains any causality, and labelling the phrases that indicate causes and consequences. Our team ranked 1st in the first subtask and 4th in the second. We used Transformer-based models with joint-task learning and their ensembles.
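The abstract does not say how the ensemble combined individual model outputs; one plausible aggregation for the sentence-level subtask, shown only as a toy illustration, is a majority vote over per-model predictions.

from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: one list of predicted labels per model, aligned by example."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions_per_model)]

# Three models voting on three examples.
print(majority_vote([[1, 0, 1], [1, 1, 0], [0, 1, 1]]))  # -> [1, 1, 1]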
2019
Neural GRANNy at SemEval-2019 Task 2: A combined approach for better modeling of semantic relationships in semantic frame induction
Nikolay Arefyev | Boris Sheludko | Adis Davletov | Dmitry Kharchev | Alex Nevidomsky | Alexander Panchenko
Proceedings of the 13th International Workshop on Semantic Evaluation
We describe our solutions for the semantic frame and role induction subtasks of SemEval-2019 Task 2. Our approaches achieved the highest scores, and the solution to the frame induction problem officially took first place. The main contributions of this paper concern the semantic frame induction problem. We propose a combined approach that employs two different types of vector representations: dense representations from hidden layers of a masked language model, and sparse representations based on substitutes for the target word in the context. The former groups synonyms better, while the latter is better at disambiguating homonyms. Extending the context to include nearby sentences improves the results in both cases. New Hearst-like patterns for verbs are introduced and prove effective for frame induction. Finally, we propose an approach to selecting the number of clusters in agglomerative clustering.
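As an illustration of the final clustering step only: the authors' representations, distance settings and cluster-number selection rule are not specified in this abstract, so the silhouette criterion below is merely a stand-in under those assumptions.

import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
vectors = rng.normal(size=(40, 16))  # placeholder for combined dense + sparse verb representations

best_k, best_score = None, -1.0
for k in range(2, 10):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(vectors)
    score = silhouette_score(vectors, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)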