Maria Lebedeva
2022
Semi-automatically Annotated Learner Corpus for Russian
Anisia Katinskaia | Maria Lebedeva | Jue Hou | Roman Yangarber
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We present ReLCo, the Revita Learner Corpus: a new semi-automatically annotated learner corpus for Russian. The corpus was collected while several thousand L2 learners were performing exercises using the Revita language-learning system. All errors were detected automatically by the system and annotated by type. Part of the corpus was annotated manually; this part was created for further experiments on automatic assessment of grammatical correctness. The Learner Corpus provides valuable data for studying patterns of grammatical errors, experimenting with grammatical error detection and grammatical error correction, and developing new exercises for language learners. Automating the collection and annotation makes building the learner corpus much cheaper and faster than the traditional approach to constructing learner corpora. We make the data publicly available.
2021
Slav-NER: the 3rd Cross-lingual Challenge on Recognition, Normalization, Classification, and Linking of Named Entities across Slavic Languages
Jakub Piskorski | Bogdan Babych | Zara Kancheva | Olga Kanishcheva | Maria Lebedeva | Michał Marcińczuk | Preslav Nakov | Petya Osenova | Lidia Pivovarova | Senja Pollak | Pavel Přibáň | Ivaylo Radev | Marko Robnik-Sikonja | Vasyl Starko | Josef Steinberger | Roman Yangarber
Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing
This paper describes Slav-NER: the 3rd Multilingual Named Entity Challenge in Slavic languages. The tasks involve recognition of mentions of named entities in Web documents, normalization of the names, and cross-lingual linking. The Challenge covers six languages and five entity types, and is organized as part of the 8th Balto-Slavic Natural Language Processing Workshop, co-located with the EACL 2021 Conference. Ten teams participated in the competition. Performance on the named entity recognition task reached 90% F-measure, much higher than reported in the first edition of the Challenge. Seven teams covered all six languages, and five teams participated in the cross-lingual entity linking task. Detailed evaluation information is available on the shared task web page.