Karel Beneš
2025
BenCzechMark: A Czech-Centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
Martin Fajcik | Martin Docekal | Jan Dolezal | Karel Ondrej | Karel Beneš | Jan Kapsa | Pavel Smrz | Alexander Polok | Michal Hradis | Zuzana Neverilova | Ales Horak | Radoslav Sabol | Michal Stefanik | Adam Jirkovsky | David Adamczyk | Petr Hyner | Jan Hula | Hynek Kydlicek
Transactions of the Association for Computational Linguistics, Volume 13
We present BenCzechMark (BCM), the first comprehensive Czech language benchmark designed for large language models, offering diverse tasks, multiple task formats, and multiple evaluation metrics. Its duel scoring system is grounded in statistical significance theory and uses aggregation across tasks inspired by social preference theory. Our benchmark encompasses 50 challenging tasks with corresponding test datasets, primarily in native Czech, 14 of which are newly collected. These tasks span 8 categories and cover diverse domains, including historical Czech news, essays from pupils or language learners, and spoken word. Furthermore, we collect and clean BUT-Large Czech Collection, the largest publicly available clean Czech language corpus, and use it for (i) contamination analysis and (ii) continuous pretraining of the first Czech-centric 7B language model with Czech-specific tokenization. We use our model as a baseline for comparison with publicly available multilingual models. Lastly, we release and maintain a leaderboard with 50 existing model submissions, where new model submissions can be made at https://huggingface.co/spaces/CZLC/BenCzechMark.
2023
BUT Systems for IWSLT 2023 Marathi - Hindi Low Resource Speech Translation Task
Santosh Kesiraju | Karel Beneš | Maksim Tikhonov | Jan Černocký
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper describes the systems submitted for the Marathi-to-Hindi low-resource speech translation task. Our primary submission is based on an end-to-end direct speech translation system, whereas the contrastive one is a cascaded system. The backbone of both systems is a Hindi-Marathi bilingual ASR system trained on 2790 hours of imperfectly transcribed speech. The end-to-end speech translation system was directly initialized from the ASR model and then fine-tuned for direct speech translation with an auxiliary CTC loss for translation. The MT model for the cascaded system is initialized from a cross-lingual language model, which was then fine-tuned using 1.6M parallel sentences. All our systems were trained from scratch on publicly available datasets. In the end, we use a language model to re-score the n-best hypotheses. Our primary submission achieved 30.5 and 39.6 BLEU, whereas the contrastive system obtained 21.7 and 28.6 BLEU on the official dev and test sets, respectively. The paper also presents an analysis of several experiments that were conducted and outlines strategies for improving speech translation in low-resource scenarios.