Martin Docekal
2025
BenCzechMark: A Czech-Centric Multitask and Multimetric Benchmark for Large Language Models with Duel Scoring Mechanism
Martin Fajcik | Martin Docekal | Jan Dolezal | Karel Ondrej | Karel Beneš | Jan Kapsa | Pavel Smrz | Alexander Polok | Michal Hradis | Zuzana Neverilova | Ales Horak | Radoslav Sabol | Michal Stefanik | Adam Jirkovsky | David Adamczyk | Petr Hyner | Jan Hula | Hynek Kydlicek
Transactions of the Association for Computational Linguistics, Volume 13
We present BenCzechMark (BCM), the first comprehensive Czech-language benchmark designed for large language models, offering diverse tasks, multiple task formats, and multiple evaluation metrics. Its duel scoring system is grounded in statistical significance theory and uses aggregation across tasks inspired by social preference theory. The benchmark encompasses 50 challenging tasks with corresponding test datasets, primarily in native Czech, 14 of which are newly collected. These tasks span 8 categories and cover diverse domains, including historical Czech news, essays by pupils and language learners, and spoken word. Furthermore, we collect and clean the BUT-Large Czech Collection, the largest publicly available clean Czech language corpus, and use it for (i) contamination analysis and (ii) continuous pretraining of the first Czech-centric 7B language model with Czech-specific tokenization. We use our model as a baseline for comparison with publicly available multilingual models. Lastly, we release and maintain a leaderboard with 50 existing model submissions; new models can be submitted at https://huggingface.co/spaces/CZLC/BenCzechMark.
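The abstract does not spell out the duel mechanism, so the following is only a rough illustrative sketch of the general idea: a pairwise "duel" between two models on one task can be decided by a one-sided sign test on per-example correctness, with a win awarded only when the advantage is statistically significant. The function names and the choice of test are assumptions for illustration; the actual BCM procedure may differ.

```python
# Hypothetical sketch of a significance-based "duel" between two models.
# A model wins a task only if it beats the other on significantly more of
# the examples where the two disagree (one-sided sign test).
from math import comb


def sign_test_p(wins_a: int, wins_b: int) -> float:
    """One-sided sign test: P(X >= wins_a) for X ~ Binomial(n, 0.5),
    where n counts only the examples on which the two models disagree."""
    n = wins_a + wins_b
    if n == 0:
        return 1.0
    return sum(comb(n, k) for k in range(wins_a, n + 1)) / 2 ** n


def duel(correct_a: list[bool], correct_b: list[bool], alpha: float = 0.05) -> int:
    """Return +1 if model A significantly beats B, -1 if B beats A, else 0."""
    wins_a = sum(a and not b for a, b in zip(correct_a, correct_b))
    wins_b = sum(b and not a for a, b in zip(correct_a, correct_b))
    if sign_test_p(wins_a, wins_b) < alpha:
        return 1
    if sign_test_p(wins_b, wins_a) < alpha:
        return -1
    return 0
```

Per-task duel outcomes like these could then be aggregated across tasks into an overall ranking; the social-preference-theory aggregation mentioned in the abstract is not reproduced here.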
2021
R2-D2: A Modular Baseline for Open-Domain Question Answering
Martin Fajcik | Martin Docekal | Karel Ondrej | Pavel Smrz
Findings of the Association for Computational Linguistics: EMNLP 2021
This work presents a novel four-stage open-domain QA pipeline R2-D2 (Rank twice, reaD twice). The pipeline is composed of a retriever, a passage reranker, an extractive reader, a generative reader, and a mechanism that aggregates the final prediction from all of the system’s components. We demonstrate its strength across three open-domain QA datasets: NaturalQuestions, TriviaQA, and EfficientQA, surpassing the state of the art on the first two. Our analysis demonstrates that: (i) combining the extractive and generative readers yields absolute improvements of up to 5 points in exact match and is at least twice as effective as a posterior averaging ensemble of the same models with different parameters, (ii) the extractive reader with fewer parameters can match the performance of the generative reader on extractive QA datasets.
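To make the posterior-averaging baseline mentioned in the abstract concrete, here is a minimal sketch, assuming each ensemble member produces a probability over candidate answer strings; the function name and data layout are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a posterior averaging ensemble: each member assigns
# probabilities to candidate answers, the ensemble averages them per answer,
# and the final prediction is the answer with the highest averaged posterior.
def posterior_average(posteriors: list[dict[str, float]]) -> str:
    """Average per-answer probabilities across members and pick the argmax."""
    combined: dict[str, float] = {}
    for p in posteriors:
        for answer, prob in p.items():
            combined[answer] = combined.get(answer, 0.0) + prob / len(posteriors)
    return max(combined, key=combined.get)
```

The paper's stronger alternative, combining an extractive and a generative reader, instead aggregates heterogeneous components rather than averaging identically-structured models.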
2020
BUT-FIT at SemEval-2020 Task 4: Multilingual Commonsense
Josef Jon | Martin Fajcik | Martin Docekal | Pavel Smrz
Proceedings of the Fourteenth Workshop on Semantic Evaluation
We participated in all three subtasks. In subtasks A and B, our submissions are based on pretrained language representation models (namely ALBERT) and data augmentation. We experimented with solving the task for another language, Czech, by means of multilingual models and a machine-translated dataset, or by translating the model inputs. We show that with a strong machine translation system, our system can be used in another language with a small loss in accuracy. In subtask C, our submission, which is based on a pretrained sequence-to-sequence model (BART), ranked 1st in the BLEU ranking; however, we show that the correlation between BLEU and human evaluation, in which our submission ended up 4th, is low. We analyse the metrics used in the evaluation and propose an additional score based on the model from subtask B, which correlates well with our manual ranking, as well as a reranking method based on the same principle. We performed an error and dataset analysis for all subtasks and present our findings.
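The low metric–human correlation reported above is typically measured with a rank correlation; as a generic illustration (not the paper's exact analysis), a Spearman coefficient between BLEU scores and human scores can be computed with the classic formula, assuming no tied values:

```python
# Illustrative Spearman rank correlation between two score lists,
# e.g. per-system BLEU scores vs. human evaluation scores.
# Uses 1 - 6*sum(d^2) / (n*(n^2 - 1)); assumes no ties, for simplicity.
def spearman(x: list[float], y: list[float]) -> float:
    def ranks(v: list[float]) -> list[int]:
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A value near 1 means the metric ranks systems like the human judges; a value near 0 or below indicates the kind of disagreement the abstract describes.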
BUT-FIT at SemEval-2020 Task 5: Automatic Detection of Counterfactual Statements with Deep Pre-trained Language Representation Models
Martin Fajcik | Josef Jon | Martin Docekal | Pavel Smrz
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes BUT-FIT’s submission at SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused on detecting whether a given statement contains a counterfactual (Subtask 1) and extracting both the antecedent and consequent parts of the counterfactual from the text (Subtask 2). We experimented with various state-of-the-art language representation models (LRMs). We found the RoBERTa LRM to perform best in both subtasks. We achieved first place in both exact match and F1 for Subtask 2 and ranked second for Subtask 1.
JokeMeter at SemEval-2020 Task 7: Convolutional Humor
Martin Docekal | Martin Fajcik | Josef Jon | Pavel Smrz
Proceedings of the Fourteenth Workshop on Semantic Evaluation
This paper describes our system for humor evaluation within SemEval-2020 Task 7. The system is based on a convolutional neural network architecture. We investigate the system on the official dataset and provide further insight into the model itself by examining how its learned inner features look.