%0 Conference Proceedings
%T BUT-FIT at SemEval-2020 Task 5: Automatic Detection of Counterfactual Statements with Deep Pre-trained Language Representation Models
%A Fajcik, Martin
%A Jon, Josef
%A Docekal, Martin
%A Smrz, Pavel
%Y Herbelot, Aurelie
%Y Zhu, Xiaodan
%Y Palmer, Alexis
%Y Schneider, Nathan
%Y May, Jonathan
%Y Shutova, Ekaterina
%S Proceedings of the Fourteenth Workshop on Semantic Evaluation
%D 2020
%8 December
%I International Committee for Computational Linguistics
%C Barcelona (online)
%F fajcik-etal-2020-fit
%X This paper describes BUT-FIT’s submission at SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused on detecting whether a given statement contains a counterfactual (Subtask 1) and extracting both antecedent and consequent parts of the counterfactual from the text (Subtask 2). We experimented with various state-of-the-art language representation models (LRMs). We found RoBERTa LRM to perform the best in both subtasks. We achieved the first place in both exact match and F1 for Subtask 2 and ranked second for Subtask 1.
%R 10.18653/v1/2020.semeval-1.53
%U https://aclanthology.org/2020.semeval-1.53
%U https://doi.org/10.18653/v1/2020.semeval-1.53
%P 437-444