BUT-FIT at SemEval-2020 Task 5: Automatic Detection of Counterfactual Statements with Deep Pre-trained Language Representation Models

This paper describes BUT-FIT’s submission to SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused on detecting whether a given statement contains a counterfactual (Subtask 1) and on extracting both the antecedent and consequent parts of the counterfactual from the text (Subtask 2). We experimented with various state-of-the-art language representation models (LRMs) and found RoBERTa to perform best in both subtasks. We achieved first place in both exact match and F1 for Subtask 2 and ranked second for Subtask 1.


Introduction
One of the concerns of SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals (Yang et al., 2020) is to research the extent to which current state-of-the-art systems can detect counterfactual statements. A counterfactual statement, as defined in this competition, is a conditional composed of two parts. The former part is the antecedent, a statement that is contradictory to known facts. The latter is the consequent, a statement that describes what would happen had the antecedent held. To detect a counterfactual statement, the system often needs to possess commonsense world knowledge in order to decide whether the antecedent contradicts it. In addition, such a system must be able to reason over the consequences that would arise had the antecedent been true. In some cases, the consequent might not be present at all; instead, a clause that resembles a consequent but contains no consequential statement might appear. Figure 1 shows a set of examples drawn from the data.

If that was my daughter, I would have asked If I did something wrong.
Nike's revenue last quarter alone climbed a solid 5%, to $8.432 billion, and would have increased 7% had it not been for foreign-currency exchange.
Al Sharpton's storefront headquarters in Harlem, the gatherings were solemn, spirited and reflected the fraught nature of what would have been the Rev.

Figure 1: Three examples from the training data containing counterfactual statements. Antecedents are highlighted in red bold, consequents in blue bold italic. The last example has no consequent.
Counterfactuals have been studied across a wide spectrum of domains. For instance, logicians and philosophers focus on the logical rules connecting the parts of a counterfactual and its outcome (Goodman, 1947). Political scientists have conducted counterfactual thought experiments as hypothetical tests of historical events, policies or other aspects of society (Tetlock and Belkin, 1996). However, there is only a small amount of work in computational linguistics studying this phenomenon. SemEval-2020 Task 5 aims at filling this gap in the field. The challenge consists of two subtasks:
1. Detecting counterfactual statements: classify whether the sentence contains a counterfactual statement.
2. Detecting antecedent and consequence: extract the boundaries of the antecedent and the consequent from the input text.
The approaches we adopted follow recent advancements in deep pre-trained language representation models. In particular, we experimented with fine-tuning the BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019) models. Our implementation is available online.
System overview

Language Representation Models

We experimented with three language representation models (LRMs):

BERT (Devlin et al., 2019) is pre-trained using a multi-task objective consisting of denoising LM and inter-sentence coherence (ISC) sub-objectives. The LM objective aims at predicting the identity of 15% randomly masked tokens present at the input. Given two sentences from the corpus, the ISC objective is to classify whether the second sentence follows the first sentence in the corpus; the second sentence is replaced randomly in half of the cases. During pre-training, the input consists of two documents, each represented as a sequence of tokens separated by a special [SEP] token. The input tokens are represented via jointly learned token embeddings $E_t$, segment embeddings $E_s$ capturing whether the word belongs to document 1 or document 2, and positional embeddings $E_p$, since self-attention is a position-invariant operation. During fine-tuning, we leave the second segment empty.
RoBERTa (Liu et al., 2019) is a BERT-like model with a different training procedure. This includes dropping the ISC sub-objective, tokenizing via byte pair encoding (Sennrich et al., 2016) instead of WordPiece, full-length training sequences, more training data, a larger batch size, dynamic token masking instead of masking done during preprocessing, and more hyperparameter tuning.
ALBERT (Lan et al., 2019) is a RoBERTa-like model, but with n-gram token masking (consecutive n-grams of random length from the input are masked), cross-layer parameter sharing, a novel ISC objective that aims at detecting whether the order of two consecutive sentences matches the data, input embedding factorization, SentencePiece tokenization (Kudo and Richardson, 2018) and a much larger model dimension. The model is currently at the top of the leaderboards of many natural language understanding tasks, including GLUE (Wang et al., 2018) and SQuAD2.0 (Rajpurkar et al., 2018).

Subtask 1: Detecting counterfactual statements
The first part of the challenge is a binary classification task, where the participating systems determine whether the input sentence is a counterfactual statement.
A baseline system applying an SVM classifier (Cortes and Vapnik, 1995) over TF-IDF features was supplied by the organizers. We modified this script to use other simple classifiers over the same features, namely Gaussian Naive Bayes and a 6-layer perceptron network with 64 neurons in each layer.
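The following is a minimal sketch of these modified baselines, assuming sklearn pipelines over the organizers' TF-IDF features; the toy training lists stand in for the task data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Toy stand-ins for the task data.
train_texts = ["If I had known, I would have acted differently.",
               "The meeting starts at noon."]
train_labels = [1, 0]

# GaussianNB requires dense input, hence the densifying step after TF-IDF.
nb = make_pipeline(
    TfidfVectorizer(),
    FunctionTransformer(lambda X: X.toarray(), accept_sparse=True),
    GaussianNB(),
)

# 6-layer perceptron with 64 neurons per layer over the same features.
mlp = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(64,) * 6),
)

nb.fit(train_texts, train_labels)
mlp.fit(train_texts, train_labels)
```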
As a more serious attempt at tackling the task, we compare these baselines with state-of-the-art LRMs: RoBERTa and ALBERT. The input is encoded in the same way as for Subtask 2 (described below). We trained both models with a cross-entropy objective, classifying via a linear transformation of the CLS-level output after applying dropout. After a hyperparameter search, we found that the RoBERTa model performed best on this task. For our final system, we built an ensemble from the best checkpoints of the RoBERTa model.
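A minimal sketch of this fine-tuning setup with the transformers library follows; the example sentence and label are hypothetical, and the sequence-classification head (dropout plus a linear layer over the CLS-level output) comes with the pre-trained class:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2  # counterfactual vs. not
)

# Hypothetical training example.
batch = tokenizer(
    ["Had it rained, the game would have been cancelled."],
    truncation=True, max_length=100, return_tensors="pt",
)
labels = torch.tensor([1])

out = model(**batch, labels=labels)  # cross-entropy loss over the two classes
out.loss.backward()
```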

Subtask 2: Detecting antecedent and consequence
We extended each LRM in the same way Devlin et al. (2019) extended BERT for SQuAD. The input representation for an input $x$ is obtained by summing the input embedding matrices $E = E_t + E_s + E_p \in \mathbb{R}^{L \times d_i}$ representing its word embeddings $E_t$, position embeddings $E_p$ and segment embeddings $E_s$, with $L$ being the input length and $d_i$ the input dimensionality. Applying the LRM and dropout $\delta$, an output matrix $H = \delta(\mathrm{LRM}(E)) \in \mathbb{R}^{L \times d_o}$ is obtained, $d_o$ being the LRM's output dimensionality. Finally, a linear transformation is applied to obtain the logit vectors for the antecedent start/end $a_s, a_e$ and the consequent start/end $c_s, c_e$. For the consequent, we do not mask the CLS-level output and use it as a no-consequent option for both $c_s$ and $c_e$. Therefore, we predict that there is no consequent iff the model's prediction is $c_s = 0$ and $c_e = 0$, assuming 0 is the index of the CLS-level output. Finally, the log-softmax is applied and the model is trained by minimizing the cross-entropy

$$ -\sum_{(x,t) \in D}\; \sum_{j \in \{a_s, a_e, c_s, c_e\}} \log P_j(t_j \mid x) $$

for each tuple of input $x$ and target indices $t$ from the dataset $D$, where $P_j(\cdot \mid x)$ denotes the softmax distribution over input positions given by the $j$-th logit vector. An ensemble was built using a greedy heuristic seeking the smallest subset from the pool of trained models such that it obtains the best exact match on a validation set.
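A minimal sketch of this extraction head follows, assuming roberta-large from the transformers library; the class and function names are ours and this is not the exact training code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class SpanExtractor(nn.Module):
    """Predicts antecedent start/end and consequent start/end positions."""
    def __init__(self, lrm_name="roberta-large", dropout=0.1):
        super().__init__()
        self.lrm = AutoModel.from_pretrained(lrm_name)
        self.dropout = nn.Dropout(dropout)
        # One logit per position for each of a_s, a_e, c_s, c_e.
        self.head = nn.Linear(self.lrm.config.hidden_size, 4)

    def forward(self, input_ids, attention_mask):
        H = self.dropout(
            self.lrm(input_ids, attention_mask=attention_mask).last_hidden_state
        )                                                 # (B, L, d_o)
        a_s, a_e, c_s, c_e = self.head(H).unbind(dim=-1)  # each (B, L)
        # Position 0 (CLS) is kept unmasked for c_s/c_e, serving as the
        # "no consequent" option; masking of the antecedent logits at the
        # CLS position is omitted here for brevity.
        return a_s, a_e, c_s, c_e

def span_loss(logits, targets):
    # Sum of cross-entropies over the four target position indices.
    return sum(F.cross_entropy(l, t) for l, t in zip(logits, targets))
```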

Preprocessing & Tools
For Subtask 1, after performing a length analysis on the data, we truncated input sequences to 100 tokens for the LRM-based models in order to reduce worst-case memory requirements, since only 0.41% of the training sentences were longer than this limit. A histogram of example lengths in tokens is presented in Appendix A.2. For Subtask 2, all input sequences fit the maximum input length of 509 tokens.
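A sketch of such a length analysis is given below; the toy sentence list stands in for the training data:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

# Toy stand-in for the training sentences loaded from the task data.
train_sents = ["If that was my daughter, I would have asked if I did something wrong."]

lengths = [len(tokenizer(s)["input_ids"]) for s in train_sents]
over_limit = sum(l > 100 for l in lengths) / len(lengths)
print(f"{over_limit:.2%} of sentences exceed 100 tokens")
```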
For the preliminary experiments with simpler machine learning methods, we adopted the baseline script provided by the organizers, which is based on the sklearn Python module. We implemented our neural network models in PyTorch (Paszke et al., 2019) using the transformers library (Wolf et al., 2019). In particular, we experimented with roberta-large and albert-xxlarge-v2 in Subtask 1 and with bert-base-uncased, bert-large-uncased, roberta-large and albert-xxlarge-v1 in Subtask 2. We used hyperopt (Bergstra et al., 2013) to tune model hyperparameters; see Appendix A.1 for further details. We used the Adam optimizer with decoupled weight decay (Loshchilov and Hutter, 2017). For Subtask 2, we combined this optimizer with lookahead (Zhang et al., 2019). All models were trained on a 12 GB GPU.
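For illustration, a minimal sketch of wrapping the inner optimizer with lookahead follows; this is our own simplified rendering of the algorithm of Zhang et al. (2019), not the implementation used in our experiments, and the hyperparameters k and alpha are illustrative:

```python
import torch

class Lookahead:
    """Simplified lookahead wrapper: every k steps of the fast (inner)
    optimizer, move the slow weights a fraction alpha toward the fast
    weights and reset the fast weights to the slow ones."""
    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base, self.k, self.alpha = base_optimizer, k, alpha
        self.steps = 0
        self.slow = [[p.detach().clone() for p in g["params"]]
                     for g in base_optimizer.param_groups]

    def zero_grad(self):
        self.base.zero_grad()

    def step(self):
        self.base.step()  # fast update, e.g. Adam with decoupled weight decay
        self.steps += 1
        if self.steps % self.k == 0:
            for group, slow_group in zip(self.base.param_groups, self.slow):
                for p, slow_p in zip(group["params"], slow_group):
                    slow_p.add_(p.detach() - slow_p, alpha=self.alpha)
                    p.data.copy_(slow_p)

model = torch.nn.Linear(4, 2)  # toy model
inner = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)
optimizer = Lookahead(inner)
```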

Results and analysis
For Subtask 1, we adapted the baseline provided by the task organizers to assess how more classical machine learning approaches perform on the dataset. After seeing their subpar performance, we turned our attention to the pre-trained LRMs, namely RoBERTa and ALBERT. The results of the best run of each model can be found in Table 1. A more comprehensive list of results for different hyperparameters can be found in Table 3 in the Appendix.
Our final submission is an ensemble of RoBERTa-large models, since we found that this LRM performs better than ALBERT on this task. We trained a number of models on the train set and computed F1 scores on the validation part. The 10 best single models (in terms of F1) were selected, and the output probabilities were averaged for all possible combinations of these models. The combination with the highest F1 score was selected as the final ensemble. Then we trained new models with the same parameters as the models in the ensemble, but using the whole training data, including the part that was previously used for validation. Finally, for our submitted ensemble, we used checkpoints saved after the same number of updates as the best checkpoints of the systems trained only on part of the training data.
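A sketch of this combination search follows; val_probs and y_val are hypothetical arrays holding each model's positive-class probabilities on the validation part and the gold labels:

```python
import itertools
import numpy as np
from sklearn.metrics import f1_score

def best_combination(val_probs, y_val):
    """Average every subset of models; keep the subset with the best F1."""
    n_models = val_probs.shape[0]
    best_combo, best_f1 = None, -1.0
    for r in range(1, n_models + 1):
        for combo in itertools.combinations(range(n_models), r):
            avg = val_probs[list(combo)].mean(axis=0)
            f1 = f1_score(y_val, (avg > 0.5).astype(int))
            if f1 > best_f1:
                best_combo, best_f1 = combo, f1
    return best_combo, best_f1

# Toy example: 3 models, 4 validation examples.
val_probs = np.array([[0.9, 0.2, 0.7, 0.4],
                      [0.8, 0.3, 0.6, 0.1],
                      [0.4, 0.6, 0.8, 0.2]])
y_val = np.array([1, 0, 1, 0])
print(best_combination(val_probs, y_val))
```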
Table 2: Results on the Subtask 2 validation data. For EM/F1, we report means and standard deviations collected from 10 runs. #θ denotes the number of the model's parameters. We also measured EM/F1 for the extraction of the antecedent and the consequent separately, denoted $A_{EM}$, $A_{F1}$ and $C_{EM}$, $C_{F1}$ respectively. Finally, $ACC_{no\text{-}c}$ denotes the no-consequent classification accuracy.

For Subtask 2, the results are presented in Table 2. The hyperparameters were the same for all LRMs. The ensemble was composed of 11 models drawn from a pool of 60 trained models. We found the ALBERT results to have a high variance. In fact, we recorded our overall best result on the validation data with ALBERT, obtaining 75.35/89.00 EM/F1. However, in the competition, we submitted only RoBERTa models due to their lower variance and slightly better results on average.
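The greedy ensemble heuristic mentioned in the system overview can be sketched as follows; eval_em is a hypothetical callback returning the validation exact match of an averaged ensemble:

```python
def greedy_select(model_ids, eval_em):
    """Greedily grow the ensemble, each round adding the model that most
    improves validation exact match; stop when no addition helps. This keeps
    the selected subset small while maximizing EM."""
    selected, best_em = [], -1.0
    improved = True
    while improved:
        improved = False
        best_add = None
        for m in model_ids:
            if m in selected:
                continue
            em = eval_em(selected + [m])
            if em > best_em:
                best_em, best_add = em, m
                improved = True
        if improved:
            selected.append(best_add)
    return selected, best_em
```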

Related work
Closest to our work, Son et al. (2017) created a counterfactual tweet dataset and built a pipeline classifier to detect counterfactuals. The authors identified 7 distinct categories of counterfactuals and first attempted to classify the examples into one of these categories using a set of rules. Then, for certain categories, they used a linear SVM classifier (Cortes and Vapnik, 1995) to filter out tricky false positives.
A large effort in computational linguistics has been devoted to a specific form of counterfactuals: so-called what-if questions. A recent paper by Tandon et al. (2019) presents a new dataset for what-if question answering, including a strong BERT-based baseline. The task is to choose an answer to a hypothetical question about a cause and an effect, e.g., Do more wildfires result in more erosion by the ocean?. Each question is accompanied by a paragraph focused on the topic of the question, which may or may not contain enough information to choose the correct option. The authors show that there is still a large performance gap between humans and state-of-the-art models (73.8% accuracy for BERT against 96.3% for a human). This gap is caused mainly by the inability of the BERT model to answer more complicated questions based on indirect effects, which require more reasoning steps. However, the results show that the BERT model was able to answer a large portion of the questions even without the accompanying paragraphs, indicating that LRMs have a notion of commonsense knowledge.

Conclusions
We examined the performance of current state-of-the-art language representation models on both subtasks and found that yet another NLP task benefits from unsupervised pre-training. In both cases, we found the RoBERTa model to perform slightly better than other LRMs, while its results were also more stable. We finished first in both EM and F1 on Subtask 2 and second in Subtask 1.

A.1 Hyperparameters

A.1.1 Subtask 1
The results of RoBERTa models with their training hyperparameters are presented in Table 3.

Table 3: Different batch sizes and learning rates used to train RoBERTa-large models; results of the best checkpoint on the validation part of the data.
We kept the other RoBERTa model hyperparameters as shown in Table 4, tuned via hyperopt (Bergstra et al., 2013).

A.2 Data analysis
The distribution of lengths of examples from Subtask 1 is presented in Figure 2. We truncate sequences in this subtask to a maximum of 100 tokens per example.

A.3 Wrong predictions in Subtask 1

Table 6 shows examples of statements classified wrongly by both the ALBERT and RoBERTa models.

A.4 Ambiguous labels
During the error analysis, we noticed a number of examples where we were not sure whether the labels were correct (see Table 7).

Statement: MAUREEN DOWD VISITS SECRETARY NAPOLITANO -"New Year's Resolutions: If only we could put America in Tupperware": "Janet Napolitano and I hadn't planned to spend New Year's Eve together. (Predicted: 0, Correct: 1)
Statement: If the current process fails, however, in hindsight some will say that it might have made more sense to outsource the whole effort to a commercial vendor.

Statement: Given that relatively few people have serious, undiagnosed arrhythmias with no symptoms (if people did, we would be screening for this more often), this isn't the major concern. (Label: 0)
Statement: A flu shot will not always prevent you from getting flu, but most will have a less severe course of flu than if they hadn't had the shot," Dr. Morens said. (Label: 0)

Table 7: Examples of ambiguous annotation.

A.5 Measurement of results
The individual measurements for the Subtask 2 statistics presented in Table 2 can be found at https://tinyurl.com/y8zncw7p. Note that we did not use the same evaluation script as the official baseline. Our evaluation script was SQuAD1.1-like: the ground truth and extracted strings were first normalized in the same way as in SQuAD1.1, and then the strings were compared. For details, see our implementation of the method evaluate_semeval2020_task5 in scripts/common/evaluate.py.
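For reference, a sketch of SQuAD1.1-style normalization and exact-match comparison follows; it mirrors the standard SQuAD evaluation steps rather than reproducing our script verbatim:

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    return int(normalize(prediction) == normalize(ground_truth))

print(exact_match("The Antecedent!", "antecedent"))  # 1 after normalization
```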

A.6 Wrong predictions in Subtask 2
Examples where the predicted antecedent/consequent spans differed from the ground truth (the span boundaries are shown in the original table):

1. GLOBAL FOOTPRINT Mylan said in a separate statement that the combination would create "a vertically and horizontally integrated generics and specialty pharmaceuticals leader with a diversified revenue base and a global footprint." On a pro forma basis, the combined company would have had revenues of about $4.2 billion and a gross profit, or EBITDA, of about $1.0 billion in 2006, Mylan said.
2. Shortly after the theater shooting in 2012, he told ABC that the gunman was "diabolical" and would have found another way to carry out his massacre if guns had not been available, a common argument from gun-control opponents.
3. Now, if the priests in the Vatican had done their job in the first place, a quiet conversation, behind closed doors and much of it would have been prevented.
4. The CPEC may have some advantages for Pakistan's economy -for one, it has helped address the country's chronic power shortage -but the costs are worrisome and unless they can be wished away with a wand, it will present significant issues in the future.