Claim Matching Beyond English to Scale Global Fact-Checking

Manual fact-checking does not scale well to serve the needs of the internet. This issue is further compounded in non-English contexts. In this paper, we discuss claim matching as a possible solution to scale fact-checking. We define claim matching as the task of identifying pairs of textual messages containing claims that can be served with one fact-check. We construct a novel dataset of WhatsApp tipline and public group messages alongside fact-checked claims that are first annotated for containing “claim-like statements” and then matched with potentially similar items and annotated for claim matching. Our dataset contains content in high-resource (English, Hindi) and lower-resource (Bengali, Malayalam, Tamil) languages. We train our own embedding model using knowledge distillation and a high-quality “teacher” model in order to address the imbalance in embedding quality between the low- and high-resource languages in our dataset. We provide evaluations on the performance of our solution and compare with baselines and existing state-of-the-art multilingual embedding models, namely LASER and LaBSE. We demonstrate that our performance exceeds LASER and LaBSE in all settings. We release our annotated datasets, codebooks, and trained embedding model to allow for further research.


Introduction
Human fact-checking is high-quality but time-consuming. Given the effort that goes into fact-checking a piece of content, it is desirable that a fact-check be easily matched with any content to which it applies. It is also necessary for fact-checkers to prioritize content for fact-checking since there is not enough time to fact-check everything. In practice, many factors affect whether a message is 'fact-check worthy' (Konstantinovskiy et al., 2020; Hassan et al., 2017), but one important factor is prevalence. Fact-checkers often want to check claims that currently have high viewership and avoid fact-checking 'fringe' claims, as a fact-check could bring more attention to them, an understudied process known as amplification (Phillips, 2018; Wardle, 2018). While the number of exact duplicates and shares of a message can be used as a proxy for popularity, discovering and grouping together multiple messages making the same claims in different ways can give a more accurate view of prevalence. Such algorithms are also important for serving relevant fact-checks via 'misinformation tiplines' on WhatsApp and other platforms (Wardle et al., 2019; Meedan, 2019; Magallón Rosa, 2019).
Identifying pairs of textual messages containing claims that can be served with one fact-check is a potential solution to these issues. The ability to group claim-matched textual content in different languages would enable fact-checking organizations around the globe to prioritize and scale up their efforts to combat misinformation. In this paper, we make the following contributions: (i) we develop the task of claim matching, (ii) we train and release an Indian language XLM-R (I-XLM-R) sentence embedding model, (iii) we develop a multilingual annotated dataset across high- and lower-resource languages for evaluation, and (iv) we evaluate the ability of state-of-the-art sentence embedding models to perform claim matching at scale. We formally evaluate our methods within language but also show that clusters found using our multilingual embedding model often contain messages in different languages presenting the same claims.
We release two annotated datasets and our codebooks to enable further research. The first dataset consists of messages in five languages annotated for containing claim-like statements following the definition of Konstantinovskiy et al. (2020). The second dataset consists of 2,343 pairs of social media messages and fact-checks in the same five languages as the first dataset annotated for claim similarity. Table 1 shows examples of annotated pairs of messages from the second dataset, such as the following "Very Similar" pair:

(a) Barber's salon poses the biggest risk factor for Corona! This threat is going to remain for a long duration. *At an average a barber's napkin touches 5 noses minimum* The US health dept chief J Anthony said that salons have been responsible for almost 50% deaths.

(b) *The biggest danger is from the barbershop itself*. This danger will remain for a long time. *Barber rubs the nose of at least 4 to 5 people with a towel,* The head of the US Department of Health J. Anthony has said that 50 percent of the deaths in the US have happened in the same way that came in saloons.
Related Work

Semantic Textual Similarity
Semantic textual similarity (STS) refers to the task of measuring the similarity in meaning of sentences. Widely adopted evaluation benchmarks include the Semantic Textual Similarity Benchmark (STS-B) (Cer et al., 2017), its predecessor SemEval STS tasks (2012–2016), and the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005). The STS-B benchmark assigns discrete similarity scores of 0 to 5 to pairs of sentences, with sentence pairs scored zero being completely dissimilar and pairs scored five being equivalent in meaning. The MRPC benchmark assigns binary labels indicating whether sentence pairs are paraphrases. Semantic textual similarity remains an active research area with a rapidly moving state of the art. Raffel et al. (2020) achieved state-of-the-art performance on the STS-B benchmark using the large 11B-parameter T5 model, and the ALBERT model (Lan et al., 2019) achieved an accuracy of 93.4% on the MRPC benchmark, placing it among the top contenders on the MRPC leaderboard.
While semantic textual similarity is similar to claim matching, the nuances of the latter require special attention. Claim matching is the task of matching messages with claims that can be served with the same fact-check, which does not always mean that the message pairs have the same meaning. Moreover, claim matching requires working with content of variable length. In practice, content from social media also varies widely in lexical and grammatical quality.

Multilingual Embedding Models
Embedding models are essential for claim and semantic similarity search at scale, since classification methods require a quadratic number of comparisons. While we have seen an increasing number of transformer-based contextual embedding models in recent years (Devlin et al., 2019; Reimers and Gurevych, 2019; Cer et al., 2018), the progress has been asymmetric across languages.
The XLM-R model (Conneau et al., 2019) is a transformer-based model covering 100 languages with a 250K-token vocabulary. It was trained with multilingual masked language modeling (MLM) on monolingual data and achieved significant improvements on cross-lingual and multilingual benchmarks. LASER (Artetxe and Schwenk, 2019) provides language-agnostic representations of text in 93 languages. Its authors trained a BiLSTM architecture on parallel corpora with an objective function that maps similar sentences into the same vicinity of a high-dimensional space. Language-agnostic BERT sentence embeddings (LaBSE) (Feng et al., 2020) improved over LASER for higher-resource languages through MLM and translation language modeling (TLM) pretraining, followed by fine-tuning on a translation ranking task (Yang et al., 2019).

Claim Matching
Shaar et al. (2020) discussed retrieval and ranking of fact-checked claims for an input claim to detect previously debunked misinformation. They introduced the task, as well as a dataset covering US politics in English, and two BM25-based architectures with SBERT and a BERT-based reranker on top. Vo and Lee (2020) tackled a similar problem by finding relevant fact-check reports for multimodal social media posts. However, these projects focus only on English data that mainly cover U.S. politics, and at least one item in each matching pair is a claim from a fact-check report. Additionally, the data collection process used in Shaar et al. (2020) might not capture all possible matches for a claim, since the dataset is constructed by including only the claims mentioned in one fact-check report and not all previous occurrences. This may skew results and increase the risk of the model having a high false negative ratio. Recently, the CheckThat! Lab 2020 (Barrón-Cedeno et al., 2020) presented the same problem as a shared task. We improve on prior work by finding a solution that works for high- and low-resource languages and also for matching claims both between pairs of social media content and between pairs of fact-checks. We explicitly annotated claim pairs that might match, avoiding the aforementioned false negatives issue by design and providing more accurate models and evaluations.

Data Sources
The data used in this paper comes from a variety of sources. We use a mixture of social media (e.g., WhatsApp) content alongside fact-checked claims, since it is essential for any claim-matching solution to be able to match content both among fact-checked claims and social media posts as well as within social media posts. Among the prevalent topics in our data sources are the COVID-19 pandemic, elections, and politics.
Tiplines. Meedan, a technology non-profit, has been assisting fact-checking organizations to set up and run misinformation tiplines on WhatsApp using their open-source software, Check. A tipline is a dedicated service to which 'tips' can be submitted by users. On WhatsApp, tiplines are phone numbers to which WhatsApp users can forward potential misinformation to check for existing fact-checks or request a new fact-check. The first tipline in our dataset ran during the 2019 Indian elections and received 37,823 unique text messages. Several additional always-on tiplines launched in December 2019 and ran throughout the 2020 calendar year. We obtained a list of the text of messages and the times at which they were submitted to these tiplines for March to May 2019 (Indian election tipline) and for February 2020 to August 2020 (all other tiplines). We have no information beyond the text of messages and the times at which they were submitted. In particular, we have no information about the submitting users.
WhatsApp Public Groups. In addition to the messages submitted to these tiplines, we have data from a large number of "public" WhatsApp groups collected by Garimella and Eckles (2020) during the same time period as the Indian election tipline. The dataset was collected by monitoring over 5,000 public WhatsApp groups discussing politics in India, totaling over 2 million unique posts. For more information on the dataset, please refer to Garimella and Eckles (2020). Such public WhatsApp groups, particularly those discussing politics, have been shown to be widely used in India (Lokniti, 2018).
Fact-Check Reports. We aggregate roughly 150,000 fact-checks from a mixture of primary fact-checkers and fact-check aggregators. We employ aggregators such as Google Fact Check Explorer, GESIS (Tchechmedjiev et al., 2019), and Data Commons, and include roughly a dozen fact-checking organizations certified by the International Fact-Checking Network with either global or geographically-relevant scope in our dataset. All fact-checks included at minimum a headline and a publish date, but typically also include a lead or the full text of the fact-check, as well as adjudication of the claim (e.g., truth or falsity), and sometimes include information of lesser value for our work such as author, categorization tags, or references to original content that necessitated the fact-check.

Data Sampling & Annotation
To construct a dataset for claim matching, we design a two-step sampling and annotation process. We first sample a subset of items with potential matches from all sources and then annotate and select the ones containing "claim-like statements." In a second task, we annotate pairs of messages for claim similarity. One of the messages in each pair must have been annotated as containing a "claim-like statement" in the first annotation task. We sample possible matches in several ways in order to not unnecessarily waste annotator time. We describe these sampling strategies and other details of the process in the remainder of this section.

Task 1: Claim Detection
Task 1 presented annotators with a WhatsApp message or fact-check headline and asked whether it contained a "claim-like statement." We first created a codebook by inductively examining the English-language data and translations of the other-language data, and by discussing the task with two fact-checkers (one Hindi-speaking and one Malayalam-speaking). We began with the definition set out by practitioners (Konstantinovskiy et al., 2020) for a "claim-like statement" and created examples drawn from our data sources. Annotators were asked whether the message had a claim-like statement and allowed to choose "Yes", "Probably", "No", or "N/A: The message is not in language X" (where X was the language being annotated). The instructions made clear that "Probably" should be used sparingly and was intended for instances where an image, video, or other context was missing.

We recruited three native speakers for each of Hindi, Bengali, Tamil, and Malayalam through Indian student societies at different universities as well as independent journalists. All of our annotators had a Bachelor's degree and many were pursuing Masters or PhDs. We onboarded all annotators and discussed the risks of possibly politically charged, hateful, violent, and/or offensive content in the dataset. Our custom-built annotation interface provided the ability to skip any piece of content with one keystroke. We also encouraged annotators to take frequent breaks and factored these breaks into our payments.
Our English-language data is a mix of Indian and global content. Two of our English annotators had previously completed the Hindi and Malayalam tasks while the third English annotator completed only the English-language task.
We calculate agreement using Randolph's free-marginal kappa (Randolph, 2005). This measure better estimates intercoder agreement on unbalanced datasets than fixed-marginal scores such as Fleiss' kappa (Warrens, 2010).
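For reference, Randolph's free-marginal kappa can be computed directly from per-item vote counts. The sketch below is an illustrative implementation under the assumption of an equal number of annotators per item, not the exact script used in our analysis.

```python
def free_marginal_kappa(ratings, n_categories):
    """Randolph's free-marginal multirater kappa.

    `ratings` holds one list of category vote counts per item, e.g.
    [[3, 0], [2, 1]] means item 1 received 3 votes for category A,
    and item 2 received 2 votes for A and 1 for B. Assumes every
    item was rated by the same number of annotators.
    """
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Observed agreement: fraction of agreeing annotator pairs per item.
    p_obs = sum(
        sum(c * (c - 1) for c in counts) / (n_raters * (n_raters - 1))
        for counts in ratings
    ) / n_items
    p_chance = 1.0 / n_categories  # free-marginal chance agreement
    return (p_obs - p_chance) / (1.0 - p_chance)
```

Unlike Fleiss' kappa, the chance-agreement term is fixed at 1/k rather than estimated from the (possibly skewed) label marginals, which is why the measure behaves better on unbalanced datasets.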
All participants annotated 100 items independently. We then discussed disagreements on these 100 items and updated the codebook as needed. The participants then annotated datasets of approximately 1,000 items in each language. Information about this final annotation dataset is presented in Table 2. Agreement between annotators for this task is lower than for the next task but on par with annotation tasks for hate speech and other 'hard tasks' (Del Vigna et al., 2017; Ousidhoum et al., 2019), suggesting that determining whether a message contains a claim-like statement is harder than determining the similarity of the statements (Task 2).

Task 2: Claim Similarity
The second task presented annotators with two messages and asked how similar the claim-like statements were in the messages. Annotators were given a four-point scale ("Very Similar", "Somewhat Similar", "Somewhat Dissimilar", and "Very Dissimilar"). We prepared a codebook with clear instructions for each response and examples in consultation with the two fact-checkers and discussed it with all annotators before annotation began. Annotators could also select "N/A: One or more of the messages is not in language X or does not contain a claim-like statement".
Our initial testing showed the largest source of disagreement was between "Somewhat Dissimilar" and "Very Dissimilar." We added guidance to the codebook but did not dwell on this aspect as we planned to collapse these categories together. We prioritize our evaluations on "Very Similar" or "Somewhat Similar" statements.
Although our goal is claim matching, this task asked annotators about the similarity of claim-like statements, as the annotators were not all fact-checkers. We found asking the annotators to speculate about whether some hypothetical fact-check could cover both statements was unhelpful. Our codebook is constructed such that "Very Similar" pairs of messages could be served by one fact-check, while "Somewhat Similar" messages would be partially served by the same fact-check. A link to the codebook is in the supplemental materials.
The same annotators from Task 1 completed Task 2 with a few exceptions. One Tamil annotator was unable to continue due to time restrictions, and one Bengali annotator only completed part of the annotations (we calculate agreement with and without this annotator in Table 3). We added a fourth English annotator in case there was another dropout, but all English annotators completed the task. Table 3 shows a breakdown of the dataset by language (in the table, "similar" pairs are those annotated by two or more annotators as "Very Similar"; "not sim." encompasses all other pairs, excluding "N/A" pairs). In general, agreement on this task, even among the same annotators as Task 1, was much higher than on Task 1, suggesting claim similarity is an easier task than claim detection. The largest point of disagreement was around the use of the N/A label: discussing this with annotators, we found it was again disagreement about whether certain messages contained claims.

Sampling
A purely random sample of pairs is very unlikely to find many pairs that match. We considered examining pairs with the highest cosine similarities only, but these pairs were likely to match in trivial and uninteresting ways. In the end, we used random stratified sampling to select pairs for annotation.
We first calculate all pairwise cosine similarities using multiple embedding models (described in Section 5). We then use stratified sampling to sample 100 pairs per model and language in proportion to a Gaussian distribution with mean 0.825 and standard deviation 0.1. We do this due to our strong prior that pairs with similarities close to zero, as well as pairs close to one, are usually 'uninteresting': these pairs either clearly do not match or (very often) clearly match. In practice, we still sample a wide range of values (Figure 1). We also include 100 random pairs for each language, with the exception of Tamil due to annotator time limitations.
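The stratified sampling step can be sketched as weighted sampling without replacement, with weights proportional to a Gaussian density centered at 0.825. Function and variable names here are illustrative, not the paper's actual code.

```python
import math
import random

def gaussian_stratified_sample(pairs, sims, k=100, mu=0.825, sigma=0.1, seed=0):
    """Sample k pairs with probability proportional to a Gaussian pdf
    evaluated at each pair's cosine similarity (illustrative sketch)."""
    weights = [math.exp(-0.5 * ((s - mu) / sigma) ** 2) for s in sims]
    rng = random.Random(seed)
    # Weighted sampling without replacement (Efraimidis-Spirakis variant):
    # draw an Exp(1) key scaled by 1/weight and keep the k smallest keys.
    keys = [rng.expovariate(1.0) / w if w > 0 else float("inf") for w in weights]
    order = sorted(range(len(pairs)), key=keys.__getitem__)
    return [pairs[i] for i in order[:k]]
```

With this weighting, pairs whose similarity sits near the mode (0.825) are sampled most often, while pairs near zero or one are rarely, but not never, selected.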
We used LASER, LaBSE, and our Indian XLM-R (I-XLM-R) model (details below) to sample pairs for all languages. Our Bengali and Malayalam annotators had additional capacity and annotated 4,509 additional pairs drawn in a similar way.

Experimental Setup
We use a GPU-enabled server with a single 1080 GPU to train our own embedding model and run the rest of our experiments on desktop computers with minimal runtime. We use the Elasticsearch implementation of the BM25 system, and we use Sentence-Transformers (for I-XLM-R), PyTorch (for LASER), and TensorFlow (for LaBSE) to train models and retrieve embeddings. We follow the approach of Reimers and Gurevych (2020) for tuning the hyperparameters of our embedding model.

Training a Multilingual Embedding Model
We use the knowledge distillation approach presented in Reimers and Gurevych (2020) to train a multilingual embedding model. The approach adopts a student-teacher setup in which a high-quality teacher embedding model is used to align the text representations of a student model, mapping embeddings of text in the student language into close proximity to the embeddings of the same text in the teacher language. Using this approach we train a model for English, Hindi, Malayalam, Tamil, and Bengali. We refer to this model as our Indian XLM-R model (I-XLM-R) and use it as one of the models we evaluate for claim matching.
Training Data. The knowledge distillation approach requires parallel text in both student and teacher languages for training embedding models. We find the OPUS parallel corpora (Tiedemann, 2012) to be a useful and diverse resource for parallel data. We retrieve parallel data between English and the collection of our four Indian languages from OPUS and use it as training data.
Training Procedure. For a teacher model M_T, a student model M_S, and a collection of (s_i, t_i) pairs of parallel text, we minimize the following MSE loss function for a given mini-batch B:

\frac{1}{|B|} \sum_{(s_i, t_i) \in B} \Big[ \big(M_T(s_i) - M_S(s_i)\big)^2 + \big(M_T(s_i) - M_S(t_i)\big)^2 \Big]

Intuitively, this loss function forces the student embeddings of both t_i and s_i into the proximity of the teacher embedding of s_i, thereby transferring embedding knowledge from the teacher to the student model. For training our Indian XLM-R model, we pick the English SBERT model (Reimers and Gurevych, 2019) as the teacher (for its high-quality embeddings) and XLM-RoBERTa (XLM-R) as the student (for its state-of-the-art performance on NLP tasks and a universal vocabulary that includes tokens from 100 languages).
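The mini-batch loss above can be expressed in a few lines. This is an illustrative NumPy version of the objective, not the actual Sentence-Transformers training loop.

```python
import numpy as np

def distillation_mse_loss(teacher_src, student_src, student_tgt):
    """MSE knowledge-distillation loss over a mini-batch.

    teacher_src: teacher embeddings M_T(s_i) for the source sentences
    student_src: student embeddings M_S(s_i) for the same sentences
    student_tgt: student embeddings M_S(t_i) for the parallel translations

    Both student embeddings are pulled toward the teacher embedding of
    the source sentence, aligning the two languages in one space.
    """
    return float(
        np.mean((teacher_src - student_src) ** 2)
        + np.mean((teacher_src - student_tgt) ** 2)
    )
```

The loss is zero only when the student reproduces the teacher embedding for both the source sentence and its translation, which is what makes the resulting space language-aligned.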

Model Architecture
We evaluate a retrieval-based claim matching solution built on top of the BM25 retrieval system (Robertson and Zaragoza, 2009) as well as an embeddings-only approach. In the first case, queries are fed into BM25 and the retrieved results are then sorted based on their embedding similarity to the input query. The top ranking results are then used as potential matches for the input claim. In the latter case, we classify pairs of items using features derived from the embedding models.
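The retrieve-then-rerank pipeline can be sketched as follows, with a toy BM25 scorer standing in for Elasticsearch and precomputed vectors standing in for I-XLM-R embeddings; all names here are illustrative.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Toy BM25 scorer (Elasticsearch stand-in); query/docs are token lists."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve_and_rerank(query, query_vec, docs, doc_vecs, top_n=10):
    """Retrieve top_n candidates with BM25, then sort them by embedding
    cosine similarity to the query; returns candidate document indices."""
    scores = bm25_scores(query, docs)
    candidates = sorted(range(len(docs)), key=lambda i: -scores[i])[:top_n]
    return sorted(candidates, key=lambda i: -cosine(query_vec, doc_vecs[i]))
```

BM25 narrows the candidate set cheaply; the embedding similarity then provides the semantic ordering that lexical overlap alone cannot.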

Results
For some applications, it is good enough to be able to rank the most similar claims and treat the problem of claim matching as an information retrieval problem. This is the case, for example, when fact-checkers are examining possible matches to determine if a new content item matches a previous fact-check. We discuss the performance of information retrieval approaches in Section 6.1.
In many other applications, however, we seek a system that can determine if the claims in two items match without human intervention. These applications demand a classification approach: i.e., to determine whether two items match. This allows similar items to be grouped and fact-checkers to identify the largest groups of items with claims that have not been fact-checked. We discuss the performance of simple classification approaches in Section 6.2.

Information Retrieval Approach
We find the mean reciprocal rank (MRR) metric to be a good IR-based performance measure for our system, since we only know of one match among the results retrieved for each query. We use the base BM25 system as a strong baseline to compare against. We also compare our system with other state-of-the-art multilingual embedding models used for reranking, namely LASER and LaBSE. Results are presented in Table 4. BM25 with I-XLM-R reranking outperforms the other systems in all languages, with the exception of Tamil and English, where it performs comparably to the BM25 baseline. The largest lead in performance of the I-XLM-R based model is for Bengali, where its MRR score is more than 0.1 higher than the BM25 baseline.
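With a single known relevant item per query, MRR reduces to averaging the reciprocal of the rank at which that item appears; a minimal sketch:

```python
def mean_reciprocal_rank(ranked_lists, relevant_items):
    """MRR when each query has exactly one known relevant item.
    Queries whose relevant item never appears contribute 0."""
    total = 0.0
    for results, relevant in zip(ranked_lists, relevant_items):
        for rank, item in enumerate(results, start=1):
            if item == relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)
```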
Neither LASER nor LaBSE surpasses the baseline in any of the languages. LASER performs worst on Tamil, where its MRR score is nearly 0.07 below BM25. Similarly, LaBSE's largest difference from BM25 is in Hindi, where it falls short by 0.085. Although there is room for improvement in some languages, I-XLM-R seems the best choice if only one system is chosen.
After calculating MRR, we also evaluated the systems on other metrics, namely Mean First Relevant (MFR; Fuhr, 2018) and HasPositive@K (Shaar et al., 2020). Neither measure demonstrated any meaningful patterns useful for selecting the best system, so we omit the details of these evaluations for brevity.

Classification Approaches
Responding to submitted content on a tipline, as well as grouping claims to understand their relative prevalence/popularity, requires more than presenting a ranked list as occurs in the information retrieval approaches in the previous subsection and in previous formulations of this problem (e.g., Shaar et al., 2020). In this section we use the annotated pairs to evaluate how well simple classifiers perform with each model.

Threshold Classifier. The first 'classifier' we evaluate is a simple threshold applied to the cosine similarity of a pair of items. Items above the threshold are predicted to match while items with a similarity below the threshold are predicted to not match. In doing this, we seek to understand the extent to which the embedding models can separate messages with matching claims from those with non-matching claims.
An ideal model would assign higher cosine similarity scores to every pair of messages with matching claims than to pairs of messages with non-matching claims. Table 5 shows the F1 scores averaged across 10 runs of 10-fold cross validation for binary classifiers applied to all languages together and to each language individually. In general, the Indian XLM-R model performs best at the task, with F1 scores ranging from 0.57 to 0.88. As shown in Figure 2, our Indian XLM-R model outperforms LASER primarily in precision and outperforms LaBSE primarily in recall.
The numbers reported in Table 5's last column all come from I-XLM-R. The English-only SBERT model performs slightly better with a maximum F1 score of 0.90±0.09 at a threshold of 0.71 on English data, suggesting that the student model may have drifted from the teacher model for English during training. This drift is slight, however, and the cosine similarities across all English-language data for the two models are highly correlated, with a Pearson's correlation coefficient of 0.93. The authors of SBERT released two additional multilingual models that support English and Hindi, but do not support Bengali, Malayalam, or Tamil. We find these models have comparable performance to I-XLM-R on English and Hindi, while F1 scores for the other languages are between 0.17 and 0.61.

Our dataset includes both social media messages (namely, WhatsApp messages) and fact-checks. Overall, performance is higher for matching fact-checks to one another than for matching social media messages to one another for all models. As an example, the best-performing model, Indian XLM-R, achieves a maximum F1 score of 0.76 with a threshold of 0.87 for matching pairs of fact-checks, but only a maximum F1 score of 0.72 (threshold 0.90) for matching pairs of social media messages.
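The threshold 'classifier' amounts to sweeping candidate cosine-similarity cutoffs and keeping the one that maximizes F1; an illustrative sketch (the paper additionally averages over 10 runs of 10-fold cross validation, which is omitted here):

```python
def best_threshold(sims, labels):
    """Return the cosine-similarity cutoff maximizing F1 for the
    positive class (label 1 = matching claims). Illustrative sketch."""
    def f1_at(threshold):
        tp = sum(1 for s, y in zip(sims, labels) if s >= threshold and y == 1)
        fp = sum(1 for s, y in zip(sims, labels) if s >= threshold and y == 0)
        fn = sum(1 for s, y in zip(sims, labels) if s < threshold and y == 1)
        if tp == 0:
            return 0.0
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)
    # Only observed similarity values can change the confusion matrix.
    return max(sorted(set(sims)), key=f1_at)
```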
Claim Matching Classifier. We train an AdaBoost binary classifier that predicts whether two textual claims match. The features are all precomputed or trivial to compute, so such a system could easily be run to refine a smaller number of candidate matches with minimal additional computation.
We use the lengths of the claims, the difference in lengths, the embedding vectors of each item, and their cosine similarity as features. We build a balanced dataset by taking all the "Very Similar" pairs and matching every item with a randomly selected "Not Very Similar" (every other label) item from the same language. We do not differentiate between pairs in different languages, as our per-language data is limited and all features, including the embedding vectors, translate across languages since they come from multilingual embedding models.
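A sketch of this classifier with scikit-learn, using toy embeddings in place of I-XLM-R vectors: the feature construction mirrors the description above, while the names, dimensions, and synthetic data are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def pair_features(emb_a, emb_b, len_a, len_b):
    """Features for one claim pair: the two text lengths, their absolute
    difference, the cosine similarity, and both embedding vectors."""
    cos = float(np.dot(emb_a, emb_b) /
                (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return np.concatenate([[len_a, len_b, abs(len_a - len_b), cos], emb_a, emb_b])

rng = np.random.default_rng(0)
dim = 8  # toy stand-in for the full-size sentence embedding dimension

# Toy balanced dataset: matching pairs get near-identical embeddings.
X, y = [], []
for _ in range(60):
    a = rng.normal(size=dim)
    b = a + rng.normal(scale=0.05, size=dim)   # positive: near-duplicate claim
    c = rng.normal(size=dim)                   # negative: unrelated claim
    n = int(rng.integers(5, 200))
    X.append(pair_features(a, b, n, n + 3)); y.append(1)
    X.append(pair_features(a, c, n, int(rng.integers(5, 200)))); y.append(0)

clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(np.stack(X), y)
```

On data like this, the boosted stumps latch onto the cosine-similarity feature first, which matches the intuition that embedding similarity carries most of the signal.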
Claim matching classification results are presented in Table 6. We evaluate models using 10-fold cross validation and report accuracy and F1 scores for each class averaged over 10 runs. Consistent with previous outcomes, using the I-XLM-R cosine similarity and embeddings as input features results in better performance than using the other models, including the model with all features. The positive-class F1 scores for all models in Table 6 are notably higher than those of the threshold approaches (Table 5), suggesting that information from the embeddings themselves and the lengths of the texts is useful in determining whether the claims in two messages match. The claim matching classifier is language-agnostic and learns from only 522 datapoints, which underscores the quality of the I-XLM-R embeddings.
Error Analysis. We manually inspect the pairs classified in error using the "threshold classifier" and I-XLM-R. The pairs either have a similarity score above the matching threshold but are "Not Similar" (false positives, 24/89) or are matches and have a score below threshold (false negatives, 65/89). 16 of the 24 false positives are labeled as "Somewhat Similar," and manual inspection shows that these pairs all have overlapping claims (i.e., they share some claims but not others). There are no obvious patterns for the false negatives, but some of the errors are made in ambiguous cases.
We also examine the errors of one random fold of the AdaBoost classifier to further investigate where our model makes mistakes. There are a total of 10 wrong predictions (6 false negatives and 4 false positives). Of these, 2/6 and 1/4 respectively are annotation errors. Within the false negatives, most other cases are pairs of text that are very similar but slightly ambiguous due to a lack of context, which annotators correctly resolved as identical. An example of such a false negative is the pair of messages "Claim rare flower that blooms once in 400 years in the-himalayas-called-mahameru-pushpam" and "Images of Mahameru flower blooms once every 400 years in Himalayas." The false positives were all "Somewhat Similar" and "Somewhat Dissimilar" pairs that the classifier mistook for "Very Similar." There were no significant discrepancies among languages in classification errors.

Discussion & Conclusions
Scaling human-led fact-checking efforts requires matching messages with the same claims. In this paper, we train a new model and create an evaluation dataset that moves beyond English and American politics. Our system is being used in practice to support fact-checking organizations.
We find that the embedding models can generally match messages with the same claims. Performance for matching fact-checks slightly exceeds that for matching social media items. This makes sense, given that fact-checks are written by professional journalists and generally exhibit less orthographical variation than social media items.
Our data contained too few examples of fact-checks correctly matched with social media items to evaluate performance in that setting. This is not a major limitation since nearly every fact-check starts from a social media item: in practice, we only need to match social media items to one another in order to locate other social media items making the same claims as the item that prompted a fact-check.
We evaluate claim matching within each language, but the embedding models are all multilingual and could serve to match claims across languages. BM25 is not multilingual, but Elasticsearch can index embeddings directly. de Britto Almeida and Santos (2020) previously developed an Elasticsearch plugin to query embeddings by cosine distance, and since version 7.3 this functionality is available natively in Elasticsearch (Tibshirani, 2019), meaning a large set of embeddings can be searched efficiently to find near matches across languages.
As a proof of concept, we took the 37,823 unique text messages sent to the Indian election tipline and clustered them using I-XLM-R and online, single-link hierarchical clustering with a threshold of 0.90. We found 1,305 clusters with 2 or more items; the largest cluster had 213 items. We hired an Indian journalist with experience fact-checking during the 2019 Indian elections to annotate each of the 559 clusters with five or more items by hand. The annotation interface presented three examples from each cluster: one with the lowest average distance to all other messages in the cluster, one with the highest, and one message chosen randomly. In 137 cases the examples shown for annotation were from multiple languages, and in 132 of those cases the journalist was able to identify the same claims across multiple languages. Although preliminary, this demonstrates the feasibility and importance of multilingual claim matching with these methods, an area we hope further work will tackle.
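The clustering step can be sketched as online single-link grouping over normalized embeddings. This is a hypothetical implementation using union-find for merging, not the deployed code.

```python
import numpy as np

def single_link_clusters(embeddings, threshold=0.90):
    """Online single-link clustering: each arriving item merges with every
    earlier item whose cosine similarity is >= threshold; otherwise it
    starts its own cluster. Returns clusters as lists of item indices."""
    embs = np.asarray(embeddings, dtype=float)
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)  # cosine = dot
    parent = list(range(len(embs)))

    def find(i):  # union-find root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(embs)):
        sims = embs[:i] @ embs[i]
        for j in np.nonzero(sims >= threshold)[0]:
            parent[find(int(j))] = find(i)  # merge j's cluster into i's

    clusters = {}
    for i in range(len(embs)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

Single-link merging means one bridging message can join two otherwise distinct groups, which is why a high threshold such as 0.90 is important in this setting.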
Our findings are supporting over 12 fact-checking organizations running misinformation tiplines. The deployed system uses I-XLM-R and automatically groups text messages with similarities over 0.95, and it recommends possible matches from less-similar candidates that fact-checking organizations can confirm or reject. Matches can also be added manually. Initial feedback from the fact-checkers has been positive, and we are collecting data for further research and evaluation.
We prioritized the well-being of annotators and the privacy of WhatsApp users throughout this research. Our data release conforms to the FAIR principles (Wilkinson et al., 2016). We have no identifying information about WhatsApp users, and any references to personally identifiable information in messages, such as phone numbers, emails, addresses, and license plate numbers, are removed to preserve user privacy. We worked closely with our annotators, preparing them for the risk of hateful content, encouraging frequent breaks, and paying well above minimum wage. We responded compassionately to COVID disruptions and other life stresses, even when this meant less annotated data than originally envisioned. Our codebooks are openly available. Due to the page limit for the supplemental materials, we provide hyperlinks to these codebooks:

• Claim detection codebook
• Claim similarity codebook

We coded a simple annotation interface, which is free and open-source: https://github.com/meedan/surveyer/. A screen capture of the annotation interface during the English-language claim-similarity task is shown in Figure 3.

Per-Language Results

Figure 4 shows the accuracy, precision, recall, and F1 scores for simple threshold classifiers. This is equivalent to Figure 2, but shows the plots for each language individually in addition to the overall values across all languages.
The figure also includes two additional embedding models from the SBERT website: xlm-r-distilroberta-base-paraphrase-v1 and xlm-r-bert-base-nli-stsb-mean-tokens. As discussed in the main paper, we find our models far outperform these models for Bengali, Malayalam, and Tamil, while performance for English and Hindi is similar.

Alternative definition of the positive class
The analysis in the paper presents results for "Very Similar" compared to all other classes (N/A labels excluded). Here we show qualitatively similar results are obtained when the positive class is items for which a majority of annotators indicated "Very Similar" or "Somewhat Similar." As stated, somewhat similar matches are useful, as a fact-check would partially address some of the claims in a somewhat similar match. Table 8 provides the distribution of labels for the claim matching dataset. Table 7 presents F1 scores averaged across 10 runs of 10-fold cross validation using "Somewhat Similar" or "Very Similar" as the positive class. The results are similar to Table 5 in the main paper. F1 scores are generally higher, but our Indian XLM-R model still performs best. Surprisingly, LASER matches its performance in one language (Hindi).

Figure 4: Accuracy, precision, recall, and F1 scores for each language individually. Positive class is "Very Similar."

Table 7: Maximum F1 scores (F1) and standard deviations achieved and the corresponding thresholds (thres.) for each score. The 'classifiers' are simple thresholds on the cosine similarities. Scores are the average of 10 rounds of 10-fold cross validation. The positive class is "Somewhat Similar" or "Very Similar."