Generating Scientific Claims for Zero-Shot Scientific Fact Checking

Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data, as annotation requires domain expertise. To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences, and demonstrate its usefulness in zero-shot fact checking for biomedical claims. We propose CLAIMGEN-BART, a new supervised method for generating claims supported by the literature, as well as KBIN, a novel method for generating claim negations. Additionally, we adapt an existing unsupervised entity-centric method of claim generation to biomedical claims, which we call CLAIMGEN-ENTITY. Experiments on zero-shot fact checking demonstrate that both CLAIMGEN-ENTITY and CLAIMGEN-BART, coupled with KBIN, achieve up to 90% of the performance of fully supervised models trained on manually annotated claims and evidence. A rigorous evaluation study demonstrates significant improvement in generated claim and negation quality over existing baselines.


Introduction
Scientific documents contain complex assertions about scientific processes, making it difficult to automate important tasks such as claim extraction and scientific fact checking. Additionally, the collection of manually annotated labels to train models on tasks with scientific data is time-consuming and expensive due to the need for domain expertise (Collins et al., 2017; Augenstein and Søgaard, 2017; Lehman et al., 2019; Wadden et al., 2020; DeYoung et al., 2021). As such, methods which require less manual annotation are especially useful in this domain. This work addresses this challenge by exploring how automatic generation of scientific claims can assist with dataset creation and zero-shot fact checking in the biomedical domain.
Being able to reduce scientific text to atomic assertions has numerous possible applications, and is known to be helpful for scientific communication and machine processing of scientific concepts (Kuhn et al., 2013). Claim generation can enable zero-shot fact checking, reducing the need for expert-labeled data (Pan et al., 2021), and can be used to expand existing datasets such as Wadden et al. (2020) and Saakyan et al. (2021) without additional manual annotation. In this work we focus on the use of claim generation in scientific fact checking, demonstrating that claim generation enables zero-shot biomedical fact checking.
Generating scientific claims involves distilling a complex scientific sentence into one or more valid claims (see examples in Figure 1). As in previous work, we focus on biomedical claims as biomedical literature has long been a major focus in scientific natural language processing, as well as scientific fact checking (Saakyan et al., 2021;Wadden et al., 2020;Kotonya and Toni, 2020). While in Wadden et al. (2020), claims were rewritten by domain experts from complex citation sentences (citances), we propose methods for automatically generating claims and claim negations from this source.
Similar to other generation tasks, evaluating the quality of generated output requires multiple judgements beyond the fluency of the generated text, e.g., whether each claim is faithful to the source sentence and is understandable on its own (Sai et al., 2020). However, there are also other quality attributes that are important to assess specifically for scientific claims, such as whether each claim is atomic or check-worthy. Given this, we propose a set of manual evaluation criteria and annotation guidelines for evaluating claim generation (§5.2).
Additionally, when generating claims to build datasets for tasks such as fact checking, a major challenge is creating refuted claims as negative training instances. Previous work has proposed automatic ways of generating refutations based on negating existing claims or creating claim variants via entity-replacement (Pan et al., 2021) and text-infilling using a pre-trained masked language model (Saakyan et al., 2021). We improve upon this by introducing Knowledge Base Informed Negations (KBIN), a principled method to generate refutations that performs entity-replacement using the relations and learned embeddings of entities in a domain-specific knowledge base.
Contributions In sum, our contributions are: • The first study on scientific claim generation, comparing both unsupervised (CLAIMGEN-ENTITY) and fully supervised (CLAIMGEN-BART) generation on biomedical text. • KBIN, a novel method for generating refuted scientific claims which produces more convincing negations than previous work. • Application of our claim generation methods on zero-shot scientific fact checking resulting in 90% of the performance of a model trained on in-domain manually written claims. Additionally, a rigorous evaluation study showing that CLAIMGEN-BART and KBIN produce significantly higher quality claims and more convincing negations than previous work.

Preliminaries
Valid Claims In this work, we define a valid claim as one which is fluent, atomic, de-contextualized, and accurately reflects the meaning of the original sentence. Fluency is concerned with a claim being a generally well-formed English sentence, and atomicity with a claim being a "verifiable statement expressing a finding about one aspect of a scientific entity or process, which can be verified from a single source" (Wadden et al., 2020). De-contextualization is concerned with a sentence being interpretable on its own, requiring none of the original surrounding text to resolve aspects of the sentence such as pronouns, abbreviations, etc., and can be handled either by directly de-contextualizing a sentence (Choi et al., 2021) or by ensuring that all of the context sentences are available to a model (Wadden et al., 2021). Check-worthy claims in the wild may not be fluent, atomic, or de-contextualized; however, generating claims with these properties is worthwhile, as such claims have been shown to aid automated processing of science concepts (Kuhn et al., 2013) and scientific fact checking (Wadden et al., 2020).
Scientific Claim Generation At a high level, scientific claim generation is the task of distilling one or more valid claims from one or more sentences concerned with a scientific fact. More specifically, the task is defined as: given a scientific sentence s and optionally additional context sentences X, generate one or more claims c i ∈ C which are valid and entailed by s and X. In the context of fact checking, we must generate claims which are either supported or refuted by the literature, as well as those for which not enough information is present to make a veracity judgement, in order that they may be paired with appropriate evidence documents to serve as training data for fact checking systems. As such, we require methods which can take the claims in C which are entailed by the source sentence and generate negations to acquire refuted claims.
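The task inputs and outputs above can be mirrored in a small container type. This is purely illustrative; the class name and fields are our own and not part of any released code:

```python
from dataclasses import dataclass, field

# Hypothetical container mirroring the task notation: a source sentence s,
# optional context sentences X, generated claims C, and refuted variants.
@dataclass
class ClaimGenExample:
    source: str                                      # scientific sentence s
    context: list = field(default_factory=list)      # optional context sentences X
    claims: list = field(default_factory=list)       # claims C entailed by s and X
    negations: list = field(default_factory=list)    # refuted variants of claims in C

ex = ClaimGenExample(source="Drug X lowers blood pressure.")
ex.claims.append("Drug X lowers blood pressure.")
```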

Generating Supported Claims
We experiment with two generation methods designed to produce claims which are supported by the source sentence. The first method is an entity-centric unsupervised method adapted from Pan et al. (2021) which requires no <sentence, claim> pairs (CLAIMGEN-ENTITY). The second is a new method that uses BART (Lewis et al., 2020) trained on a small set of <sentence, claim> pairs to directly generate claims (CLAIMGEN-BART). For each sample i, we refer to the input source sentence as s_i, the context sentences as x_l^(i) ∈ X_i, and the output claims as C_i, consisting of k claims {c_1^(i), ..., c_k^(i)}. As in Wadden et al. (2020), we use citation sentences as unlabelled sentences for generation, since these provide a natural link to an evidence document. Various components of our modeling pipelines take advantage of models pretrained on datasets for NER, NLI, QA, and fact checking. We provide an overview of these datasets in §A.4.

Figure 2: KBIN method. We start with NER and linking to UMLS using scispaCy. We then find the most similar concepts with the same type using cui2vec, replace the entity in the source sentence using the canonical name and aliases of similar entities, and rank them using GPT-2. Finally, from the highest ranked replacements, we select the claim which maximizes contradiction with the original claim using an external NLI model.

CLAIMGEN-ENTITY
We adapt the entity-centric method presented in Pan et al. (2021) as an unsupervised claim generation approach. This method has been tested on general domain fact checking, but has not been used for scientific claim generation and zero-shot scientific fact checking. In particular, we re-implement the base method used for generating supported claims and adapt it to the biomedical domain, substituting in a domain-specific model for named entity recognition. The method consists of the following steps for a given sample i:
Named Entity Recognition We first extract the set of entities in the source sentence using a scispaCy NER model trained on MedMentions (Mohan and Li, 2019), which consists of 4,392 PubMed abstracts exhaustively annotated for mentions of UMLS entities (Bodenreider, 2004).
Question Generation For question generation, we use BART trained on questions from SQuAD (Rajpurkar et al., 2016). As input for training, we encode a concatenation of the context and answer text from a given SQuAD question, and train the model to decode the question. During inference, we concatenate the source sentence s_i and an entity e_j^(i) and decode a question q_j^(i).
Question to Claim Finally, as in Pan et al. (2021), we use a second BART model to generate declarative claims from questions. We train the model on the QA2D dataset (Demszky et al., 2018), which contains declarative full sentences paired with questions and their answers from SQuAD. The model is trained by encoding a concatenation of the question and answer, and decoding the full declarative sentence. At inference time, we concatenate and encode q_j^(i) and e_j^(i), and decode a claim c_j^(i).
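The three-step pipeline (NER, question generation, question-to-claim) can be sketched as below. All three component functions are hypothetical stand-ins for the actual scispaCy and BART models, stubbed so the control flow is runnable:

```python
# Sketch of the CLAIMGEN-ENTITY pipeline: NER -> question generation -> question-to-claim.
# The component functions are illustrative stubs, not the trained models from the paper.

def extract_entities(sentence):
    # Stand-in for a scispaCy NER model trained on MedMentions.
    return ["ibuprofen"] if "ibuprofen" in sentence else []

def generate_question(source, entity):
    # Stand-in for BART trained on SQuAD: encode (context, answer), decode a question.
    return f"What reduces inflammation? [answer: {entity}]"

def question_to_claim(question, answer):
    # Stand-in for a second BART model trained on QA2D: encode (question, answer),
    # decode a declarative claim.
    return f"{answer.capitalize()} reduces inflammation."

def claimgen_entity(sentence):
    claims = []
    for entity in extract_entities(sentence):  # one claim per extracted entity
        question = generate_question(sentence, entity)
        claims.append(question_to_claim(question, entity))
    return claims

claims = claimgen_entity("We found that ibuprofen reduces inflammation.")
```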

CLAIMGEN-BART
We introduce a fully-supervised model for claim generation based on BART trained on <citance, claim> pairs. For this, we use the manual citance re-writes released by the SciFact authors, which consist of citances from scientific papers rewritten as one or more atomic claims which are directly entailed by the citance. For training, we encode the citance, as well as the sentences immediately before and after the citance (the context), and train the decoder to generate claims directly. We choose to encode the context as well to help de-contextualize generated claims. We concatenate the citance and context using a double pipe (i.e. X_i||s_i), and train the model to generate one claim at a time. We use top-k sampling to generate multiple claims, with k set to the number of noun chunks in the original source citance (identified using scispaCy).
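The input construction described above might look roughly as follows; build_input and count_noun_chunks are illustrative stand-ins (the paper's implementation uses scispaCy's noun-chunk iterator):

```python
# Sketch of CLAIMGEN-BART input assembly: the context X_i and citance s_i are joined
# with a double pipe, and k (the number of claims sampled) is set to the number of
# noun chunks in the citance.

def build_input(context_sentences, citance):
    # Encoder input: X_i || s_i
    return " ".join(context_sentences) + "||" + citance

def count_noun_chunks(citance):
    # Crude stand-in heuristic; the paper uses scispaCy noun chunks.
    return max(1, sum(1 for tok in citance.split() if tok[0].isupper()))

citance = "Aspirin lowers Fever in Adults."
encoder_input = build_input(["Prior work studied analgesics."], citance)
k = count_noun_chunks(citance)  # number of claims to draw with top-k sampling
```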

Knowledge Base Informed Negations
CLAIMGEN-ENTITY and CLAIMGEN-BART only produce claims which are entailed by the source sentence. Additionally, we are interested in producing claim variants which are directly refuted by the original sentence, as these negations are needed when building fact checking datasets and for training fact checking models. Work in Wadden et al. (2020) created these negations manually, and some work has begun to explore automatically generating these negations for scientific claims (Saakyan et al., 2021). To this end, we leverage the availability of large curated biomedical knowledge bases to develop a principled approach to claim variant generation. In particular, we use the UMLS metathesaurus (Bodenreider, 2004), which unifies hundreds of different ontologies in biomedicine, as a source of term replacements for negations.
We provide an overview of the KBIN algorithm in Algorithm 1 and Figure 2. KBIN works by first performing NER on an input claim c, obtaining entities {e_1, ..., e_n} ∈ E. For each entity e_j in E, we link the entity to its unique concept u_j in UMLS using the scispaCy entity linker. If the entity is linked, we select all concepts which are siblings to u_j in the concept hierarchy and which have the same semantic type (e.g. "Clinical Drug"). We rank all selected concepts by their cosine distance to the entity concept using pre-trained UMLS concept vectors, retaining the top 20 closest concepts. For this, we use cui2vec (Beam et al., 2020), which contains pre-trained vectors for 108,477 UMLS concepts trained on medical documents from diverse sources. For each of the related concepts, we generate candidate claim variants by replacing the entity text in the original claim with the canonical name and aliases of the related concept from UMLS. We rank all replacement sentences by their perplexity using a pre-trained GPT-2 model (Radford et al., 2019), keeping the sentence with the lowest perplexity for each replacement. Finally, from among these most fluent sentences, we select the replacement which maximizes the NLI prediction of contradiction with the original claim. For this, we use a RoBERTa model (Liu et al., 2019) pre-trained on MNLI (Williams et al., 2018).
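A minimal sketch of KBIN's two ranking stages, with cheap stand-ins for the GPT-2 perplexity scorer and the RoBERTa-MNLI contradiction scorer (both scorers and all candidate strings are illustrative):

```python
# For each related UMLS concept we keep the surface form (canonical name or alias)
# with the lowest perplexity, then pick the rewrite with the highest contradiction
# probability against the original claim.

def perplexity(sentence):
    # Stand-in for GPT-2 perplexity; shorter sentences score as more fluent here.
    return len(sentence)

def contradiction_prob(premise, hypothesis):
    # Stand-in for RoBERTa-MNLI P(contradiction | premise, hypothesis).
    return 0.9 if "worsens" in hypothesis else 0.2

def kbin_select(original_claim, candidates_per_concept):
    # candidates_per_concept: one list of rewrites per related UMLS concept.
    most_fluent = [min(cands, key=perplexity) for cands in candidates_per_concept]
    return max(most_fluent, key=lambda c: contradiction_prob(original_claim, c))

claim = "Exercise improves cognitive function."
candidates = [
    ["Exercise worsens cognitive function.", "Exercise worsens cognitive function greatly."],
    ["Exercise alters cognitive function."],
]
negation = kbin_select(claim, candidates)
```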

Experiments
We investigate three primary research questions:

RQ1 Do automatically generated claims enable zero-shot scientific fact checking?
RQ2 What is the percentage of high-quality claims generated using our methods?
RQ3 How does KBIN compare with previous work for claim negation in terms of generating contradictions?
For RQ1, we use CLAIMGEN-ENTITY and CLAIMGEN-BART generated claims to train a fact checking model, evaluating on the SciFact dataset (Wadden et al., 2020) and comparing to relevant baselines. To answer RQ2 and RQ3, we design annotation criteria and perform manual evaluations with a group of expert annotators (details in §5.2).

RQ1: Fact Checking Performance
SciFact Task The SciFact fact verification task consists of: given a claim c and a corpus of scientific abstracts D, retrieve evidence abstracts from D, predict if the claim is supported or refuted by those documents or if there is not enough information (NEI) to make a prediction, and optionally determine what the rationale sentences are that explain the prediction. Here we focus on the oracle abstract setting of the task, in which gold abstracts are provided to the model and there is no retrieval component. This setup exists in the scientific fact checking literature (Saakyan et al., 2021), and allows us to focus on one component of the fact checking pipeline for evaluating the impacts of claim generation.
Creating Training Data for the Zero-shot Setting We require a set of claim-abstract pairs for training where the abstract either supports, refutes, or does not provide evidence for the given claim. We exploit citation relationships to generate claims paired with potential evidence, using citances from the CiteWorth dataset (Wright and Augenstein, 2021) as source citances for generation. Supports claims are produced by directly pairing a generated claim with the abstracts of documents cited by the source citance. For refutes claims, we negate a generated claim using KBIN and pair it with the same abstract. For claims labelled NEI, we pair the generated claim or negated claim with the abstract of the source document of the citance; the source document is related to the claim but presumably does not directly support or refute the claim given the need for a citation.
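The label-assignment scheme above can be sketched as a small function; negate() is a stand-in for KBIN and all strings are illustrative:

```python
# Zero-shot training-data construction: SUPPORTS pairs a generated claim with a cited
# abstract, REFUTES pairs its negation with the same abstract, and NEI pairs the claim
# with the citing paper's own abstract.

def negate(claim):
    # Hypothetical stand-in for KBIN negation.
    return "NOT " + claim

def make_training_pairs(claim, cited_abstract, source_abstract):
    return [
        (claim, cited_abstract, "SUPPORTS"),
        (negate(claim), cited_abstract, "REFUTES"),
        (claim, source_abstract, "NEI"),
    ]

pairs = make_training_pairs("Drug X lowers blood pressure.",
                            "Abstract of the cited paper.",
                            "Abstract of the citing paper.")
```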
Experimental Setup In our experimental setup, we use LongChecker (Wadden et al., 2021), a Longformer (Beltagy et al., 2020) model adapted for scientific fact checking. The model forms its input by concatenating a claim with its evidence abstract, inserting separator tokens between sentences, and uses a classification head to predict the veracity label from the representation of the [CLS] token. We explore several different setups for our training data. As a baseline, we experiment with pretraining only on FEVER claims (Thorne et al., 2018), which are general domain fact checking data based on Wikipedia. We also include an experiment where we manually tune a threshold for the prediction of NEI on the SciFact training data, as we saw that the model tends to overpredict this label without any fine-tuning on in-domain data. We also provide an upper bound on performance by fine-tuning on the in-domain train split of SciFact. Finally, we experiment with both CLAIMGEN-ENTITY and CLAIMGEN-BART as sources of training data generated from CiteWorth citances, pairing both with KBIN for negations. We note that though CLAIMGEN-BART requires manually re-written claims as training data for generating supports claims, it does not use any claims paired with evidence manually labelled for veracity, thus making it zero-shot for the SciFact fact checking task. In all cases we test on the SciFact dev split. Hyperparameter information, including the number of training instances, is given in §A.3, and code and data will be released upon paper acceptance. In all cases, results are reported as macro-F1.
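The claim-plus-abstract input format described for LongChecker might be assembled roughly as follows (token strings are illustrative; the actual model uses its own tokenizer's special tokens):

```python
# Sketch of LongChecker-style input construction: the claim is concatenated with the
# abstract, with separator tokens between sentences; the veracity label is predicted
# from the [CLS] position.

def build_longchecker_input(claim, abstract_sentences):
    parts = ["[CLS]", claim]
    for sent in abstract_sentences:
        parts.extend(["[SEP]", sent])
    return " ".join(parts)

inp = build_longchecker_input("Drug X lowers blood pressure.",
                              ["We studied Drug X.", "Blood pressure dropped."])
```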

Results
Our results on SciFact are given in Table 1. With an upper bound of 77.70 F1, we see that a model fine-tuned on automatically generated claims is able to achieve within 90% of the performance of a model trained on in-domain manually written claims. This is also invariant to the method used to generate claims, as both CLAIMGEN-ENTITY and CLAIMGEN-BART produce similar results. Additionally, both methods provide significant gains over pre-training on FEVER only, especially when no threshold on NEI claims is used but also when re-calibrating the model to predict NEI less often.

RQ2: Claim Quality Evaluation
Next, we explore if there are differences between our methods in terms of claim quality and the percentage of valid claims. For this, we ask three expert annotators to manually assess generated claims along a number of quality criteria. One annotator has undergraduate training in the life sciences and graduate training in computer science; the other two annotators have undergraduate training in the life sciences and materials science, respectively. We define a set of criteria for evaluation, given in Table 2. These criteria are inspired by the AIDA (Atomic, Independent, Declarative, and Absolute) criteria of Kuhn et al. (2013). They are also based on similar human evaluation criteria used to assess generation quality for related tasks (Sai et al., 2020).

Table 2: Evaluation criteria and labels.
Fluency
  3 - The claim contains no grammatical errors and its meaning can be understood
  2 - The claim contains some grammatical errors but is still understandable
  1 - The claim contains many grammatical errors and cannot be understood
De-Contextualized
  1 - The claim is interpretable on its own and requires no context; the addition of the original context does not alter the meaning of the claim
  0 - The claim cannot be interpreted in a meaningful way without the original context
Atomicity
  1 - The claim is about a single entity/process (atomic)
  0 - The claim is non-atomic and can be broken down into multiple claims
Faithfulness
  5 - The claim is correct and fully supported and complete with respect to the original sentence and context
  4 - The claim is correct with respect to the original sentence and context but leaves out information from the original sentence and context
  3 - The claim is related to the original sentence and does not contain incorrect information but is not explicitly stated in the original sentence
  2 - The claim contains explicitly incorrect information relative to the original sentence and context
  1 - The claim has nothing to do with the original sentence
We develop an initial set of guidelines for the annotators and conduct two rounds of pilot annotations to improve instructions and increase agreement. For the final evaluation, we generate claims on a set of 100 citances sampled from the CiteWorth dataset (Wright and Augenstein, 2021), which contains citations in context for over 1M citances spanning 10 domains. We limit the citances to those from papers in biology and medicine to match the domain of SciFact. Annotator agreement is measured as Krippendorff's α (Krippendorff, 2011) on 236 claims for each category except fluency, where we measure the percentage of claims on which all annotators agree (we use agreement percentage for fluency because most ratings are the same (3), so any disagreements have an outsized influence on α). The annotators then assess 1,049 total claims (including the 236 shared claims). Each annotator rates all criteria for an individual claim, starting with fluency, then de-contextualized, then atomicity, then faithfulness. We are mainly interested in claim quality and yield, so annotators only annotate "de-contextualized" if the claim is legible (fluency > 1), and only annotate "atomicity" and "faithfulness" if the claim is also de-contextualized (so one is able to discern meaning from the claim). This results in the following rule for acceptable claims based on the definitions for the labels in each category: Fluency > 1 AND De-Contextualized = 1 AND Atomicity = 1 AND Faithfulness > 3. An acceptable claim is thus legible, meaningful, represents a single aspect of a scientific entity or process, and accurately reflects the information presented in the original citance.
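The acceptance rule can be written directly as a predicate over the four annotated scores:

```python
# Acceptance rule from the annotation study: a claim is acceptable iff
# Fluency > 1 AND De-Contextualized = 1 AND Atomicity = 1 AND Faithfulness > 3.

def is_acceptable(fluency, decontextualized, atomicity, faithfulness):
    return (fluency > 1 and decontextualized == 1
            and atomicity == 1 and faithfulness > 3)

assert is_acceptable(3, 1, 1, 5)      # fully valid claim
assert not is_acceptable(3, 1, 1, 3)  # related but not explicitly stated in the citance
assert not is_acceptable(1, 1, 1, 5)  # illegible
```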
The results of claim quality annotation are given in Table 3. Note that these are on claims generated by CLAIMGEN-ENTITY and CLAIMGEN-BART (see examples in Table 4), and thus are only supports claims. We first note that inter-annotator agreement is very high for fluency and moderate across all other criteria. Generated claims are quite fluent across methods, with a small minority of instances being illegible. Unsurprisingly, CLAIMGEN-BART improves over CLAIMGEN-ENTITY across all categories except for atomicity. This intuitively makes sense, as CLAIMGEN-ENTITY directly produces claims which are about a single entity. CLAIMGEN-ENTITY yields a higher number of claims per citance as it generates one claim for every entity in the sentence, but the precision of acceptable claims is much lower than that of CLAIMGEN-BART. Thus, there is a tradeoff between the number of claims generated by each method and their acceptability. While higher yield could lead to higher coverage of claims in the original text, we leave this study to future work.
Table 3: Average annotation score, agreement, and claim yield for each category. De-contextualized is only annotated if fluency > 1; atomicity and faithfulness are only annotated if fluency > 1 and de-contextualized = 1. # Gen is the total number of claims generated by each method, and # Accept is the number of acceptable claims generated.

Table 4: Example citances and generated claims, with ratings for Fluency, De-Contextualized, Atomicity, and Faithfulness (Fl,D,A,Fa).

Citance: Due to its geographic position and geological history, the island of Sardinia is characterized by a remarkable richness of endemic species and represents one of the most prominent biodiversity hotspots in the Mediterranean basin.
Claim (3,1,1,5): The island of Sardinia is characterized by a remarkable richness of endemic species.

Citance: Frequently reported symptom-eliciting chemicals and environmental agents include fragranted products, motor-vehicle exhaust fumes, cleaning agents, freshly printed papers or magazines, and smoke from wood burners.
Claim (3,1,1,5): Frequently reported symptom-eliciting chemicals and environmental agents are fragranted products.

Citance: The herbicide inhibits EPSPS (5-enolpyruvylshikimate-3-phosphate synthase) in the shikimate pathway, which has a key role in the biosynthesis of aromatic amino acids and is required for survival of the plant.
Claim (3,1,1,5): The herbicide inhibits EPSPS in the shikimate pathway.

Citance: Experimental models of OA, such as the intra-articular injection of monosodium acetate (MIA), are associated with joint pathology and pain behaviour comparable to clinical OA.
Claim (3,1,0,4): OA is associated with joint pathology and pain behaviour comparable to clinical OA.

Next, we examine the similarity between generated claims and manually written claims from SciFact. We generate claims for each source citance s_i in the SciFact dev split, and calculate the ROUGE score (Lin, 2004) between each generated claim c_j^(i) and each manually written claim d_k^(i). From this, we take the average of the max ROUGE score for each generated claim. Formally, given |C| generated claims, we calculate:

  (1/|C|) Σ_i Σ_j max_k ROUGE(c_j^(i), d_k^(i))

Our evaluation results are given in Table 5 (ROUGE scores between generated and manually written reference claims in the SciFact dataset). Both methods produce claims which have high overlap with the reference claims, though claims generated directly using BART are significantly closer to the reference claims than those generated using CLAIMGEN-ENTITY. Finally, we note that these scores are in the range of state-of-the-art models used for paraphrase generation, establishing a solid baseline for this task (Zhou and Bhat, 2021).
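The average-of-max aggregation can be sketched as below; rouge1_f1 is a simple unigram-overlap ROUGE-1 used here only for illustration, whereas the paper reports standard ROUGE (Lin, 2004):

```python
# For each generated claim, take the max ROUGE score over the reference claims,
# then average over all generated claims.

def rouge1_f1(candidate, reference):
    # Minimal unigram-overlap ROUGE-1 F1 for illustration.
    cand, ref = candidate.lower().split(), reference.lower().split()
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(cand))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def avg_max_rouge(generated, references):
    return sum(max(rouge1_f1(c, d) for d in references)
               for c in generated) / len(generated)
```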

RQ3: Negation Evaluation
Finally, we perform a manual evaluation to compare KBIN against other methods of negation generation. Annotators evaluate negations based on Fluency and Entailment. We adopt the definitions used to annotate the SNLI corpus (Bowman et al., 2015), in which the annotator is given an original claim (premise) and a generated negation (hypothesis) and asked to select from among the following options, including a SKIP option for fluency:

3 - The hypothesis is DEFINITELY FALSE given the premise
2 - The hypothesis MIGHT BE TRUE given the premise
1 - The hypothesis is DEFINITELY TRUE given the premise
SKIP - The hypothesis contains a lot of grammatical errors and cannot be understood

We compare KBIN to two baselines. The first baseline replaces a single entity in the claim with a random entity of the same type, similar to the method in Pan et al. (2021). The second is the proposed negation generation method in Saakyan et al. (2021). That method is based on extracting keywords using YAKE (Campos et al., 2020), an unsupervised method based on statistical text features, replacing those keywords using text infilling with a pre-trained language model, and selecting the replacement with the highest contradiction score using a model pre-trained for NLI.

Table 6: Example generated negations.

Original claim: Tonic signaling from the SCFV prevents constitutive stimulation.
Entity replace: Tonic signaling from the SCFV under care of respiratory physician (finding) constitutive stimulation.
Saakyan et al. (2021): Tonic signaling from the inflammatory stimulation.
KBIN: Tonic signaling from the SCFV accelerates constitutive stimulation.
KBIN: Activation of the RAC1 homolog CED-10 mediate viable cells in SRGP-1 mutant Caenorhabditis elegans.

We generate negations for 100 claims using all three methods. For annotation, generated negations from all three methods are aggregated and the order of negation methods is randomized for each of the 100 claims. Example negations generated by all three methods are given in Table 6, and annotation results for fluency and entailment are given in Table 7. First, KBIN produces more fluent claims than both baselines. Additionally, KBIN produces more convincing negations on average than both baselines. We observe that the most common operation performed by all three methods is to replace a noun phrase. KBIN has the benefit of being able to replace many entity types corresponding to concepts found in UMLS, which also include verb phrases that encode relations. Finally, KBIN improves over the baseline from Saakyan et al. (2021) by producing fewer claims which are directly entailed by the source claim, i.e., that maintain the original meaning and do not negate the original claim.

Further Analysis
To give further insight into the quality of claims generated using our methods, we perform an experiment where we train and test models for scientific fact checking using claims only. This "claim-only" experiment helps us assess whether the negation process introduces data artifacts that can be leveraged by the model to predict veracity. We present results from training on claims generated using CLAIMGEN-BART and KBIN, compared against training on the original SciFact training data (which has manually written negations), along with random and majority baselines, in Figure 3.
We observe that there are likely some dataset artifacts in the original SciFact claims that lead to model performance well above the majority and random baselines. This phenomenon has been observed in general domain natural language inference datasets as well (Poliak et al., 2018). Training on claims generated using our methods results in performance that is much closer to random performance on the SciFact dev set, indicating that the label-associated bias in the original training data is not present in our generated claims, and suggesting a possible domain shift between the original SciFact claims and our generated claims. This can further explain some of the performance gap we observe between zero-shot fact checking and the upper bound of training on manually labeled training data (Table 1).

Related Work
Scientific Fact Checking Our work follows a line of recent literature on scientific fact checking (Wadden et al., 2020). The goal of this task is to determine the veracity of claims related to scientific topics by retrieving appropriate documents from the scientific literature, finding evidentiary sentences from those documents, and determining whether claims are supported or refuted, or whether there is not enough evidence to make a judgement. The task closely resembles general domain fact checking (Thorne et al., 2018; Augenstein et al., 2019). Well-performing systems on this task use large language models to perform neural document retrieval (Pradeep et al., 2020) or multi-task learning of rationale prediction and stance prediction (Li et al., 2021; Wadden et al., 2021). Recent work on general domain fact checking has also introduced methods for adversarial generation of claims which are particularly difficult to fact-check (Thorne et al., 2019; Atanasova et al., 2020), and for performing the task without any labeled data (Pan et al., 2021).
Our proposed methods extend zero-shot fact checking to the scientific domain, demonstrating that one can achieve 90% of the inference performance of state-of-the-art systems without domain-specific labeled data.
Generating Training Data Our work is also related to methods for the automatic generation of training data. Generation of synthetic data has been used for multiple tasks, for example question answering (Duan et al., 2017;Riabi et al., 2021), knowledge-base completion (Safavi et al., 2021), and fact-checking (Pan et al., 2021). Most similar to our setting, the COVID-Fact dataset (Saakyan et al., 2021) contains claims related to COVID-19 crawled from Reddit, and is constructed semiautomatically. Claims which are supported by evidence are extracted from Reddit and verified by human annotators, while negations of these claims are generated automatically via masked language model infilling. KBIN improves upon the negation method proposed in this work by leveraging in-domain structured knowledge via UMLS.

Conclusion
In this work, we propose the task of scientific claim generation, presenting CLAIMGEN-BART, CLAIMGEN-ENTITY, and KBIN to perform the task. We demonstrate that generated claims can be used to train a model for zero-shot scientific fact checking and obtain within 90% of the performance of a model trained on human-written claims. Through a rigorous user study we demonstrate that CLAIMGEN-BART produces higher quality claims than CLAIMGEN-ENTITY, and that KBIN produces more fluent and more convincing negations than previous work. Work remains to improve claim generation quality and assess the impacts of generated claims in other domains of science, as well as how generated claims can be used in the evidence retrieval component of fact checking systems. We hope that our methods will be used to facilitate future work by enabling faster creation of training datasets and improving the performance of models on the timely and important task of scientific fact checking.

Ethical Considerations
Automated scientific fact checking has great potential value to the scientific community, as well as for addressing phenomena such as the propagation of scientific misinformation. Our aim in releasing models for scientific claim generation is to improve the generalizability of scientific fact checking systems in domains with fewer training resources. When training our fact checking models with generated or synthetic data, there are questions regarding the veracity of the generated data and whether a model trained on inferred labels could produce trustworthy judgments. We hope that by introducing this task and these models, we will enable the community to study such questions, while contributing to data curation in a domain in which such curation would normally require significant manual effort and cost.

A.1 Computing Infrastructure
All experiments were run on an Amazon Web Services p3.2xlarge instance using a Tesla V100 GPU with 16GB of RAM.

A.2 Number of Parameters per Model
The sizes of each of the models used in this work are given in Table 8.