WSDMS: Debunk Fake News via Weakly Supervised Detection of Misinforming Sentences with Contextualized Social Wisdom

In recent years, we have witnessed an explosion of false and unverified information (i.e., rumors) that goes viral on social media and shocks the public. Rumors can trigger versatile, mostly controversial stance expressions among social media users. Rumor verification and stance detection are different yet related tasks. Fake news debunking primarily focuses on determining the truthfulness of news articles, which oversimplifies the problem: fake news often combines elements of both truth and falsehood. It thus becomes crucial to identify specific instances of misinformation within an article. In this work, we investigate a novel task in fake news debunking: detecting sentence-level misinformation. A major challenge in this task is the absence of a training dataset with sentence-level veracity annotations. Inspired by the Multiple Instance Learning (MIL) approach, we propose a model called Weakly Supervised Detection of Misinforming Sentences (WSDMS). The model requires only bag-level labels for training but is capable of inferring both sentence-level misinformation and article-level veracity, aided by relevant social media conversations that are attentively contextualized with news sentences. We evaluate WSDMS on three real-world benchmarks and demonstrate that it outperforms state-of-the-art baselines in debunking fake news at both the sentence and article levels.


Introduction
Misinformation, such as fake news, poses tremendous risks and threats to contemporary society. The detection of fake news entails various technical challenges (Glockner et al., 2022), one of which is accurately identifying false elements within news articles. This challenge arises because creators of fake news blend authentic and fabricated content, complicating the determination of overall veracity (Solovev and Pröllochs, 2022). Such instances have been prevalent during the Covid-19 pandemic.
Fake news detection aims to determine the veracity of a given news article (Shu et al., 2017). Previous analysis has revealed that users often share comments and provide evidence about fake news on social media platforms (Zubiaga et al., 2017), which has led to a growing stream of research that leverages these social engagements, along with the content of news articles, to aid in fake news detection (Pan et al., 2018; Shu et al., 2019a; Min et al., 2022). This approach bears analogies to rumor detection, where the focus is on assessing a specific statement rather than an entire news article (Wu et al., 2015; Ma et al., 2018; Bian et al., 2020; Lin et al., 2021; Song et al., 2021; Park et al., 2021; Zheng et al., 2022; Xu et al., 2022). Many studies in this domain aim to train supervised classifiers using features extracted from the social context and the content of the claim or article. However, existing fake news detection models predominately focus on coarse-level classification of the entire article, which oversimplifies the problem. Misinformation can be strategically embedded within an article by manipulating portions of its content to enhance its credibility (Feng et al., 2012; Rogers et al., 2017; Zhu et al., 2022). Therefore, we target a fine-grained task that aims to identify sentences containing misinformation within an article, which can be jointly learned with article-level fake news detection.
Figure 1 shows an illustrative example of a fake news article titled "NASA Will Pay You 100,000 USD To Stay In Bed For 60 Days!", where the sentences in the article can be linked to a set of social conversations organized as propagation trees of posts. These conversations contain opinions and evidence that can aid veracity classification at the sentence and article levels, specifically in spotting misinforming sentences. For instance, sentence s_3 can be debunked by referring to trees t_1 and t_3, as they provide evidence that contradicts the incorrect reward amount and duration mentioned in the sentence. This information helps in determining that the article is fake. Conversely, if we already know that the article is fake, we can infer that there must be misinforming sentences present within it. However, existing methods are not readily applicable to the identification of sentence-level misinformation for two main reasons: 1) Obtaining veracity labels for the sentences in an article is costly, as it requires annotators to exhaustively fact-check each sentence. 2) While rumor detection models can predict the label of a given claim, they often assume the availability of social conversations that correspond to the claim; it is difficult, however, to establish a correspondence between social conversations and specific sentences within a news article. Inspired by multiple instance learning (MIL) (Foulds and Frank, 2010),
we attempt to develop an approach for debunking fake news via weakly supervised detection of misinforming sentences (i.e., instances), called WSDMS, using only the available article-level veracity annotations (i.e., bag-level labels) and a handful of social conversations related to the news.
To gather the relevant social conversations associated with an article, we employ established methods from fake news detection that rely on social news engagement data collection (Shu et al., 2020), which provides the necessary conversation trees linked to the article in question. (Our code is available at https://github.com/HKBUNLP/WSDMS-EMNLP2023.) We devise a hierarchical embedding model to establish connections between each sentence in the article and its corresponding conversations, facilitating the identification of sentence-level misinformation. Standard MIL determines the bag-level label as positive if one or more instances within the bag are positive, and negative otherwise (Dietterich et al., 1997). To improve its tolerance to sentence-level prediction errors, we further develop a collective attention mechanism for more accurate article veracity inference on top of the sentence-level predictions. The entire framework is trained end-to-end by optimizing a loss function that alleviates prediction bias by considering both sentence- and article-level consistencies. Our approach ensures that the model captures the nuances of misinformation at both levels of granularity. Our contributions are summarized as follows:

• Unlike existing fake news detection approaches, we introduce a new task focused on spotting misinforming sentences in news articles while simultaneously detecting article-level fake news.
• We develop WSDMS, a MIL-based model that contextualizes news sentences with the aid of social conversations about the news and uses only article veracity annotations to weakly supervise sentence representation and model training.
• Our method achieves superior performance over state-of-the-art baselines on sentence- and article-level misinformation detection.

Related Work
Early studies on fake news detection exploited various approaches to extract features from news content and social context information, including linguistic features (Potthast et al., 2018; Azevedo et al., 2021), visual clues (Jin et al., 2016), temporal traits (Kwon et al., 2013; Ma et al., 2015), and user behaviors and profiles (Castillo et al., 2011; Ruchansky et al., 2017; Shu et al., 2019b). Subsequent studies employed neural networks to automatically learn deep feature representations from similar sources of data (Ma et al., 2016; Popat et al., 2018; Ma et al., 2019; Nguyen et al., 2020; Kaliyar et al., 2021; Sheng et al., 2022). Furthermore, researchers have incorporated external knowledge sources (Pan et al., 2018; Dun et al., 2021; Hu et al., 2021) and combined multi-modal data (Wang et al., 2018, 2021; Fung et al., 2021; Wu et al., 2021; Silva et al., 2021; Chen et al., 2022) to enhance learning and improve fake news detection performance. Notably, social context information has played a crucial role in debunking fake news and rumors (Yuan et al., 2019; Khoo et al., 2020; Yang et al., 2022a; Ma et al., 2020; Mehta et al., 2022). The utilization of social context structures has spurred the development of Graph Neural Networks (GNNs) such as Kernel Graph Attention Networks (KGAT) (Liu et al., 2020) and Graph-aware Co-Attention Networks (GCAN) (Lu and Li, 2020), which have demonstrated effectiveness in various fake news-related tasks. However, existing approaches (Shu et al., 2019a; Jin et al., 2022; Yang et al., 2022b) generally aim to detect article-level fake news and lack the capability to tell which specific sentences contain misinformation.

MIL is a weakly supervised approach that infers instance-level labels (e.g., sentence or pixel) when training data is annotated only with bag-level labels (e.g., document or image) (Dietterich et al., 1997). Several MIL variants have been developed based on the threshold-based MIL assumption (Foulds and Frank, 2010)
and the weighted collective MIL assumption (Pappas and Popescu-Belis, 2017), and have been successfully applied to various downstream tasks such as recommendation systems (Lin et al., 2020), sentiment analysis (Angelidis and Lapata, 2018), keyword extraction (Wang et al., 2016), community question answering (Chen et al., 2017), and, more recently, joint detection of stances and rumors (Yang et al., 2022a). We adopt the weighted collective MIL assumption (Pappas and Popescu-Belis, 2017), incorporating a weight function over the sentence space to calculate the article veracity probability. This assumption allows a more robust prediction, as it avoids bias introduced by less important instances.

Problem Definition
We define a fake news dataset as a set of news articles {A}, where each article consists of a set of n sentences A = {s_i}_{i=1}^n and s_i is the i-th sentence. For each article, we assume there is a set of m relevant social conversation trees denoted as T = {t_j}_{j=1}^m, where t_j is the j-th conversation tree containing posts (i.e., nodes) and message propagation paths (i.e., edges), which provide the social context information for A. Our task is to predict the veracity of information at both the sentence level and the article level in a unified model:

• Sentence-level Veracity Prediction aims to determine whether each s_i ∈ A is a misinforming sentence or not, given its relevant social context information T. That is, we learn a function f(s_i, T) → p̂_i, where p̂_i is the sentence-level prediction probability as to whether s_i is misinforming or not.
• Article-level Veracity Prediction aims to classify the veracity of the article A on top of the sentence-level misinformation detection. That is, we learn a function g(A) → ŷ, where ŷ denotes the prediction as to whether A is fake or true. Note that we have only article-level ground truth for model training.
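The data organization implied by this definition can be sketched as a pair of containers; all names below are illustrative and not from the paper:

```python
# Illustrative data structures for the problem definition (names are assumptions).
from dataclasses import dataclass, field

@dataclass
class ConversationTree:
    posts: list                 # post texts (tree nodes)
    edges: list                 # (parent_index, child_index) reply pairs

@dataclass
class Article:
    sentences: list                              # s_1 ... s_n
    trees: list = field(default_factory=list)    # relevant conversation trees T
    label: int = 0                               # bag-level veracity: 0 = real, 1 = fake

# f(s_i, T) -> p_i (sentence level) and g(A) -> y_hat (article level) are learned
# jointly, supervised only by `label`.
```

Only `label`, the bag-level annotation, is available at training time; sentence-level labels exist only in the evaluation sets.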

WSDMS: Our MIL-based Model
Detecting nuanced instances of misinformation at the sentence level solely based on article content is challenging (Feng et al., 2012). Previous studies have demonstrated that social media posts contain valuable opinions, conjectures, and evidence that can be leveraged to debunk claim-level misinformation such as rumors (Ma et al., 2017, 2018; Wu et al., 2019), where claims, typically presented as short sentences, share similar characteristics with sentences in news articles. We hypothesize that misinforming sentences can be detected by incorporating relevant information from the social context associated with the article. We therefore establish connections between social conversations and specific news sentences, enabling the contextualization of social wisdom to enrich sentence representations and better capture sentence veracity.
The architecture of our MIL-based weakly supervised model WSDMS is illustrated in Figure 2. WSDMS consists of four closely coupled components: input embedding, sentence and conversation tree linking, misinforming sentence detection, and article veracity prediction. We describe them in detail in this section.

Input Embeddings
We represent the word sequence of each news sentence and social post using SBERT (Reimers and Gurevych, 2019), which maps the sequence into a fixed-size vector. Let a sequence S = w_1 w_2 ⋯ w_|S| consist of |S| tokens, where S may denote a news title, a news sentence, or a post in a conversation tree. The SBERT embedding of S is then given by SBERT(w_1, ⋯, w_|S|). In the rest of the paper, given an article A, we use T to denote the news title and p and q to denote posts in a conversation tree; T⃗, s⃗_i, p⃗ and q⃗ denote the respective SBERT embeddings of T, s_i, p and q.
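The embedding step can be sketched as follows. Because loading an actual SBERT checkpoint is beyond the scope of a snippet, a toy hash-based encoder stands in for the model; only the interface (token sequence in, fixed-size unit vector out) mirrors the paper, and `DIM` and the hashing scheme are purely illustrative.

```python
# Stand-in for SBERT: maps any token sequence to a fixed-size vector.
# Real SBERT models emit 384- or 768-dim vectors; DIM=8 is a toy choice
# tied to the 32-byte SHA-256 digest (8 x uint32).
import hashlib
import numpy as np

DIM = 8

def embed(text: str) -> np.ndarray:
    vecs = []
    for tok in text.lower().split():
        digest = hashlib.sha256(tok.encode()).digest()   # deterministic per token
        vecs.append(np.frombuffer(digest, dtype=np.uint32).astype(np.float64))
    v = np.mean(vecs, axis=0) if vecs else np.zeros(DIM)
    return v / (np.linalg.norm(v) + 1e-9)                # unit-normalize
```

With the sentence-transformers package installed, the equivalent real call would be `SentenceTransformer("all-MiniLM-L6-v2").encode(sentence)`.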

Linking Sentences to Conversation Trees
To mine the discernible relationship between sentences and social post trees, we design a sentence-tree linking mechanism between the sentence set {s_i}_{i=1}^n and the tree set {t_j}_{j=1}^m, both associated with A. There are different possible designs for creating links between them, such as 1) using a fully connected graph that links every s_i and t_j regardless of their similarity, followed by a model that learns the closeness of each connection; or 2) creating a link according to the similarity between s_i and t_j based on a threshold. Our preliminary experiments indicate that the design of this interaction indeed influences performance. Given that the number of sentences and trees associated with articles varies significantly, we opt for the threshold-based approach to avoid the overhead of computing on a fully connected graph. We begin by modeling post interactions in each tree to learn its representation before linking sentences and trees.
Post Interaction Embedding: To represent a tree accurately, we use a generic kernel-based graph model KernelGAT (Liu et al., 2020) to measure the importance of each post in a tree by modeling the interactions between each post and its neighboring posts.
We first construct a translation matrix M to represent the similarity of each post with its neighbors, where each entry M_pq ∈ M is the cosine similarity between post p and its neighbor q:

M_pq = cos(p⃗, q⃗), for q ∈ N(p),

where N(p) is the set of neighboring nodes of p.
We then define a kernel function ⃗G(M_p) to represent the features capturing the interactions between p and its neighbors based on K Gaussian kernels (Keerthi and Lin, 2003), which yields:

⃗G(M_p) = [G_1(M_p), ⋯, G_K(M_p)], with G_k(M_p) = Σ_{q∈N(p)} exp( −(M_pq − μ_k)² / (2σ_k²) ),

where μ_k and σ_k are parameters in the k-th kernel that capture node interactions at different levels (Xiong et al., 2017). Note that if σ_k → ∞, the kernel function degenerates to mean pooling. Then, we update the representation p̂ of each post p by considering all its neighbors with their identified importance:

p̂ = Σ_{q∈N(p)} γ_q · q⃗, with γ_q = softmax( W_1 ⃗G(M_p) + b_1 )[q],

where γ_q is a scalar representing the post-level attention coefficient between p and its neighbor q, W_1 and b_1 are trainable parameters used to transform the K kernels into a vector over all nodes in the tree, [q] takes the value corresponding to post q, and p⃗ and q⃗ are initialized with the SBERT post embeddings.

Linking Sentences and Trees. With the obtained interaction-enhanced post representations, we use a mean pooling function to represent a conversation tree t_j, i.e., t⃗_j = mean({p̂ : p ∈ t_j}). For each sentence-tree pair (s_i, t_j) associated with an article, we then create a link between them if the cosine similarity of s⃗_i and t⃗_j is above a global threshold τ, where τ is set to the median of the global range of similarity scores between sentences and trees. We fix this setting empirically.
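The post-interaction and linking steps can be sketched numerically as below. This is a simplified per-edge variant of the kernel attention; the kernel parameters `mus`/`sigmas`, the weights `w1`/`b1`, and the threshold `tau` are random or arbitrary stand-ins for trained values.

```python
# Sketch of kernel-based post interaction and threshold-based sentence-tree linking.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def gaussian_kernels(sim, mus, sigmas):
    """K kernel activations for one post-post similarity value M_pq."""
    return np.exp(-(sim - mus) ** 2 / (2 * sigmas ** 2))

def update_post(p, posts, neighbors, mus, sigmas, w1, b1):
    """Aggregate p's neighbors with kernel-derived attention (gamma_q)."""
    nbrs = neighbors[p]
    scores = np.array([w1 @ gaussian_kernels(cosine(posts[p], posts[q]), mus, sigmas) + b1
                       for q in nbrs])
    gamma = np.exp(scores - scores.max())
    gamma /= gamma.sum()                                   # softmax over neighbors
    return sum(g * posts[q] for g, q in zip(gamma, nbrs))

def tree_rep(posts, neighbors, mus, sigmas, w1, b1):
    """Mean-pool the interaction-enhanced posts into one tree vector t_j."""
    return np.mean([update_post(p, posts, neighbors, mus, sigmas, w1, b1)
                    for p in range(len(posts))], axis=0)

def link(sentence_vec, tree_vecs, tau=0.5):
    """Link sentence s_i to every tree whose cosine similarity exceeds tau."""
    return [j for j, t in enumerate(tree_vecs) if cosine(sentence_vec, t) > tau]

# Toy example: one 3-post tree with 4-dim embeddings.
posts = rng.normal(size=(3, 4))
neighbors = [[1, 2], [0, 2], [0, 1]]       # reciprocal reply structure
mus, sigmas = np.linspace(-0.9, 0.9, 5), np.full(5, 0.1)
w1, b1 = rng.normal(size=5), 0.0
t_vec = tree_rep(posts, neighbors, mus, sigmas, w1, b1)
```

The threshold check in `link` is the cheap alternative to scoring every sentence-tree pair with a learned model, matching the cost argument made above.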

Detecting Misinforming Sentences
To spot misinforming sentences based on the graph with the sentence-tree links, we propose a graph attention model to detect whether a sentence s_i contains misinformation. Each sentence can be linked to multiple conversation trees and vice versa. In Figure 1, for example, two trees t_1 and t_3 are linked to s_3, where t_1 provides more specific evidence (e.g., the correct reward amount and number of experimental days) indicating that s_3 is misinforming, while t_3 merely implies that the sentence is not credible without providing specific clues. Hence, we design an attention mechanism to update the representation of each sentence by considering the importance of all its corresponding trees.
More specifically, let T_i denote the set of trees linked to s_i. We aggregate the representations of the corresponding trees according to their attention weights, and then update the sentence representation:

ŝ_i = s⃗_i ⊕ Σ_{t_j∈T_i} β_{i,j} · t⃗_j, with β_{i,j} = softmax_j( s⃗_i · t⃗_j ),

where ŝ_i denotes the socially contextualized representation of s_i, β_{i,j} is the importance of t_j ∈ T_i with respect to s_i, and ⊕ denotes the concatenation operation.
We then use a fully-connected softmax layer to predict the probability of s_i containing misinformation based on its SBERT embedding s⃗_i and its socially contextualized embedding ŝ_i:

p̂_i = softmax( W_3 · tanh( W_2 ŝ_i + b_2 ) ),

where W_2, W_3 and b_2 are trainable parameters and p̂_i is the class probability distribution of s_i, given that the bag-level class labels are fake and real, based on MIL (Foulds and Frank, 2010; Angelidis and Lapata, 2018).
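A numerical sketch of sentence contextualization and classification follows. The dot-product attention and the two-layer classifier are plausible instantiations of the components described above, with illustrative dimensions and untrained (here, zero) weights:

```python
# Sketch: attention over linked trees (beta_{i,j}) and sentence classification.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def contextualize(s_vec, linked_trees):
    """Return s_i (+) sum_j beta_{i,j} t_j, the socially contextualized sentence."""
    beta = softmax(np.array([s_vec @ t for t in linked_trees]))
    context = sum(b * t for b, t in zip(beta, linked_trees))
    return np.concatenate([s_vec, context])

def sentence_prob(s_hat, W2, b2, W3):
    """Class distribution p_i over {real, fake} for one sentence."""
    return softmax(W3 @ np.tanh(W2 @ s_hat + b2))
```

With untrained zero weights the classifier is uniform over the two classes; training under the bag-level loss is what sharpens these distributions.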

Inferring Article Veracity
We could simply predict an article as fake if at least one misinforming sentence is detected, which conforms to the original threshold-based MIL assumption. However, this assumption is overly strong because there can be inaccuracies in the sentence-level predictions. Based on the weighted collective MIL assumption (Pappas and Popescu-Belis, 2017), we design a context-based attention mechanism to bridge the inconsistency between sentence- and article-level predictions. Specifically, we first learn a global representation of the article using a pre-trained transformer (Grail et al., 2021):

T̂ = Transformer( T⃗, ŝ_1, ⋯, ŝ_n ),

where T⃗ is the initial SBERT embedding of the article title. We then adopt an attention mechanism to measure the importance of sentences w.r.t. the article veracity prediction, which yields:

α_i = softmax_i( T̂ · ŝ_i ), ŷ = Σ_{i=1}^n α_i · p̂_i,

where α_i denotes the attention weight of ŝ_i relative to the title representation T̂, and ŷ is the class probability distribution of A being fake or real.
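The weighted collective aggregation can be sketched as title-guided attention over the sentence-level distributions (untrained, illustrative weights):

```python
# Sketch: article veracity as an attention-weighted mixture of sentence predictions.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def article_prob(title_rep, sent_reps, sent_probs):
    """y_hat = sum_i alpha_i * p_i, with alpha_i from title-sentence attention."""
    alpha = softmax(np.array([title_rep @ s for s in sent_reps]))
    return sum(a * p for a, p in zip(alpha, sent_probs))
```

Unlike the threshold-based rule, a single sentence wrongly predicted as fake cannot flip the article label on its own: if its attention weight is low, the aggregate can still favor the true class.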

Model Training
Intuitively, the more similar two sentences are, the more similar their corresponding predictions should be. We define the following loss function, which considers pairwise consistency between sentence representations and predictions, with only article-level ground truth:

L = − Σ_A y_A log ŷ_A + λ Σ_{i<j} C( ŝ_i, ŝ_j, p̂_i, p̂_j ), with C( ŝ_i, ŝ_j, p̂_i, p̂_j ) = || sim(ŝ_i, ŝ_j) − sim(p̂_i, p̂_j) ||²_2,

where C(.) ∈ [0, 1] measures the consistency between pairwise sentence similarity (i.e., over ŝ_i and ŝ_j) and prediction similarity (i.e., over p̂_i and p̂_j), y_A and ŷ_A denote respectively the ground-truth and predicted class probability distributions of A, ||.||²_2 is an efficient kernel based on the L2 norm (Luo et al., 2016) serving as a non-negative penalty function, and λ is the trade-off coefficient.
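One plausible instantiation of this loss, using cosine similarity for both the representation pairs and the prediction pairs (the exact form of sim(.) is an assumption, and `lam` stands for the trade-off λ), can be written as:

```python
# Sketch of the training loss: NLL on the article label plus a pairwise
# consistency penalty over sentence pairs.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def wsdms_loss(y_true, y_hat, sent_reps, sent_probs, lam=0.1):
    nll = -float(np.sum(y_true * np.log(y_hat + 1e-9)))
    penalty = 0.0
    for i in range(len(sent_reps)):
        for j in range(i + 1, len(sent_reps)):
            # penalize disagreement between representation and prediction similarity
            penalty += (cosine(sent_reps[i], sent_reps[j])
                        - cosine(sent_probs[i], sent_probs[j])) ** 2
    return nll + lam * penalty
```

The penalty term is zero when similar sentence representations receive similar predictions, and grows as their predictions diverge, which is exactly the consistency intuition stated above.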
Experiments and Results

Datasets and Setup
We employ two public real-world datasets, PolitiFact and GossipCop (Shu et al., 2020), related to political and entertainment fake news respectively, where the relevant social conversations are collected from Twitter. We also construct an open-domain fake news dataset, BuzzNews, by extending BuzzFeed (Tandoc Jr, 2018), for which we gather social conversations about the articles via the Twitter API. We recruited three annotators to label the misinforming sentences of the articles in the test sets of the three datasets. We trained the annotators with a unified set of annotation rules based on the detailed guides of several fact-checking websites such as snopes.com and politifact.com, where specific rationales on how each claim was judged are provided. We then take a majority vote to determine the label of each sentence; the inter-annotator agreement is 0.793. Table 1 shows the statistics of the three datasets.
We use precision (Pre), recall (Rec), F1, and accuracy (Acc) as evaluation metrics. All the baselines and our methods are implemented with PyTorch (Paszke et al., 2019) (see Appendix A.2 for implementation details). Table 2 shows the article-level results, from which we observe the following:

• In the second group of structured models, the graph-based models GCAN and Bi-GCN mainly rely on the propagation structures of fake news and perform comparably with KAN, which uses entities and their contexts extracted from the social media content, suggesting that social conversations embed a good amount of human wisdom useful for detecting fake news. SureFact performs best among all the baselines because it groups social posts into the topics discovered from article content, suggesting that creating a connection between them at the topic level is helpful.
• WSDMS consistently outperforms the best baseline SureFact on the three datasets, demonstrating that our explicit and fine-grained linking between sentences and social context is superior, and that sentence-level detection can help article veracity prediction. In addition, WSDMS does not sacrifice performance compared to WSDMS-FC, which uses full connections between sentences and trees, while significantly reducing training time from 4.5 to 2 hours. This indicates that our sentence-tree linking method is cost-effective.

Misinforming Sentence Detection
For misinforming sentence detection, the baselines are deployed by treating each sentence as a claim and the conversation trees linked to the sentence (see Section 4.2) as the source of evidence.
SureFact is excluded as it cannot classify specific sentences. More details are in Appendix A.1. Since all baselines are supervised methods that need sentence labels for training, we split the three sentence-annotated test sets into train and test parts with a 70%-30% ratio. Given the large number of sentences in the original test sets (6,300/6,480/2,480), this yields three workable sentence-level training and test sets. We then train all models on the same training data. This setup intentionally disadvantages WSDMS, since it can only use article labels. Therefore, we also report the performance of WSDMS (o) trained on the original training sets without sentence labels, which the baselines cannot take advantage of. Table 3 conveys the following findings:

• Similar to article-level prediction, dEFEND outperforms DeClarE and HAN because it effectively models the correlations between sentences and social context via its co-attention mechanism.
BERTweet is more advantageous at representing social media posts, demonstrating better performance at the sentence level.
• Among the structured models, KAN performs best because it incorporates both content and propagation information and has a co-attention mechanism between sentence and entity contexts extracted from social conversations.This may enhance sentence representation better than Bi-GCN and GCAN that can only utilize propagation-based features.
• Weakly supervised WSDMS performs better than DeClarE and comparably with HAN, both of which are fully supervised. This is because WSDMS considers the propagation structure while DeClarE and HAN can only leverage unstructured posts. The overall performance of WSDMS is somewhat compromised by weak supervision. However, when trained on the original datasets, WSDMS (o) can exploit the large volume of article labels to beat all baselines, which cannot be weakly supervised. To reach the same level of performance, the baselines would need massive sentence annotations that are infeasible to obtain. Again, it performs comparably with WSDMS-FC (o), implying that our sentence-tree linking preserves the information vital for spotting misinforming sentences efficiently.
• WSDMS effectively enhances sentence-level performance by utilizing publicly accessible article-level labels.To achieve comparable performance, baseline systems generally require massive fine-grained sentence-level annotations.
Consequently, sentence-level prediction remains a pivotal contribution of our study.

Ablation Study
We conduct an ablation study by removing or altering individual components. Figure 3 shows that most of the ablations degrade performance. w/o tree implies that using article content alone is insufficient for the task. w/o kernel supports that embedding post interactions with kernels helps post and tree representation; the experiment in Appendix A.3 also echoes the advantages of the kernel. Title as sent suggests that the news title may attract the most attention from the trees, which can hurt the representation of other sentences, so the title should be treated specially. w/o wc indicates that adopting the weighted collective MIL assumption is better. w/o NLL confirms that our designed loss is necessary and effective. Only w/o τ is marginally better, owing to fully connecting sentences and trees, which is however more costly and less efficient.

Case Study
To gain deeper insight, we visualize in Figure 4 two news articles checked by PolitiFact, which WSDMS correctly predicts as fake (left) and true (right). The spotted misinforming and true sentences are also shown. We observe that: 1) WSDMS can associate a sentence with multiple trees using attention weights (arrow lines indicate high-weight trees) to help determine its veracity. 2) The posts in the conversations provide useful clues about how credible each sentence is by aggregating the collective opinions of users in the trees. 3) The article-level veracity is not determined simply by whether a misinforming sentence is detected, because that prediction might be inaccurate. For example, if s_4 were incorrectly predicted as fake, the article would also be determined as fake under standard MIL. Our approach increases the chance of correcting such an error by giving higher attention weights to other sentences, which may indicate that the article is overall more likely to be true. Thus, the attention weights of sentences can collectively aggregate sentence-level predictions to improve the final prediction.

User Study Experiment
We conduct a user study to evaluate the quality of the model output. We sample 120 articles from PolitiFact and present them in two forms: Baseline (article, posts) and WSDMS (article, misinforming sentences, trees). We then ask 6 users to label the articles and report their confidence on a 5-point Likert scale (Joshi et al., 2015); each person is given only one form to avoid cross influence. Table 4 shows that 1) users determine article-level veracity more accurately with WSDMS; 2) users spent 70% less time identifying fake news; and 3) users show higher confidence in the results of WSDMS, suggesting that users tend to be more sure about their decisions when specific misinforming sentences and relevant evidence are provided.

Conclusion and Future Work
We propose a MIL-based model called WSDMS to debunk fake news in a finer-grained manner via weakly supervised detection of misinforming sentences, using only article veracity labels for model training. WSDMS uses attention mechanisms to associate news sentences with their relevant social conversations, identifies misinforming sentences, and determines article veracity by aggregating sentence-level predictions. WSDMS outperforms a set of strong baselines at both the article and sentence levels on three datasets.
In the future, we will incorporate more intersentence features, such as discourse relations, to detect composition-level misinformation.

Limitations
Fake news is one type of misinformation, which also includes disinformation, rumors, and propaganda. WSDMS can be generalized to detect these various forms of misinformation. However, we simplified some techniques in this paper. For example, the representation of conversation trees could be learned by considering the direction of message propagation and combining top-down and bottom-up propagation trees. In addition, WSDMS cannot deal with more complex situations in which multiple true sentences combine to constitute logical falsehoods or inconsistencies. This could be strengthened by incorporating sentence-level relations, such as discourse information, into the model. Despite this limitation, WSDMS encounters no such situation in the three datasets used, according to our observation. Nevertheless, this suggests that existing fake news datasets and detection models lack consideration of discourse-level fakes or logically inconsistent compositions, which are presumably not uncommon in real-world fake news. Lastly, we only use social context data collected from Twitter, which might carry platform bias. To mitigate this issue, we could introduce additional data from other social media platforms, such as BuzzFace (Santia and Williams, 2018) from Facebook.


Ethics Statement

A model like WSDMS could potentially be misused to label true information as misinformation or vice versa. In light of this concern, we have taken precautions to carefully assess the model we developed and restrict its distribution to the general public. We are committed to designing a responsible policy regarding the dissemination of code and datasets within the research community, and to ensuring that they are used responsibly, in a manner that aligns with ethical standards and societal well-being.

A Appendix
A.1 Detailed Baseline Settings

Existing fake news detection and rumor detection methods predominately focus on coarse-level classification of an entire article or claim, respectively, while our goals include identifying misinforming sentences within an article at a fine-grained level. When comparing with baselines that are originally designed to classify either a news article or a claim, the required (and available) inputs may differ from our study. Therefore, we specifically customize the data inputs to make the baselines applicable to the article-level and sentence-level detection tasks while keeping the implementation of the baseline models intact.
In this section, we will provide more details about baseline models and the information they used.
A.1.1 Article-level Task

1) DeClarE (Popat et al., 2018) is designed to classify a claim with relevant news content obtained from external sources, such as web search results, as evidence. The claims it handles are short, with many relevant articles providing evidence. In our fake news detection datasets, however, what is available is a single long-form article, which is the target to be checked, and the relevant social conversation trees providing external assistance. Since DeClarE can only accept short claims as input, we use the title of the news article as the input claim and the posts in conversations as evidence.
2) HAN (Ma et al., 2019), similarly to DeClarE, targets the claim verification task, with the provided evidence set collected from multiple documents relevant to the claim. In our case, the article text is the target to be verified, while HAN assumes a short claim as the target, so the article cannot be fed into HAN directly. We therefore use the news title as the input claim and the posts in conversations as evidence.
3) dEFEND (Shu et al., 2019a) is a fake news detection model that uses the news article as the target of verification and the related user comments as evidence. This is mostly consistent with our setting and thus requires no special treatment.

4) BERTweet (Nguyen et al., 2020) is a pre-trained language model trained on a large corpus of English posts. It is designed to encode short text. To apply BERTweet to article-level verification, we use the posts in the conversation trees to fine-tune the model, and then treat the news title as the claim to be verified, because BERTweet cannot accept the full-length article as input.

Figure 1: A fake news article together with its relevant social context information, where the sentences containing misinformation (i.e., s_3 and s_5) are in orange and the posts implying the misinforming sentences are in red.

Figure 2: The architecture of our WSDMS model. t⃗_i denotes the representation of tree t_i after kernel-based interaction of post information among tree nodes.

Figure 4: A case study illustrating the prediction.

Table 1: Statistics of the datasets used.

Table 2: Article-level fake news detection results.

Table 4: User study results on model output quality.