The Authors Matter: Understanding and Mitigating Implicit Bias in Deep Text Classification

It is evident that deep text classification models trained on human data can be biased. In particular, they produce biased outcomes for texts that explicitly include identity terms of certain demographic groups. We refer to this type of bias as explicit bias, which has been extensively studied. However, deep text classification models can also produce biased outcomes for texts written by authors of certain demographic groups. We refer to such bias as implicit bias, of which we still have a rather limited understanding. In this paper, we first demonstrate that implicit bias exists in different text classification tasks for different demographic groups. Then, we build a learning-based interpretation method to deepen our knowledge of implicit bias. Specifically, we verify that classifiers learn to make predictions based on language features that are related to the demographic attributes of the authors. Next, we propose a framework, Debiased-TC, to train deep text classifiers that base their predictions on the right features and consequently mitigate implicit bias. We conduct extensive experiments on three real-world datasets. The results show that the text classification models trained under our proposed framework outperform traditional models significantly in terms of fairness, and also slightly in terms of classification performance.


Introduction
Many recent studies have suggested that machine learning algorithms can learn social prejudices from data produced by humans and thereby show systematic performance bias towards specific demographic groups or individuals (Mehrabi et al., 2019; Blodgett et al., 2020; Shah et al., 2020). As one machine learning application, text classification has been shown to be discriminatory towards certain groups of people (Dixon et al., 2018; Borkan et al., 2019). Text classification applications such as sentiment analysis and hate speech detection are common and widely used in our daily lives. If a biased hate speech detection model is deployed by a social media service provider to filter users' comments, the comments related to different demographic groups have uneven chances of being recognized and removed. Such a case causes unfairness and degrades the experience of the affected users. Thus, it is highly desirable to mitigate bias in text classification.
The majority of existing studies on bias and fairness in text classification have focused on bias towards the individuals mentioned in the text content. For example, in (Dixon et al., 2018; Park et al., 2018; Zhang et al., 2020), it is investigated how text classification models perform unfairly on texts containing demographic identity terms such as "gay" and "muslim". In such scenarios, the demographic attributes of the individuals subject to bias explicitly appear in the text. In this work, we refer to this kind of bias as explicit bias. Bias in texts, however, can be reflected more subtly and insidiously. While a text may not contain any reference to a specific group or individual, its content can still reveal the demographic information of the author. As shown in (Coulmas, 2013; Preoţiuc-Pietro and Ungar, 2018), the language style (e.g., wording and tone) of a text can be highly correlated with its author's demographic attributes (e.g., age, gender, and race). We find that a text classifier can learn to associate the content with demographic information and consequently make unfair decisions towards certain groups. We refer to such bias as implicit bias. Table 1 demonstrates an example of implicit bias with two short texts, the first written by a white American and the second by an African American. The task is to predict the sentiment of a text with a convolutional neural network (CNN) model. Words with a red background are those the model finds salient for its prediction; the darker the color, the more salient the word. The words "yup" and "goin" in the second text are commonly used by African Americans (Liu et al., 2020a) and are irrelevant to the sentiment. However, the CNN model has relied on them and consequently predicted a positive text to be negative.
In this work, we aim to understand and mitigate implicit bias in deep text classification models.
One key source of bias is the imbalance of training data (Dixon et al., 2018; Park et al., 2018). Thus, existing debiasing methods mainly focus on balancing the training data, such as adding new training data (Dixon et al., 2018) and augmenting data based on identity-term swaps (Park et al., 2018). However, these methods cannot be directly applied to mitigate implicit bias. Obtaining new texts from authors of various demographic groups is very expensive, as it requires heavy human labor. Meanwhile, given that there is no explicit demographic information in texts, identity-term swap data augmentation is not applicable. Thus, we propose to enhance deep text classification models to mitigate implicit bias in the training process. To achieve this goal, we face two major challenges. First, to mitigate implicit bias, we have to understand how deep models behave, e.g., how they correlate implicit features in text with demographic attributes and how they make biased predictions. Second, we need to design new mechanisms that leverage these understandings to mitigate the implicit bias in deep text classifiers.
To address these challenges, we first propose an interpretation method that sheds light on the formation mechanism of implicit bias in deep text classification models. We show that implicit bias arises because the models make predictions based on incorrect language features in texts. Second, based on this finding, we propose a novel framework, Debiased-TC (Debiased Text Classification), to mitigate the implicit bias of deep text classifiers. More specifically, we equip the deep classifiers with an additional saliency selection layer that first determines the correct language features on which the model should base its predictions. We also propose an optimization method to train the classifiers with the saliency selection layer. Note that both the proposed interpretation method and the learning framework are model-agnostic: they can be applied to any deep text classifier. We evaluate the framework with two popular deep text classification models across various text classification tasks on three public datasets. The experimental results demonstrate that our method significantly mitigates the implicit bias while maintaining or even improving prediction performance.

Preliminary Study
In this section, we perform a preliminary study to validate the existence of implicit bias in deep text classification models. We first introduce the data and text classification tasks, and then present the empirical results.

Data and Tasks
In this preliminary study, we investigate different text classification tasks and various demographic groups to validate the implicit bias. We use three datasets, including the DIAL and PAN16 datasets processed by (Elazar and Goldberg, 2018) and the Multilingual Twitter Corpus (MTC) introduced in (Huang et al., 2020).
The DIAL dataset contains dialectal texts collected from Twitter. Each tweet's text is associated with the race of the author as the demographic attribute, denoted as "white" and "black", respectively. This dataset is annotated for two classification tasks: sentiment analysis and mention detection. The sentiment analysis task aims to categorize a text as "happy" or "sad". The mention detection task tries to determine whether a tweet mentions another user, which can also be viewed as distinguishing conversational tweets from non-conversational ones.
The PAN16 dataset consists of tweets. For each tweet, the age and gender of its author have been manually labelled. The demographic attribute age has two categories, (18-34) and (≥35), and gender has two categories, male and female. This dataset is also annotated for the mention detection task described above.
The MTC dataset contains multilingual tweets for the hate speech detection task. Each tweet is annotated as "hate speech" or "non hate speech" and associated with four demographic attributes of its author: race, gender, age, and country. We only use the English corpus with the attribute race. In this dataset, the attribute race has two categories, i.e., white and nonwhite.
More statistical information on these three datasets and the links to downloadable versions of the data can be found in Appendix A.

Empirical study
In this subsection, we empirically study whether text classification models make predictions that depend on the demographic attributes of the authors of the texts. The explicit bias in text classification tasks stems from the imbalance of training data (Dixon et al., 2018; Park et al., 2018). For example, when there are more negative examples from one group in the training data, the model learns to correlate that group with the negative label, which results in bias. Inspired by this observation, to validate the existence of implicit bias, we investigate whether an imbalance of training data in terms of the demographic attributes of the authors can lead to biased predictions. To this end, we consider the following setting: (1) the training data has an equal number of positive and negative examples; and (2) the positive and negative examples in the training data are imbalanced across the demographic groups of the authors. Intuitively, if the predictions are independent of the demographic attributes of authors, the model should still perform similarly for the different groups.
For each task and demographic attribute of authors, we consider two labels (i.e., positive and negative) and two demographic groups (i.e., Group I and Group II). For each dataset, we follow the aforementioned setting to build a training set. We make the training set overall balanced in terms of labels and demographic groups. That is, we set the overall ratio of positive to negative examples to 1:1, and the overall ratio of examples from Group I to Group II to 1:1 as well. Meanwhile, we make the data within each group imbalanced. In particular, for Group I, we set the ratio of its positive to negative examples to 4:1, which automatically sets the ratio to 1:4 for Group II. We refer to the ratio of positive to negative examples in Group I as the "balance rate". We train a convolutional neural network (CNN) text classifier as a representative model on the training set and evaluate it on the test set. We use the false positive/negative rates (Dixon et al., 2018) and the demographic parity rate (a.k.a. the positive outcome rate, the probability of the model predicting a positive outcome for one group) (Dwork et al., 2012; Kusner et al., 2017) to evaluate the fairness of the classification models.
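As a concrete illustration of this setup, the sampling procedure can be sketched as follows. The function name, the `(text, label, group)` example format, and interpreting the balance rate as the proportion of positive examples within Group I (0.8 for a 4:1 ratio) are our assumptions for illustration:

```python
import random

def build_imbalanced_split(examples, n_total, balance_rate=0.8, seed=0):
    """Sample a training set that is balanced overall (labels 1:1,
    groups 1:1) but imbalanced within each group.

    `examples` is a list of (text, label, group) tuples with label in
    {0, 1} (1 = positive) and group in {"I", "II"}.  With
    balance_rate = 0.8, Group I gets a 4:1 positive-to-negative ratio
    and Group II the mirrored 1:4 ratio, as in the preliminary study.
    """
    rng = random.Random(seed)
    per_group = n_total // 2
    n_major = int(per_group * balance_rate)
    quota = {
        ("I", 1): n_major,               # majority: positives in Group I
        ("I", 0): per_group - n_major,
        ("II", 1): per_group - n_major,  # mirrored for Group II
        ("II", 0): n_major,
    }
    pools = {key: [] for key in quota}
    for ex in examples:
        key = (ex[2], ex[1])
        if key in pools:
            pools[key].append(ex)
    sample = []
    for key, n in quota.items():
        sample.extend(rng.sample(pools[key], n))
    rng.shuffle(sample)
    return sample
```

By construction the overall label and group ratios stay at 1:1 while each group is skewed, so any per-group performance gap can be attributed to the within-group imbalance.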
The results are shown in Table 2. For the demographic attribute race, Group I/Group II stands for white/black in the DIAL dataset and white/nonwhite in the MTC dataset. For gender and age, Group I/Group II stands for male/female and the age ranges (18-34)/(≥35), respectively. From the table, we observe that across different tasks and demographic attributes of authors, the model shows significant bias with the same pattern. In all cases, the demographic group with more positive examples (Group I) always gets a higher false positive rate, a lower false negative rate, and a higher demographic parity rate than the other group. This demonstrates that imbalanced data can cause implicit bias, and the predictions are not independent of the demographic attributes of authors. Since the text itself does not explicitly contain any demographic information, the model could learn to recognize the demographic attributes of authors from implicit features such as language styles and associate them with a biased outcome. Next, we investigate how implicit bias forms and then propose Debiased-TC to mitigate it.
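The per-group fairness metrics used above can be computed as in the following sketch; the helper name `group_fairness_metrics` is ours, and the definitions follow the cited works (false positive/negative rates per group, and the demographic parity rate as the positive outcome rate):

```python
def group_fairness_metrics(y_true, y_pred, groups):
    """Per-group false positive rate, false negative rate, and
    demographic parity rate (positive outcome rate).  Labels are 0/1;
    `groups` assigns each example to a demographic group."""
    metrics = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        tp = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 1)
        fp = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 1)
        tn = sum(1 for i in idx if y_true[i] == 0 and y_pred[i] == 0)
        fn = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
        metrics[g] = {
            "fpr": fp / max(fp + tn, 1),            # false positive rate
            "fnr": fn / max(fn + tp, 1),            # false negative rate
            "dp_rate": (tp + fp) / len(idx),        # demographic parity rate
        }
    return metrics
```

The gap between the two groups' values on each metric (the equality and demographic parity differences reported later) is then just the absolute difference of the per-group entries.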

Understanding Implicit Bias
In this section, we aim to understand the possible underlying formation mechanism of implicit bias. Our intuition is that when a training set for sentiment analysis has more positive examples from white authors and more negative examples from black authors, a classification model trained on such a dataset may learn a "shortcut" (Mahabadi et al., 2020) to indiscriminately associate the language style features of white people with the positive sentiment and those of black people with the negative sentiment. In other words, the model does not use the correct language features (e.g., emotional words) to make the prediction. Thus, we attempt to examine the following hypothesis: a deep text classification model presents implicit bias because it makes predictions based on language features that should be irrelevant to the classification task but are correlated with a certain demographic group of authors. To verify this hypothesis, we first propose an interpretation method to detect the salient words a text classification model relies on to make a prediction. The interpretation method enables us to check the overlap between the salient words and the words related to the authors' demographic attributes. Consequently, it allows us to understand the relationship between such overlap and the model's implicit bias.

An Interpretation Method
We follow the idea of the learning-based interpretation method L2X (Chen et al., 2018) to train an explainer that interprets a given model. We choose L2X for two reasons: 1) as a learning-based explainer, it learns to globally explain the behavior of a model, instead of explaining one instance at a time; and 2) the explainer can be integrated into our debiasing framework to mitigate implicit bias in an end-to-end manner, as introduced in Section 4.
A binary text classification model M : X → Y maps an input text X = (x_1, x_2, . . . , x_n) to a binary label Y. For a certain model M, we seek to specify the contribution of each word in X for M to make the prediction Y. The contributions can be denoted as a saliency distribution S = (s_1, s_2, . . . , s_n), where s_i is the saliency score of the word x_i and Σ_{i=1}^{n} s_i = 1. Given a model M, we train an explainer E_M : X → S to estimate the saliency distribution S of an input text X.
The explainer is trained by maximizing I(X_S, Y), the mutual information (Cover, 1999) between the response variable Y and the selected feature X_S of X under the saliency distribution S. The selected feature X_S = X ⊙ S = (s_1 · x_1, s_2 · x_2, . . . , s_n · x_n)¹ is calculated as the element-wise product between X and S. In our implementation, we parametrize the explainer by a bi-directional RNN followed by a linear layer and a Softmax layer. More details about the optimization of the explainer can be found in Appendix B.
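A minimal PyTorch sketch of such an explainer follows. The paper only specifies a bi-directional RNN, a linear layer, and a Softmax layer; the GRU cell and hidden size here are our assumptions:

```python
import torch
import torch.nn as nn

class Explainer(nn.Module):
    """Maps an embedded text X = (x_1, ..., x_n) to a saliency
    distribution S = (s_1, ..., s_n) that sums to one over positions,
    and returns the selected feature X_S = X (element-wise) S."""

    def __init__(self, embed_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.score = nn.Linear(2 * hidden_dim, 1)

    def forward(self, x):                   # x: (batch, n, embed_dim)
        h, _ = self.rnn(x)                  # (batch, n, 2*hidden_dim)
        logits = self.score(h).squeeze(-1)  # (batch, n)
        s = torch.softmax(logits, dim=-1)   # saliency distribution S
        x_s = x * s.unsqueeze(-1)           # selected feature X_S
        return s, x_s
```

The Softmax over positions enforces the constraint that the saliency scores sum to one for each input text.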

Saliency Correlation Measurement
In this work, we assume that the text classification task is totally independent of the demographic attribute of the author of the text. In other words, language features that reflect the author's demographic information should not be taken as evidence for the main task. Thus, we propose to understand the implicit bias of a deep text classification model by examining the overlap between the salient words for the main task and the words correlated with the demographic attributes of the author.

¹Without confusion, we use x_i to denote both a word and its word embedding vector.

With the interpretation model, we can estimate the saliency distributions of the input words for the classification task and the demographic attribute prediction task, respectively, and then check their overlap. As shown in Figure 1, we train two models M_Y and M_Z with the same architecture for the former and the latter tasks, respectively. Then, two corresponding explainers E_Y and E_Z are trained for them. Thus, given an input text X, the two explainers estimate the saliency distributions S_Y and S_Z for the two tasks, respectively. We use the Jensen-Shannon (JS) divergence JS(S_Y || S_Z) to measure the overlap between the language features that the two tasks rely on to make the predictions of Y and Z.
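The JS divergence between two saliency distributions can be computed as in the following sketch (a standard formulation, not code from the paper; the small `eps` guards the logarithm against zero entries):

```python
import math

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two saliency distributions,
    given as sequences of non-negative weights that each sum to one.
    JS(p || q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), m = (p + q) / 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log((ai + eps) / (bi + eps))
                   for ai, bi in zip(a, b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

The divergence is 0 for identical distributions and reaches its maximum of ln 2 (in nats) when the two distributions have disjoint support, i.e., when the two tasks attend to entirely different words.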

Empirical Analysis
In this subsection, we present experiments to verify our hypothesis on the formation of implicit bias. Following the experimental settings in Section 2.2, we vary the "balance rate" of the training data and observe how the saliency correlation changes. We use CNN text classifiers (see Section 5.2 for details) for both M_Y and M_Z. In Figure 2, we show how the average JS divergence and the demographic parity difference (DPD) vary with the balance rate. DPD is the absolute value of the difference between the demographic parity rates of the two groups. We only report the results on the DIAL and PAN16 datasets with DPD as the fairness metric, since the results are similar in the other settings. For each task and each demographic attribute, the DPD is small when the training data are balanced and becomes large when the data are imbalanced. In contrast, the JS divergence is large for balanced data and small for imbalanced data. A larger DPD indicates stronger implicit bias, and a smaller JS divergence indicates a stronger overlap between the saliency distributions for the two tasks. Thus, these observations suggest that when the training data are imbalanced, the text classifier tends to use language features related to the demographic attributes of authors to make the prediction.

The Bias Mitigation Framework
In the previous section, we showed that a model with implicit bias tends to utilize features related to the demographic attributes of authors to make predictions, especially when the training data are imbalanced in terms of these attributes. One potential solution is to balance the training data by augmenting more examples from underrepresented groups. However, collecting new data from authors of different demographics is expensive. Thus, to mitigate the implicit bias, we propose the novel framework Debiased-TC, which mitigates implicit bias by automatically correcting the model's selection of input features. In this section, we introduce the proposed framework and the corresponding optimization method.

Debiased Text Classification Model
An illustration of Debiased-TC is shown in Figure 3. Similar to the explainer in the interpretation model, we equip the base model with a corrector layer C after the input layer. The corrector C : X → S learns to correct the model's feature selection. It first maps an input text X = (x_1, x_2, . . . , x_n) to a saliency distribution S = (s_1, s_2, . . . , s_n), which is expected to assign high scores to words related to the main task and low scores to words related to the demographic attributes of authors. Then, it weights the input features by the saliency scores, computing X_S = X ⊙ S, which is fed into the classification model M_Y for prediction.
To train a corrector to achieve this goal, we adopt the idea of adversarial training. More specifically, in addition to the main classifier M_Y, we introduce an adversarial classifier M_Z, which takes X_S as input and predicts the demographic attribute Z. During adversarial training, the corrector attempts to help M_Y make correct predictions while preventing M_Z from predicting demographic attributes. To make this feasible, we use the gradient reversal technique (Ganin and Lempitsky, 2015), where we add a gradient-reversal layer between the weighted inputs X_S and the adversarial classifier M_Z. The gradient-reversal layer has no effect on its downstream component (i.e., the adversarial classifier M_Z). However, during backpropagation, the gradients that pass through this layer to its upstream component (i.e., the corrector C) are reversed. As a result, the corrector C receives opposite gradients from M_Z. The outputs of M_Y and M_Z serve as signals to train the corrector so that it upweights the words correlated with the main task label Y and downweights the words correlated with the demographic attribute Z. We set the adversarial classifier M_Z to have the same architecture as the main classifier M_Y. The corrector C has the same architecture as the explainer introduced in Section 3.
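A common PyTorch implementation of the gradient-reversal layer, following Ganin and Lempitsky (2015), is sketched below; the class and helper names are ours:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies incoming gradients by
    -lambda in the backward pass, so the upstream corrector receives
    opposite gradients from the adversarial classifier M_Z."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient; no gradient w.r.t. lambd.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

In the Debiased-TC forward pass, M_Z would then consume `grad_reverse(x_s)` instead of `x_s`, leaving M_Z's own updates untouched while flipping the sign of the signal that reaches the corrector.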

An Optimization Method for Debiased-TC
In this subsection, we discuss the optimization method for the proposed framework. We denote the parameters of M_Y, M_Z, and C as W_Y, W_Z, and Θ, respectively. The optimization task is to jointly optimize the parameters of the classifiers, i.e., W_Y and W_Z, and the parameters of the corrector, i.e., Θ. We can view the optimization as an architecture search problem. Since our debiasing framework is end-to-end and differentiable, we develop an optimization method based on the differentiable architecture search (DARTS) technique (Liu et al., 2018; Zhao et al., 2020). We update W_Y and W_Z by optimizing the training losses L_train^Y and L_train^Z on the training set, and update Θ by optimizing the validation loss L_val on the validation set, through gradient descent. We denote the cross-entropy losses for M_Y and M_Z as L_Y and L_Z, respectively. L_train^Y and L_train^Z are the losses L_Y and L_Z evaluated on the training set, and L_val is the combined loss L = L_Y + L_Z evaluated on the validation set.
The goal of optimizing the corrector is to find optimal parameters Θ* that minimize the validation loss L_val(W_Y*, W_Z*, Θ), where the optimal classifier parameters W_Y* and W_Z* are obtained by minimizing the training losses. This forms a bi-level optimization problem (Maclaurin et al., 2015; Pham et al., 2018), where Θ is the upper-level variable and W_Y and W_Z are the lower-level variables:

min_Θ L_val(W_Y*(Θ), W_Z*(Θ), Θ)
s.t. W_Y*(Θ) = argmin_{W_Y} L_train^Y(W_Y, Θ), W_Z*(Θ) = argmin_{W_Z} L_train^Z(W_Z, Θ).

Optimizing Θ exactly is time-consuming due to the expensive inner optimization of W_Y and W_Z. Therefore, we adopt the same approximation scheme as DARTS:

W_Y*(Θ) ≈ W_Y − ξ∇_{W_Y} L_train^Y(W_Y, Θ), W_Z*(Θ) ≈ W_Z − ξ∇_{W_Z} L_train^Z(W_Z, Θ),

where ξ is the learning rate for updating W_Y and W_Z. This scheme estimates W_Y*(Θ) and W_Z*(Θ) by updating W_Y and W_Z for a single training step, which avoids fully solving the inner optimization W*(Θ) = argmin_W L_train(W, Θ) to convergence. In our implementation, we apply the first-order approximation with ξ = 0, which leads to a further speed-up. Also, since the amount of validation data is limited in our specific experiments, we build an augmented validation set V' = V ∪ T, combining the original validation set V with the training set T, for optimizing Θ. More details of the DARTS-based optimization algorithm are given in Appendix C.
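One alternating update of the first-order (ξ = 0) scheme might look like the following sketch. The toy corrector and linear classifiers are illustrative stand-ins, and the gradient-reversal step in front of M_Z is omitted here for brevity:

```python
import torch

def debiased_tc_step(corrector, clf_y, clf_z, opt_theta, opt_w,
                     val_batch, train_batch, loss_fn):
    """One alternating first-order (xi = 0) update.

    Step 1 updates the corrector parameters Theta on a validation
    batch; step 2 updates the classifier parameters W_Y and W_Z on a
    training batch with the saliency scores held fixed.  Batches are
    (x, y, z) tuples of inputs, task labels, and group labels.
    """
    # Step 1: descend the validation loss w.r.t. Theta only.
    x, y, z = val_batch
    x_s = x * corrector(x)            # weighted input X_S
    loss = loss_fn(clf_y(x_s), y) + loss_fn(clf_z(x_s), z)
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()
    # Step 2: descend the training losses w.r.t. W_Y and W_Z; the
    # saliency scores are detached so Theta is left untouched.
    x, y, z = train_batch
    x_s = x * corrector(x).detach()
    loss = loss_fn(clf_y(x_s), y) + loss_fn(clf_z(x_s), z)
    opt_w.zero_grad()
    loss.backward()
    opt_w.step()
```

Because each optimizer only steps its own parameter group, the two phases cleanly separate the upper-level update of Θ from the lower-level update of W_Y and W_Z.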

Experiment
In this section, we conduct experiments to evaluate our proposed debiasing framework. Through the experiments, we try to answer two questions: 1) Does our framework effectively mitigate the implicit bias in various deep text classification models? and 2) Does our framework maintain the performance of the original models (without debiasing) while reducing the bias?

Baselines
In our experiments, we compare our proposed debiasing framework with two baselines. Since there is no established method for mitigating implicit bias, we adopt two debiasing methods designed for traditional explicit bias and adapt them for implicit bias.
Data Augmentation* (Data Aug) (Dixon et al., 2018). We manually balance the training data of the two demographic groups by adding sufficient negative examples for Group I and positive examples for Group II, so that the ratio of positive to negative training examples for both groups is 1:1. As discussed in the introduction, obtaining additional labeled data from specific authors is very expensive. In this work, we seek to develop a bias mitigation method that requires no extra data. Since Data Aug introduces more training data, it is not fair to compare it directly with debiasing methods that only use the original training data (including ours). We therefore include Data Aug as a special baseline for reference.
Instance Weighting (Ins Weigh) (Zhang et al., 2020). We re-weight each training instance with a numerical weight P(Y)/P(Y|Z) based on the label distribution of each demographic group to mitigate explicit bias. In this method, a random forest classifier is built to estimate the conditional distribution P(Y|Z), and the marginal distribution P(Y) is manually calculated.
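For two discrete variables, the weight P(Y)/P(Y|Z) can be estimated directly from empirical counts, as sketched below; the paper instead fits a random forest classifier for P(Y|Z), so this counting version is a simplification for illustration:

```python
from collections import Counter

def instance_weights(labels, groups):
    """Per-instance weight P(Y) / P(Y|Z), with both distributions
    estimated from empirical counts over the training set."""
    n = len(labels)
    count_y = Counter(labels)                 # counts of each label Y
    count_z = Counter(groups)                 # counts of each group Z
    count_yz = Counter(zip(labels, groups))   # joint counts of (Y, Z)
    weights = []
    for y, z in zip(labels, groups):
        cond = count_yz[(y, z)] / count_z[z]  # P(Y=y | Z=z)
        weights.append((count_y[y] / n) / cond)
    return weights
```

Instances from label-group combinations that are over-represented (e.g., positives in Group I under a 4:1 split) receive weights below one, and under-represented combinations receive weights above one.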

Experimental Settings
We conduct our experiments for implicit bias mitigation on two representative base models: CNN (Kim, 2014) and RNN (Chung et al., 2014). We use the same datasets with manually designed proportions, as described in Section 2.2. The details of the base models, as well as the implementation details for the replication of the experiments, can be found in Appendix D.

Performance Comparison
We train the base models with our proposed debiasing framework as well as the baseline debiasing methods, and report the performance on the test set in terms of fairness and classification performance. Fairness Evaluation. Table 3 shows the results for the fairness evaluation metrics: false positive equality difference (FPED), false negative equality difference (FNED), and DPD. FPED/FNED is the absolute value of the difference between the false positive/negative rates of the two groups. We make the following observations. First, the base models attain high FPED, FNED, and DPD scores, which indicates significant implicit bias towards the authors of the texts. Second, Ins Weigh appears ineffective in mitigating implicit bias, as it only achieves fairness scores comparable to the base models. Note that not every example belonging to a certain group necessarily contributes to bias towards that group, so assigning a uniform weight to all examples with the same label Y and demographic attribute Z is not a proper way to reduce implicit bias. Third, both Data Aug and Debiased-TC mitigate the implicit bias, achieving lower equality and demographic parity differences. However, compared to Data Aug, Debiased-TC has two advantages. First, Data Aug needs additional training data while Debiased-TC does not: by analyzing how implicit bias forms in a deep text classification model, Debiased-TC locates its main source, and through the proposed corrector it makes the classification model focus on the relevant features for prediction and discard the features that may lead to implicit bias. Second, Debiased-TC is more stable than Data Aug. For the sentiment classification task with race as the demographic attribute, the CNN and RNN classifiers trained on augmented data still yield high FPED and DPD scores, which suggests that balancing the training data cannot always mitigate implicit bias.
In fact, only training examples containing demographic language features can contribute to implicit bias. Since some texts in the training set do not contain any language features belonging to a demographic group, they do not help balance the data. Text Classification Performance Evaluation. The prediction performance of the text classification models trained under the various debiasing methods is shown in Table 4, where we report accuracy and F1 scores. First, it is not surprising that Data Aug achieves the best performance, since the data augmentation technique introduces more training data; as noted above, it is not fair to compare it directly with debiasing methods that only use the original training data. Second, in most cases, our method achieves comparable or even better performance than the original base models. As verified before, the implicit bias of a text classification model is caused by the model learning a wrong correlation between labels and demographic language features. Debiased-TC corrects the model's selection of language features for prediction and thereby improves its performance on the classification task.
In conclusion, our proposed debiasing framework significantly mitigates the implicit bias while maintaining or even slightly improving the classification performance.

Related Work
Fairness in NLP. Recent research has demonstrated that word embeddings exhibit the human biases present in text data. For example, in word embeddings trained on large-scale real-world text data, the word "man" is mapped to "programmer" while "woman" is mapped to "homemaker" (Bolukbasi et al., 2016). Some works extend the research on biases from word embeddings to sentence embeddings. May et al. (2019) examine popular sentence encoding models, from CBoW and GPT to ELMo and BERT, and show that those models inherit human prejudices from the training data. For the task of coreference resolution, a benchmark named WinoBias is proposed (Zhao et al., 2018) to measure gender biases, along with a debiasing method based on data augmentation. Prates et al. (2018) reveal that Google's machine translation system shows gender biases in various languages, and existing debiasing methods for word embeddings have been adopted to mitigate biases in machine translation systems. In the task of dialogue generation, Liu et al. (2020a) first study the biases learned by dialogue agents from human conversation data and show that significant gender and race biases exist in popular dialogue models. As a countermeasure, Liu et al. (2020b) propose to mitigate gender bias in neural dialogue models with adversarial learning.
Fairness in Text Classification. For the text classification problem, Dixon et al. (2018) demonstrate that a source of unintended bias in models is the imbalance of training data, and they provide a debiasing method that introduces new data to balance the training data. In (Park et al., 2018), gender biases in abusive language detection models are measured, and the effects of different pre-trained word embeddings and model architectures are analyzed. By considering the various ways in which a classifier's score distribution can vary across designated groups, a suite of threshold-agnostic metrics is introduced in (Borkan et al., 2019), which provides a nuanced view of this unintended bias. Furthermore, Zhang et al. (2020) propose to debias text classification models using instance weighting, i.e., different weights are assigned to training samples involving different demographic groups. The works discussed above focus on explicit bias, where the demographic attributes are explicitly expressed in the text. In contrast, works studying implicit bias are rather limited. Huang et al. (2020) introduce the first multilingual hate speech dataset with inferred author demographic attributes, and show through experiments on this dataset that popular text classifiers can learn the bias towards the demographic attributes of the author. However, that work does not discuss how the bias is produced, and no debiasing method is provided.

Conclusion
In this paper, we demonstrate that a text classifier with implicit bias makes predictions based on language features correlated with the demographic groups of authors, and we propose a novel learning framework, Debiased-TC, to mitigate such implicit bias. The experimental results show that Debiased-TC significantly mitigates implicit bias while maintaining or even improving the text classification performance of the original models. In the future, we will investigate implicit bias in other NLP applications.

B Optimization of the Explainer
We train the explainer E by maximizing the mutual information between the response variable Y and the selected features X_S. The optimization problem can be formulated as:

max_E I(X_S; Y), where X_S = X ⊙ S and S = E(X).   (2)

Following (Chen et al., 2018), solving the optimization problem in Eq. (2) is equivalent to finding an explainer E that maximizes the expected log-likelihood E[log P_M(Y | X_S)] of the fixed model M on the selected features. Hence, we train the explainer E by optimizing P_M(Y | X_S) with the parameters of the classification model M fixed. In our implementation, we adopt the cross-entropy loss for training, as we do when we train the classification model M.

C An Optimization Method for Debiased-TC
We present our DARTS-based optimization algorithm in Algorithm 1. In each iteration, we first update the corrector's parameters based on the augmented validation set V' (lines 2-3). Then, we collect a new mini-batch of training data (line 4). We generate the saliency scores S = (s_1, s_2, . . . , s_n) for the training examples via the corrector with its current parameters (line 5). Next, we make predictions via the classifiers with their current parameters and X_S (line 6). Finally, we update the parameters of the classifiers (line 7).

Algorithm 1: DARTS-based optimization for Debiased-TC
Output: classifier parameters W_Y* and W_Z*; corrector parameters Θ*
Initialize W_Y, W_Z, and Θ
1: while not converged do
2:   Sample a mini-batch of validation data from V' = V ∪ T
3:   Update Θ by descending ∇_Θ L_val(W_Y − ξ∇_{W_Y} L_train^Y(W_Y, Θ), W_Z − ξ∇_{W_Z} L_train^Z(W_Z, Θ), Θ) (ξ = 0 for the first-order approximation)
4:   Collect a mini-batch of training data from T
5:   Generate S via the corrector with current parameters Θ
6:   Generate predictions via the classifiers with current parameters W_Y, W_Z and X_S
7:   Update W_Y and W_Z by descending ∇_{W_Y} L_train^Y(W_Y, Θ) and ∇_{W_Z} L_train^Z(W_Z, Θ)
8: end while

D Implementation Details

D.1 Details of Base Models
In the base model CNN, we use 100 filters with three different kernel sizes (3, 4, and 5) in the convolution layer, with a Rectified Linear Unit (ReLU) as the non-linear activation function. Each obtained feature map is processed by a max-pooling layer. Then, the features are concatenated and fed into a linear prediction layer to get the final predictions. A dropout with a rate of 0.3 is applied before the linear prediction layer.
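A sketch of this base model in PyTorch follows; the embedding layer and padding are omitted, and inputs are assumed to be pre-embedded sequences:

```python
import torch
import torch.nn as nn

class CNNClassifier(nn.Module):
    """Kim (2014)-style CNN base model: 100 filters for each kernel
    size in (3, 4, 5), ReLU, max-pooling over time, dropout 0.3, and
    a linear prediction layer."""

    def __init__(self, embed_dim=300, num_classes=2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 100, k) for k in (3, 4, 5))
        self.dropout = nn.Dropout(0.3)
        self.fc = nn.Linear(3 * 100, num_classes)

    def forward(self, x):              # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)          # convolve over the time axis
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(self.dropout(torch.cat(feats, dim=1)))
```

Each convolution produces one feature map per filter; max-pooling over time keeps the strongest activation per filter, yielding a 300-dimensional feature vector before the prediction layer.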
For the base model RNN, we use a one-layer unidirectional RNN with Gated Recurrent Units (GRU). The hidden size is set to 300. The last hidden state of the RNN is fed into a linear prediction layer to get the final predictions. We apply a dropout with a rate of 0.2 before the linear prediction layer.

D.2 Details of Experimental Settings
For the text classifiers, we use randomly initialized word embeddings of size 300. All models are trained with the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.001. We apply gradient clipping with a clip value of 0.25 to prevent exploding gradients. The batch size is set to 64. For the base model and the baseline methods, training is terminated when the prediction accuracy on the validation data does not improve for 5 consecutive epochs, and we pick the model with the best performance on the validation set. Our model utilizes the validation data for training; to avoid overfitting the validation data, we do not select the model based on its validation performance. Instead, we train the model for a fixed number of epochs (5 epochs, the same for all three datasets) and evaluate the obtained model.