<?xml version="1.0" encoding="UTF-8" ?>
<volume id="W18">
  <paper id="6200">
    <title>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</title>
    <editor>Alexandra Balahur</editor>
    <editor>Saif M. Mohammad</editor>
    <editor>Veronique Hoste</editor>
    <editor>Roman Klinger</editor>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <url>http://aclweb.org/anthology/W18-62</url>
    <bibtype>book</bibtype>
    <bibkey>WASSA2018:2018</bibkey>
  </paper>

  <paper id="6201">
    <title>Identifying Affective Events and the Reasons for their Polarity</title>
    <author><first>Ellen</first><last>Riloff</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>1</pages>
    <url>http://aclweb.org/anthology/W18-6201</url>
    <abstract>Many events have a positive or negative impact on our lives.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>riloff:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6202">
    <title>Deep contextualized word representations for detecting sarcasm and irony</title>
    <author><first>Suzana</first><last>Ilić</last></author>
    <author><first>Edison</first><last>Marrese-Taylor</last></author>
    <author><first>Jorge</first><last>Balazs</last></author>
    <author><first>Yutaka</first><last>Matsuo</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>2&#8211;7</pages>
    <url>http://aclweb.org/anthology/W18-6202</url>
    <abstract>Predicting context-dependent and non-literal utterances like sarcastic and ironic expressions remains a challenging task in NLP, as it goes beyond linguistic patterns, encompassing common sense and shared knowledge as crucial components. To capture complex morpho-syntactic features that can usually serve as indicators for irony or sarcasm across dynamic contexts, we propose a model that uses character-level vector representations of words, based on ELMo. We test our model on 7 different datasets derived from 3 different data sources, achieving state-of-the-art performance on 6 of them and otherwise offering competitive results.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>ili-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6203">
    <title>Implicit Subjective and Sentimental Usages in Multi-sense Word Embeddings</title>
    <author><first>Yuqi</first><last>Sun</last></author>
    <author><first>Haoyue</first><last>Shi</last></author>
    <author><first>Junfeng</first><last>Hu</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>8&#8211;13</pages>
    <url>http://aclweb.org/anthology/W18-6203</url>
    <abstract>In multi-sense word embeddings, contextual variations in corpus may cause a univocal word to be embedded into different sense vectors.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>sun-shi-hu:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6204">
    <title>Language Independent Sentiment Analysis with Sentiment-Specific Word Embeddings</title>
    <author><first>Carl</first><last>Saroufim</last></author>
    <author><first>Akram</first><last>Almatarky</last></author>
    <author><first>Mohammad</first><last>AbdelHady</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>14&#8211;23</pages>
    <url>http://aclweb.org/anthology/W18-6204</url>
    <abstract>Data annotation is a critical step in training a text model, but it is tedious, expensive and time-consuming. We present a language-independent method to train a sentiment polarity model with a limited amount of manually-labeled data. Word embeddings such as Word2Vec are efficient at incorporating semantic and syntactic properties of words, yielding good results for document classification. However, these embeddings might map words with opposite polarities to vectors close to each other. We train Sentiment Specific Word Embeddings (SSWE) on top of an unsupervised Word2Vec model, using either Recurrent Neural Networks (RNN) or Convolutional Neural Networks (CNN) on data auto-labeled as &#x201c;Positive&#x201d; or &#x201c;Negative&#x201d;. For this task, we rely on the universality of emojis to auto-label a large number of French tweets using a small set of positive and negative emojis. Finally, we apply a transfer learning approach to refine the network weights with a small manually-labeled training set. Experiments are conducted to evaluate the performance of this approach on French sentiment classification using benchmark data sets from the SemEval 2016 competition. Using SSWE yielded a performance improvement over Word2Vec. We also used a graph-based label propagation approach to auto-generate a sentiment lexicon.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>saroufim-almatarky-abdelhady:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6205">
    <title>Creating a Dataset for Multilingual Fine-grained Emotion-detection Using Gamification-based Annotation</title>
    <author><first>Emily</first><last>Öhman</last></author>
    <author><first>Kaisla</first><last>Kajava</last></author>
    <author><first>Jörg</first><last>Tiedemann</last></author>
    <author><first>Timo</first><last>Honkela</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>24&#8211;30</pages>
    <url>http://aclweb.org/anthology/W18-6205</url>
    <abstract>This paper introduces a gamified framework for fine-grained sentiment analysis and emotion detection. We present a flexible tool that can be used for efficient annotation based on crowd sourcing and a self-perpetuating gold standard. We also present a novel dataset with multi-dimensional annotations of emotions and sentiments in movie subtitles that enables research on sentiment preservation across languages and the creation of robust multilingual emotion detection tools. The tools and datasets are public and open-source and can easily be extended and applied for various purposes.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>hman-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6206">
    <title>IEST: WASSA-2018 Implicit Emotions Shared Task</title>
    <author><first>Roman</first><last>Klinger</last></author>
    <author><first>Orphee</first><last>De Clercq</last></author>
    <author><first>Saif</first><last>Mohammad</last></author>
    <author><first>Alexandra</first><last>Balahur</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>31&#8211;42</pages>
    <url>http://aclweb.org/anthology/W18-6206</url>
    <abstract>Past shared tasks on emotions use data with both overt expressions of emotions (I am so happy to see you!) and subtle expressions where the emotions have to be inferred, for instance from event descriptions. Further, most datasets do not focus on the cause or the stimulus of the emotion. Here, for the first time, we propose a shared task where systems have to predict the emotions in a large automatically labeled dataset of tweets without access to words denoting emotions. Based on this intention, we call this the Implicit Emotion Shared Task (IEST) because the systems have to infer the emotion mostly from the context. Every tweet has an occurrence of an explicit emotion word that is masked. The tweets are collected in a manner such that they are likely to include a description of the cause of the emotion &#8211; the stimulus. Altogether, 30 teams submitted results ranging from macro F1 scores of 21% to 71%. The baseline (MaxEnt with bag of words and bigrams), which obtains an F1 score of 60%, was available to the participants during the development phase. A study with human annotators suggests that automatic methods outperform human predictions, possibly by homing in on subtle textual clues not used by humans. Corpora, resources, and results are available at the shared task website at http://implicitemotions.wassa2018.com.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>klinger-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6207">
    <title>Amobee at IEST 2018: Transfer Learning from Language Models</title>
    <author><first>Alon</first><last>Rozental</last></author>
    <author><first>Daniel</first><last>Fleischer</last></author>
    <author><first>Zohar</first><last>Kelrich</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>43&#8211;49</pages>
    <url>http://aclweb.org/anthology/W18-6207</url>
    <abstract>This paper describes the system developed at Amobee for the WASSA 2018 implicit emotions shared task (IEST). The goal of this task was to predict the emotion expressed by missing words in tweets without an explicit mention of those words. We developed an ensemble system consisting of language models together with LSTM-based networks containing a CNN attention mechanism. Our approach represents a novel use of language models—specifically trained on a large Twitter dataset—to predict and classify emotions. Our system reached 1st place with a macro F1 score of 0.7145.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>rozental-fleischer-kelrich:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6208">
    <title>IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations</title>
    <author><first>Jorge</first><last>Balazs</last></author>
    <author><first>Edison</first><last>Marrese-Taylor</last></author>
    <author><first>Yutaka</first><last>Matsuo</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>50&#8211;56</pages>
    <url>http://aclweb.org/anthology/W18-6208</url>
    <abstract>In this paper we describe our system designed for the WASSA 2018 Implicit Emotion Shared Task (IEST), which obtained second place out of 30 teams with a test macro F1 score of 0.710. The system is composed of a single pre-trained ELMo layer for encoding words, a Bidirectional Long Short-Term Memory network (BiLSTM) for enriching word representations with context, a max-pooling operation for creating sentence representations from them, and a Dense Layer for projecting the sentence representations into label space.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>balazs-marresetaylor-matsuo:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6209">
    <title>NTUA-SLP at IEST 2018: Ensemble of Neural Transfer Methods for Implicit Emotion Classification</title>
    <author><first>Alexandra</first><last>Chronopoulou</last></author>
    <author><first>Aikaterini</first><last>Margatina</last></author>
    <author><first>Christos</first><last>Baziotis</last></author>
    <author><first>Alexandros</first><last>Potamianos</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>57&#8211;64</pages>
    <url>http://aclweb.org/anthology/W18-6209</url>
    <abstract>In this paper we present our approach to tackle the WASSA 2018 Implicit Emotion Shared Task (IEST).</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>chronopoulou-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6210">
    <title>Sentiment analysis under temporal shift</title>
    <author><first>Jan</first><last>Lukeš</last></author>
    <author><first>Anders</first><last>Søgaard</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>65&#8211;71</pages>
    <url>http://aclweb.org/anthology/W18-6210</url>
    <abstract>Sentiment analysis models often rely on training data that is several years old. In this paper, we show that lexical features change polarity over time, leading to degrading performance. This effect is particularly strong in sparse models relying only on highly predictive features. Using predictive feature selection, we are able to significantly improve the accuracy of such models over time.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>luke-sgaard:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6211">
    <title>Not Just Depressed: Bipolar Disorder Prediction on Reddit</title>
    <author><first>Ivan</first><last>Sekulic</last></author>
    <author><first>Matej</first><last>Gjurković</last></author>
    <author><first>Jan</first><last>Šnajder</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>72&#8211;78</pages>
    <url>http://aclweb.org/anthology/W18-6211</url>
    <abstract>Bipolar disorder, an illness characterized by manic and depressive episodes, affects more than 60 million people worldwide. We present a preliminary study on bipolar disorder prediction from user-generated text on Reddit, which relies on users’ self-reported labels. Our benchmark classifiers for bipolar disorder prediction outperform the baselines and reach accuracy and F1-scores of above 86%. Feature analysis shows interesting differences in language use between users with bipolar disorders and the control group, including differences in the use of emotion-expressive words.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>sekulic-gjurkovi-najder:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6212">
    <title>Topic-Specific Sentiment Analysis Can Help Identify Political Ideology</title>
    <author><first>Sumit</first><last>Bhatia</last></author>
    <author><first>Deepak</first><last>P</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>79&#8211;84</pages>
    <url>http://aclweb.org/anthology/W18-6212</url>
    <abstract>Ideological leanings of an individual can often be inferred from the sentiment they express towards specific topics.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>bhatia-p:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6213">
    <title>Saying no but meaning yes: negation and sentiment analysis in Basque</title>
    <author><first>Jon</first><last>Alkorta</last></author>
    <author><first>Koldo</first><last>Gojenola</last></author>
    <author><first>Mikel</first><last>Iruskieta</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>85&#8211;90</pages>
    <url>http://aclweb.org/anthology/W18-6213</url>
    <abstract>Negation is one of the shifters or operators that can change the semantic orientation of a word or a sentence and, consequently, it has to be taken into consideration in sentiment analysis. In this work, we have analyzed the effects of negation on the semantic orientation in Basque. The analysis shows that negation markers can strengthen, weaken or have no effect on sentiment orientation of a word or a group of words. Using the Constraint Grammar formalism, we have designed and evaluated a set of linguistic rules to formalize these three phenomena. The results show that two phenomena, strengthening and no change, have been identified accurately and the third one, weakening, with acceptable results.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>alkorta-gojenola-iruskieta:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6214">
    <title>Leveraging Writing Systems Change for Deep Learning Based Chinese Emotion Analysis</title>
    <author><first>Rong</first><last>Xiang</last></author>
    <author><first>Yunfei</first><last>Long</last></author>
    <author><first>Qin</first><last>Lu</last></author>
    <author><first>Dan</first><last>Xiong</last></author>
    <author><first>I-Hsuan</first><last>Chen</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>91&#8211;96</pages>
    <url>http://aclweb.org/anthology/W18-6214</url>
    <abstract>Social media text written in Chinese communities contains mixed scripts, including major text written with Chinese characters, an ideograph-based writing system, and some minor text using Latin letters, an alphabet-based writing system. This phenomenon is called writing systems change (WSCs). Past studies have shown that WSCs can be used to express emotions, particularly where the social and political environment is more conservative. However, because WSCs can break the syntax of the major text, they pose additional challenges in NLP tasks like emotion classification. In this work, we present a novel deep learning based method that includes WSCs as an effective feature for emotion analysis. The method first identifies all WSC points. The representation of the major text is learned through an LSTM model, whereas the representation of the minor text is learned by a separate CNN. Emotions expressed in the minor text are further highlighted through an attention mechanism before emotion classification. Incorporating WSC features into deep learning models yields significant performance improvements, as validated by both F1-scores and p-values, indicating that WSCs serve as an effective feature in emotion analysis of social networks.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>xiang-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6215">
    <title>Ternary Twitter Sentiment Classification with Distant Supervision and Sentiment-Specific Word Embeddings</title>
    <author><first>Mats</first><last>Byrkjeland</last></author>
    <author><first>Frederik</first><last>Gørvell de Lichtenberg</last></author>
    <author><first>Björn</first><last>Gambäck</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>97&#8211;106</pages>
    <url>http://aclweb.org/anthology/W18-6215</url>
    <abstract>The paper proposes the Ternary Sentiment Embedding Model, a new model for creating sentiment embeddings based on the Hybrid Ranking Model of Tang et al. (2016), but trained on ternary-labeled data instead of binary-labeled, utilizing sentiment embeddings from datasets made with different distant supervision methods. </abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>byrkjeland-grvelldelichtenberg-gambck:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6216">
    <title>Linking News Sentiment to Microblogs: A Distributional Semantics Approach to Enhance Microblog Sentiment Classification</title>
    <author><first>Tobias</first><last>Daudert</last></author>
    <author><first>Paul</first><last>Buitelaar</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>107&#8211;115</pages>
    <url>http://aclweb.org/anthology/W18-6216</url>
    <abstract>Social media's popularity in society and research is gaining momentum, simultaneously increasing the importance of short textual content such as microblogs. Microblogs are affected by many factors, including the news media; we therefore exploit sentiment conveyed by news to detect and classify sentiment in microblogs. Given that texts can deal with the same entity yet differ substantially in sentiment, it becomes necessary to introduce further measures ensuring the relatedness of texts while leveraging the contained sentiments. This paper describes ongoing research that introduces distributional semantics to improve the exploitation of news-contained sentiment for enhancing microblog sentiment classification.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>daudert-buitelaar:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6217">
    <title>Aspect Based Sentiment Analysis into the Wild</title>
    <author><first>Caroline</first><last>Brun</last></author>
    <author><first>Vassilina</first><last>Nikoulina</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>116&#8211;122</pages>
    <url>http://aclweb.org/anthology/W18-6217</url>
    <abstract>In this paper, we test state-of-the-art Aspect Based Sentiment Analysis (ABSA) systems trained on a widely used dataset against real-world data. We created a new manually annotated dataset of user-generated content from the same domain as the training dataset, but from other sources, and analyse the differences between the new and the standard ABSA datasets. We then analyse the performance of different versions of the same system on both datasets. We also propose light adaptation methods to increase system robustness.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>brun-nikoulina:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6218">
    <title>The Role of Emotions in Native Language Identification</title>
    <author><first>Ilia</first><last>Markov</last></author>
    <author><first>Vivi</first><last>Nastase</last></author>
    <author><first>Carlo</first><last>Strapparava</last></author>
    <author><first>Grigori</first><last>Sidorov</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>123&#8211;129</pages>
    <url>http://aclweb.org/anthology/W18-6218</url>
    <abstract>We explore the hypothesis that emotion is one of the dimensions of language that surfaces from the native language into a second language. To check the role of emotions in native language identification (NLI), we model emotion information through polarity and emotion load features, and use document representations using these features to classify the native language of the author. The results indicate that emotion is relevant for NLI, even for high proficiency levels and across topics.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>markov-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6219">
    <title>Self-Attention: A Better Building Block for Sentiment Analysis Neural Network Classifiers</title>
    <author><first>Artaches</first><last>Ambartsoumian</last></author>
    <author><first>Fred</first><last>Popowich</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>130&#8211;139</pages>
    <url>http://aclweb.org/anthology/W18-6219</url>
    <abstract>Sentiment Analysis has seen much progress in the past two decades. For the past few years, neural network approaches, primarily RNNs and CNNs, have been the most successful for this task. Recently, a new category of neural networks, self-attention networks (SANs), has been created which utilizes the attention mechanism as the basic building block. Self-attention networks have been shown to be effective for sequence modeling tasks, while having no recurrence or convolutions. In this work we explore the effectiveness of SANs for sentiment analysis. We demonstrate that SANs are superior in performance to their RNN and CNN counterparts by comparing their classification accuracy on six datasets, as well as their model characteristics such as training speed and memory consumption. Finally, we explore the effects of various SAN modifications, such as multi-head attention, as well as two methods of incorporating sequence position information into SANs.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>ambartsoumian-popowich:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6220">
    <title>Dual Memory Network Model for Biased Product Review Classification</title>
    <author><first>Yunfei</first><last>Long</last></author>
    <author><first>Mingyu</first><last>Ma</last></author>
    <author><first>Qin</first><last>Lu</last></author>
    <author><first>Rong</first><last>Xiang</last></author>
    <author><first>Chu-Ren</first><last>Huang</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>140&#8211;148</pages>
    <url>http://aclweb.org/anthology/W18-6220</url>
    <abstract>In sentiment analysis (SA) of product reviews,</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>long-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6221">
    <title>Measuring Issue Ownership using Word Embeddings</title>
    <author><first>Amaru</first><last>Cuba Gyllensten</last></author>
    <author><first>Magnus</first><last>Sahlgren</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>149&#8211;155</pages>
    <url>http://aclweb.org/anthology/W18-6221</url>
    <abstract>Sentiment and topic analysis are common methods used for social media monitoring. Essentially, these methods answer questions such as &#x201c;What is being talked about, regarding X?&#x201d; and &#x201c;What do people feel, regarding X?&#x201d;. In this paper, we investigate another avenue for social media monitoring, namely issue ownership. In political science, issue ownership has been used to explain voter choice and electoral outcomes. The theory states that voters value certain issues, and cast votes for the party which they feel best addresses these issues. We argue that issue alignment can be seen as a kind of semantic source similarity of the form &#x201c;How similar is source A to issue owner P, when talking about issue X?&#x201d;, and as such can be measured using word/document embedding techniques. We present work in progress towards measuring this kind of conditioned similarity, and introduce a new notion of similarity for predictive embeddings.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>cubagyllensten-sahlgren:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6222">
    <title>Sentiment Expression Boundaries in Sentiment Polarity Classification</title>
    <author><first>Rasoul</first><last>Kaljahi</last></author>
    <author><first>Jennifer</first><last>Foster</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>156&#8211;166</pages>
    <url>http://aclweb.org/anthology/W18-6222</url>
    <abstract>We investigate the effect of using sentiment expression boundaries in predicting sentiment polarity in aspect-level sentiment analysis. We manually annotate a freely available English sentiment polarity dataset with these boundaries and carry out a series of experiments which demonstrate that high quality sentiment expressions can boost the performance of polarity classification. Our experiments with various neural architectures also show that CNN networks outperform LSTMs on this task.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>kaljahi-foster:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6223">
    <title>Exploring and Learning Suicidal Ideation Connotations on Social Media with Deep Learning</title>
    <author><first>Ramit</first><last>Sawhney</last></author>
    <author><first>Prachi</first><last>Manchanda</last></author>
    <author><first>Puneet</first><last>Mathur</last></author>
    <author><first>Rajiv</first><last>Shah</last></author>
    <author><first>Raj</first><last>Singh</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>167&#8211;175</pages>
    <url>http://aclweb.org/anthology/W18-6223</url>
    <abstract>The increasing suicide rates amongst youth and their high correlation with suicidal ideation expression on social media warrant a deeper investigation into models for the detection of suicidal intent in text such as tweets to enable prevention. However, the complexity of natural language constructs makes this task very challenging. Deep Learning architectures such as LSTMs, CNNs, and RNNs show promise in sentence-level classification problems. This work investigates the ability of deep learning architectures to build an accurate and robust model for suicidal ideation detection and compares their performance with standard baselines in text classification problems. The experimental results reveal the merit of C-LSTM based models compared to other deep learning and machine learning based classification models for suicidal ideation detection.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>sawhney-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6224">
    <title>UTFPR at IEST 2018: Exploring Character-to-Word Composition for Emotion Analysis</title>
    <author><first>Gustavo</first><last>Paetzold</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>176&#8211;181</pages>
    <url>http://aclweb.org/anthology/W18-6224</url>
    <abstract>We introduce the UTFPR system for the Implicit Emotions Shared Task of 2018: A compositional character-to-word recurrent neural network that does not exploit heavy and/or hard-to-obtain resources. We find that our approach can outperform multiple baselines, and offers an elegant and effective solution to the problem of orthographic variance in tweets.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>paetzold:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6225">
    <title>HUMIR at IEST-2018: Lexicon-Sensitive and Left-Right Context-Sensitive BiLSTM for Implicit Emotion Recognition</title>
    <author><first>Behzad</first><last>Naderalvojoud</last></author>
    <author><first>Alaettin</first><last>Ucan</last></author>
    <author><first>Ebru</first><last>Akcapinar Sezer</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>182&#8211;188</pages>
    <url>http://aclweb.org/anthology/W18-6225</url>
    <abstract>This paper describes the approaches used in the HUMIR system for the WASSA-2018 shared task on implicit emotion recognition. The objective of this task is to predict the emotion expressed by the target word that has been excluded from the given tweet. We treat this task as word sense disambiguation, in which the target word is considered a synthetic word that can express six emotions depending on the context. To predict the correct emotion, we propose a deep neural network model that uses two BiLSTM networks to represent the contexts on the left and right sides of the target word. The BiLSTM outputs obtained from the left and right contexts are treated as context-sensitive features. These features are used in a feed-forward neural network to predict the target word's emotion. Besides this approach, we also combine the BiLSTM model with lexicon-based and emotion-based features. Finally, we combine all models in the final system using the bagging ensemble method. We achieved a macro F-measure of 68.8 on the official test set and ranked sixth out of 30 participants.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>naderalvojoud-ucan-akcapinarsezer:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6226">
    <title>NLP at IEST 2018: BiLSTM-Attention and LSTM-Attention via Soft Voting in Emotion Classification</title>
    <author><first>Qimin</first><last>Zhou</last></author>
    <author><first>Hao</first><last>Wu</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>189&#8211;194</pages>
    <url>http://aclweb.org/anthology/W18-6226</url>
    <abstract>This paper describes our method that competed in the WASSA-2018 Implicit Emotion Shared Task. The goal of this task is to classify the emotions of excluded words in tweets into six different classes: sad, joy, disgust, surprise, anger and fear. For this, we examine a BiLSTM architecture with attention mechanism (BiLSTM-Attention) and an LSTM architecture with attention mechanism (LSTM-Attention), and try different dropout rates based on these two models. We then exploit an ensemble of these methods to give the final prediction, which improves the model performance significantly compared with the baseline model. The proposed method achieves 7th place out of 30 teams and outperforms the baseline method by 12.5% in terms of macro F1.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>zhou-wu:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6227">
    <title>SINAI at IEST 2018: Neural Encoding of Emotional External Knowledge for Emotion Classification</title>
    <author><first>Flor Miriam</first><last>Plaza del Arco</last></author>
    <author><first>Eugenio</first><last>Mart&#237;nez-C&#225;mara</last></author>
    <author><first>Maite</first><last>Martin</last></author>
    <author><first>L. Alfonso</first><last>Urena Lopez</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>195&#8211;200</pages>
    <url>http://aclweb.org/anthology/W18-6227</url>
    <abstract>In this paper, we describe our participation in WASSA 2018 Implicit Emotion Shared Task (IEST 2018). We claim that the use of emotional external knowledge may enhance the performance and the capacity of generalization of an emotion classification system based on neural networks. Accordingly, we submitted four deep learning systems grounded in a sequence encoding layer. They mainly differ in the feature vector space and the recurrent neural network used in the sequence encoding layer. The official results show that the systems that used emotional external knowledge have a higher capacity of generalization, hence our claim holds.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>plazadelarco-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6228">
    <title>EmoNLP at IEST 2018: An Ensemble of Deep Learning Models and Gradient Boosting Regression Tree for Implicit Emotion Prediction in Tweets</title>
    <author><first>Man</first><last>Liu</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>201&#8211;204</pages>
    <url>http://aclweb.org/anthology/W18-6228</url>
    <abstract>This paper describes our system submitted to IEST 2018, a shared task to predict emotion types. Six emotion types are involved: anger, joy, fear, surprise, disgust and sad. We explore three different approaches: a feed-forward neural network (FFNN), a convolutional BLSTM (ConBLSTM) and a Gradient Boosting Regression Tree Method (GBM). Word embeddings used in the convolutional BLSTM are pre-trained on 470 million tweets filtered using emotional words and emojis. In addition, broad sets of features (i.e. syntactic features, lexicon features, cluster features) are adopted to train the GBM and FFNN. The three approaches are finally ensembled by a weighted average of the predicted probabilities for each emotion type.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>liu:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6229">
    <title>HGSGNLP at IEST 2018: An Ensemble of Machine Learning and Deep Neural Architectures for Implicit Emotion Classification in Tweets</title>
    <author><first>Wenting</first><last>Wang</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>205&#8211;210</pages>
    <url>http://aclweb.org/anthology/W18-6229</url>
    <abstract>This paper describes our system designed for the WASSA-2018 Implicit Emotion Shared Task (IEST). The task is to predict the emotion category expressed in a tweet from which the terms angry, afraid, happy, sad, surprised, disgusted and their synonyms have been removed. Our final submission is an ensemble of one supervised learning model and three deep neural network based models, where each model approaches the problem from essentially different directions. Our system achieves a macro F1 score of 65.8%, which is a 5.9% performance improvement over the baseline, and is ranked 12th out of 30 participating teams.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wang:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6230">
    <title>DataSEARCH at IEST 2018: Multiple Word Embedding based Models for Implicit Emotion Classification of Tweets with Deep Learning</title>
    <author><first>Yasas</first><last>Senarath</last></author>
    <author><first>Uthayasanker</first><last>Thayasivam</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>211&#8211;216</pages>
    <url>http://aclweb.org/anthology/W18-6230</url>
    <abstract>This paper describes an approach to implicit emotion classification that uses pre-trained word embedding models to train multiple neural networks. The system described in this paper is composed of a sequential combination of a Long Short-Term Memory network and a Convolutional Neural Network for feature extraction, and a Feedforward Neural Network for classification. In this paper, we successfully show that features extracted using multiple pre-trained embeddings can be used to improve the overall performance of the system, with emoji being one of the significant features. The evaluations show that our approach outperforms the baseline system by more than 8% without using any external corpus or lexicon. This approach ranked 8th in the Implicit Emotion Shared Task (IEST) at WASSA-2018.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>senarath-thayasivam:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6231">
    <title>NL-FIIT at IEST-2018: Emotion Recognition utilizing Neural Networks and Multi-level Preprocessing</title>
    <author><first>Samuel</first><last>Pecar</last></author>
    <author><first>Michal</first><last>Farkaš</last></author>
    <author><first>Marian</first><last>Simko</last></author>
    <author><first>Peter</first><last>Lacko</last></author>
    <author><first>Maria</first><last>Bielikova</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>217&#8211;223</pages>
    <url>http://aclweb.org/anthology/W18-6231</url>
    <abstract>In this paper, we present neural models submitted to the Shared Task on Implicit Emotion Recognition, organized as part of WASSA 2018. We propose a Bi-LSTM architecture with regularization through dropout and Gaussian noise. Our models use three different embedding layers: GloVe word embeddings trained on a Twitter dataset, ELMo embeddings, and sentence embeddings. We see preprocessing as one of the most important parts of the task. We focused on handling emojis, emoticons, hashtags, and various shortened word forms. In some cases, we proposed to remove parts of the text, as they do not affect the emotion of the original sentence. We also experimented with other modifications, like category weights for learning and stacking multiple layers. Our model achieved a macro average F1 score of 65.55%, significantly outperforming the baseline model produced by a simple logistic regression.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>pecar-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6232">
    <title>UWB at IEST 2018: Emotion Prediction in Tweets with Bidirectional Long Short-Term Memory Neural Network</title>
    <author><first>Pavel</first><last>Přib&#225;ň</last></author>
    <author><first>Jiř&#237;</first><last>Mart&#237;nek</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>224&#8211;230</pages>
    <url>http://aclweb.org/anthology/W18-6232</url>
    <abstract>This paper describes our system created for the WASSA 2018 Implicit Emotion Shared Task. The goal of this task is to predict the emotion of a given tweet, from which a certain emotion word is removed. The removed word can be sad, happy, disgusted, angry, afraid or a synonym of one of them. Our proposed system is based on deep-learning methods. We use Bidirectional Long Short-Term Memory (BiLSTM) with word embeddings as an input. Pre-trained DeepMoji model and pre-trained emoji2vec emoji embeddings are also used as additional inputs.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>pib-martnek:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6233">
    <title>USI-IR at IEST 2018: Sequence Modeling and Pseudo-Relevance Feedback for Implicit Emotion Detection</title>
    <author><first>Esteban</first><last>Rissola</last></author>
    <author><first>Anastasia</first><last>Giachanou</last></author>
    <author><first>Fabio</first><last>Crestani</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>231&#8211;234</pages>
    <url>http://aclweb.org/anthology/W18-6233</url>
    <abstract>This paper describes the participation of USI-IR in the WASSA 2018 Implicit Emotion Shared Task. We propose a relevance feedback approach employing a sequential model (biLSTM) and word embeddings derived from a large collection of tweets. To this end, we assume that the top-k predictions produced at a first classification step are correct (based on the model accuracy) and use them as new examples to re-train the network.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>rissola-giachanou-crestani:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6234">
    <title>EmotiKLUE at IEST 2018: Topic-Informed Classification of Implicit Emotions</title>
    <author><first>Thomas</first><last>Proisl</last></author>
    <author><first>Philipp</first><last>Heinrich</last></author>
    <author><first>Besim</first><last>Kabashi</last></author>
    <author><first>Stefan</first><last>Evert</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>235&#8211;242</pages>
    <url>http://aclweb.org/anthology/W18-6234</url>
    <abstract>EmotiKLUE is a submission to the Implicit Emotion Shared Task. It is a deep learning system that combines independent representations of the left and right contexts of the emotion word with the topic distribution of an LDA topic model. EmotiKLUE achieves a macro average F1 score of 67.13%, significantly outperforming the baseline produced by a simple ML classifier. Further enhancements after the evaluation period led to an improved F1 score of 68.10%.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>proisl-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6235">
    <title>BrainT at IEST 2018: Fine-tuning Multiclass Perceptron For Implicit Emotion Classification</title>
    <author><first>Vachagan</first><last>Gratian</last></author>
    <author><first>Marina</first><last>Haid</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>243&#8211;247</pages>
    <url>http://aclweb.org/anthology/W18-6235</url>
    <abstract>We present BrainT, a multiclass, averaged perceptron tested on implicit emotion prediction of tweets. We show that the dataset is linearly separable and explore ways of fine-tuning the baseline classifier. Our results indicate that bag-of-words features benefit the model moderately and that prediction can be improved significantly with bigrams, trigrams, skip-one-tetragrams and POS tags. Furthermore, we find preprocessing of the n-grams, including stemming, lowercasing, stopword filtering, and emoji and emoticon conversion, to be generally not useful. The model is trained on an annotated corpus of 153,383 tweets and predictions on the test data were submitted to the WASSA-2018 Implicit Emotion Shared Task. BrainT attained a macro F-score of 0.63.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>gratian-haid:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6236">
    <title>Disney at IEST 2018: Predicting Emotions using an Ensemble</title>
    <author><first>Wojciech</first><last>Witon</last></author>
    <author><first>Pierre</first><last>Colombo</last></author>
    <author><first>Ashutosh</first><last>Modi</last></author>
    <author><first>Mubbasir</first><last>Kapadia</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>248&#8211;253</pages>
    <url>http://aclweb.org/anthology/W18-6236</url>
    <abstract>This paper describes our participating system in the WASSA 2018 shared task on emotion prediction. The task focuses on implicit emotion prediction in a tweet. In this task, keywords corresponding to the six emotion label names (anger, fear, disgust, joy, sad, and surprise) have been removed from the tweet text, making emotion prediction implicit and the task challenging. We propose a model based on an ensemble of classifiers for prediction. Each classifier in the ensemble uses a sequence of Convolutional Neural Network (CNN) architecture blocks and uses ELMo (Embeddings from Language Model) (Peters et al., 2018) as input. Our system achieves a 66.2% F1 score on the test set. The best performing system in the shared task has reported a 71.4% F1 score.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>witon-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6237">
    <title>Sentylic at IEST 2018: Gated Recurrent Neural Network and Capsule Network Based Approach for Implicit Emotion Detection</title>
    <author><first>Prabod</first><last>Rathnayaka</last></author>
    <author><first>Supun</first><last>Abeysinghe</last></author>
    <author><first>Chamod</first><last>Samarajeewa</last></author>
    <author><first>Isura</first><last>Manchanayake</last></author>
    <author><first>Malaka</first><last>Walpola</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>254&#8211;259</pages>
    <url>http://aclweb.org/anthology/W18-6237</url>
    <abstract>In this paper, we present the system we used for the WASSA 2018 Implicit Emotion Shared Task. The task is to predict the emotion of a tweet from which explicit mentions of emotion terms have been removed. The idea is to come up with a model that can implicitly identify the emotion expressed given the context words. We used a Gated Recurrent Unit (GRU) and a Capsule Network based model for the task. Pre-trained word embeddings are utilized to incorporate contextual knowledge about words into the model. The GRU layer learns latent representations from the input word embeddings. A subsequent Capsule Network layer learns high-level features from that hidden representation. The proposed model achieved a macro-F1 score of 0.692.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>rathnayaka-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6238">
    <title>Fast Approach to Build an Automatic Sentiment Annotator for Legal Domain using Transfer Learning</title>
    <author><first>Viraj</first><last>Salaka</last></author>
    <author><first>Menuka</first><last>Warushavithana</last></author>
    <author><first>Nisansa</first><last>de Silva</last></author>
    <author><first>Amal Shehan</first><last>Perera</last></author>
    <author><first>Gathika</first><last>Ratnayaka</last></author>
    <author><first>Thejan</first><last>Rupasinghe</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>260&#8211;265</pages>
    <url>http://aclweb.org/anthology/W18-6238</url>
    <abstract>This study proposes a novel way of identifying the sentiment of the phrases used in the legal domain. The added complexity of the language used in law, and the inability of the existing systems to accurately predict the sentiments of words in law are the main motivations behind this study. This is a transfer learning approach, which can be used for other domain adaptation tasks as well. The proposed methodology achieves an improvement of over 6% compared to the source model's accuracy in the legal domain.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>salaka-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6239">
    <title>What Makes You Stressed? Finding Reasons From Tweets</title>
    <author><first>Reshmi</first><last>Gopalakrishna Pillai</last></author>
    <author><first>Mike</first><last>Thelwall</last></author>
    <author><first>Constantin</first><last>Orasan</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>266&#8211;272</pages>
    <url>http://aclweb.org/anthology/W18-6239</url>
    <abstract>Detecting stress from social media offers a non-intrusive and inexpensive alternative to traditional tools such as questionnaires or physiological sensors for monitoring the mental state of individuals. This paper introduces a novel framework for finding reasons for stress from tweets, analyzing multiple categories for the first time. Three word-vector based methods are evaluated on collections of tweets about politics or airlines and are found to be more accurate than standard machine learning algorithms.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>gopalakrishnapillai-thelwall-orasan:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6240">
    <title>EmojiGAN: learning emojis distributions with a generative model</title>
    <author><first>Bogdan</first><last>Mazoure</last></author>
    <author><first>Thang</first><last>Doan</last></author>
    <author><first>Saibal</first><last>Ray</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>273&#8211;279</pages>
    <url>http://aclweb.org/anthology/W18-6240</url>
    <abstract>Generative models have recently experienced a surge in popularity due to the development of more efficient training algorithms and increasing computational power. Models such as generative adversarial networks (GANs) have been successfully used in various areas such as computer vision, medical imaging, style transfer and natural language generation. Adversarial nets were recently shown to yield results in the image-to-text task, where, given a set of images, one has to provide their corresponding text descriptions. In this paper, we take a similar approach and propose an image-to-emoji architecture, which is trained on data from social networks and can be used to score a given picture using ideograms. We show empirical results of our algorithm on data obtained from the most influential Instagram accounts.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>mazoure-doan-ray:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6241">
    <title>Identifying Opinion-Topics and Polarity of Parliamentary Debate Motions</title>
    <author><first>Gavin</first><last>Abercrombie</last></author>
    <author><first>Riza Theresa</first><last>Batista-Navarro</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>280&#8211;285</pages>
    <url>http://aclweb.org/anthology/W18-6241</url>
    <abstract>Analysis of the topics mentioned and opinions expressed in parliamentary debate motions&#8211;or proposals&#8211;is difficult for human readers, but necessary for understanding and automatic processing of the content of the subsequent speeches. We present a dataset of debate motions with pre-existing 'policy' labels, and investigate the utility of these labels for simultaneous topic and opinion polarity analysis. For topic detection, we apply one-versus-the-rest supervised topic classification, finding that good performance is achieved in predicting the policy topics, and that textual features derived from the debate titles associated with the motions are particularly indicative of motion topic. We then examine whether the output could also be used to determine the positions taken by proposers towards the different policies by investigating how well humans agree in interpreting the opinion polarities of the motions. Finding very high levels of agreement, we conclude that the policies used can be reliable labels for use in these tasks, and that successful topic detection can therefore provide opinion analysis of the motions 'for free'.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>abercrombie-batistanavarro:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6242">
    <title>Homonym Detection For Humor Recognition In Short Text</title>
    <author><first>Sven</first><last>van den Beukel</last></author>
    <author><first>Lora</first><last>Aroyo</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>286&#8211;291</pages>
    <url>http://aclweb.org/anthology/W18-6242</url>
    <abstract>In this paper, automatic homophone and homograph detection are suggested as new useful features for humor recognition systems. The system combines style features from previous studies on humor recognition in short text with ambiguity-based features. The performance of two potentially useful homograph detection methods is evaluated using crowdsourced annotations as ground truth. Adding homophones and homographs as features to the classifier results in a small but significant improvement over the style features alone. For the task of humor recognition, recall appears to be a more important quality measure than precision. Although the system was designed for humor recognition in one-liners, it also performs well at the classification of longer humorous texts.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>vandenbeukel-aroyo:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6243">
    <title>Emo2Vec: Learning Generalized Emotion Representation by Multi-task Training</title>
    <author><first>Peng</first><last>Xu</last></author>
    <author><first>Andrea</first><last>Madotto</last></author>
    <author><first>Chien-Sheng</first><last>Wu</last></author>
    <author><first>Ji Ho</first><last>Park</last></author>
    <author><first>Pascale</first><last>Fung</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>292&#8211;298</pages>
    <url>http://aclweb.org/anthology/W18-6243</url>
    <abstract>In this paper, we propose Emo2Vec, which encodes emotional semantics into vectors. We train Emo2Vec with multi-task learning on six different emotion-related tasks, including emotion/sentiment analysis, sarcasm classification, stress detection, abusive language classification, insult detection, and personality recognition. Our evaluation shows that Emo2Vec outperforms existing affect-related representations, such as Sentiment-Specific Word Embedding and DeepMoji embeddings, with much smaller training corpora. When concatenated with GloVe, Emo2Vec achieves performance competitive with state-of-the-art results on several tasks using a simple logistic regression classifier. Finally, we visualize the learned vectors, showing that Emo2Vec clusters words with similar emotions together.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>xu-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6244">
    <title>Learning representations for sentiment classification using Multi-task framework</title>
    <author><first>Hardik</first><last>Meisheri</last></author>
    <author><first>Harshad</first><last>Khadilkar</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>299&#8211;308</pages>
    <url>http://aclweb.org/anthology/W18-6244</url>
    <abstract>Most of the existing state-of-the-art sentiment classification techniques involve the use of pre-trained embeddings. This paper postulates a generalized representation that collates training on multiple datasets using a multi-task learning framework. We incorporate publicly available, pre-trained embeddings with Bidirectional LSTMs to develop the multi-task model. We validate the representations on an independent Irony test dataset that can contain several sentiments within each sample, with an arbitrary distribution. Our experiments show a significant improvement in results compared to the available baselines for individual datasets on which independent models are trained. Results also suggest superior performance of the generated representations on the Irony dataset.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>meisheri-khadilkar:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6245">
    <title>Super Characters: A Conversion from Sentiment Classification to Image Classification</title>
    <author><first>Baohua</first><last>Sun</last></author>
    <author><first>Lin</first><last>Yang</last></author>
    <author><first>Patrick</first><last>Dong</last></author>
    <author><first>Wenhan</first><last>Zhang</last></author>
    <author><first>Jason</first><last>Dong</last></author>
    <author><first>Charles</first><last>Young</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>309&#8211;315</pages>
    <url>http://aclweb.org/anthology/W18-6245</url>
    <abstract>We propose a method named Super Characters for sentiment classification. This method converts the sentiment classification problem into an image classification problem by projecting texts into images and then applying CNN models for classification. Text features are extracted automatically from the generated Super Characters images, hence there is no need for any explicit step of embedding the words or characters into numerical vector representations. Experimental results on a large social media corpus show that the Super Characters method consistently outperforms other methods for sentiment classification and topic classification tasks on ten large social media datasets of millions of contents in four different languages, including Chinese, Japanese, Korean and English.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>sun-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6246">
    <title>Learning Comment Controversy Prediction in Web Discussions Using Incidentally Supervised Multi-Task CNNs</title>
    <author><first>Nils</first><last>Rethmeier</last></author>
    <author><first>Marc</first><last>H&#252;bner</last></author>
    <author><first>Leonhard</first><last>Hennig</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>316&#8211;321</pages>
    <url>http://aclweb.org/anthology/W18-6246</url>
    <abstract>Comments on web news contain controversies that manifest as inter-group agreement-conflicts. Tracking such rapidly evolving controversy may be used to ease conflict resolution and author-user interaction.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>rethmeier-hbner-hennig:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6247">
    <title>Words Worth: Verbal Content and Hirability Impressions in YouTube Video Resumes</title>
    <author><first>Skanda</first><last>Muralidhar</last></author>
    <author><first>Laurent</first><last>Nguyen</last></author>
    <author><first>Daniel</first><last>Gatica-Perez</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>322&#8211;327</pages>
    <url>http://aclweb.org/anthology/W18-6247</url>
    <abstract>Automatic hirability prediction from video resumes is gaining increasing attention in both psychology and computing. Most existing works have investigated hirability from the perspective of nonverbal behavior, with verbal content receiving little interest. In this study, we leverage advances in deep-learning-based text representation techniques (like word embedding) in natural language processing to investigate the relationship between verbal content and perceived hirability ratings. To this end, we use 292 conversational video resumes from YouTube, develop a computational framework to automatically extract various representations of verbal content, and evaluate them in a regression task. We obtain a best performance of R2 = 0.23 using GloVe, and R2 = 0.22 using Word2Vec representations, for manually and automatically transcribed texts respectively. Our inference results indicate the feasibility of using deep-learning-based verbal content representation in inferring hirability scores from online conversational video resumes.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>muralidhar-nguyen-gaticaperez:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6248">
    <title>Predicting Adolescents' Educational Track from Chat Messages on Dutch Social Media</title>
    <author><first>Lisa</first><last>Hilte</last></author>
    <author><first>Walter</first><last>Daelemans</last></author>
    <author><first>Reinhild</first><last>Vandekerckhove</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>328&#8211;334</pages>
    <url>http://aclweb.org/anthology/W18-6248</url>
    <abstract>We aim to predict Flemish adolescents' educational track based on their Dutch social media writing. We distinguish between the three main types of Belgian secondary education: General (theory-oriented), Vocational (practice-oriented), and Technical Secondary Education (hybrid). The best results are obtained with a Naive Bayes model, i.e. an F-score of 0.68 (std. dev. 0.05) in 10-fold cross-validation experiments on the train data and an F-score of 0.60 on unseen data. Many of the most informative features are character n-grams containing specific occurrences of chatspeak phenomena such as emoticons. While the detection of the most theory- and practice-oriented educational tracks seems to be a relatively easy task, the hybrid Technical level appears to be much harder to capture based on online writing style, as expected.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>hilte-daelemans-vandekerckhove:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6249">
    <title>Arabizi sentiment analysis based on transliteration and automatic corpus annotation</title>
    <author><first>Imane</first><last>Guellil</last></author>
    <author><first>Ahsan</first><last>Adeel</last></author>
    <author><first>Faical</first><last>Azouaou</last></author>
    <author><first>Fodil</first><last>Benali</last></author>
    <author><first>Ala-eddine</first><last>Hachani</last></author>
    <author><first>Amir</first><last>Hussain</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>335&#8211;341</pages>
    <url>http://aclweb.org/anthology/W18-6249</url>
    <abstract>Arabizi is a form of writing Arabic text which relies on Latin letters, numerals and punctuation rather than Arabic letters. In the literature, the difficulties associated with Arabizi sentiment analysis have been underestimated, principally due to the complexity of Arabizi. In this paper, we present an approach to automatically classify sentiments of Arabizi messages as positive or negative. In the proposed approach, Arabizi messages are first transliterated into Arabic. Afterwards, we automatically classify the sentiment of the transliterated corpus using an automatically annotated corpus. For corpus validation, shallow machine learning algorithms such as Support Vector Machine (SVM) and Naive Bayes (NB) are used. Simulation results demonstrate that the NB algorithm outperforms all others. The highest achieved F1-scores are up to 78% and 76% for the manually and automatically transliterated datasets respectively. Ongoing work is aimed at improving the transliterator module and the annotated sentiment dataset.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>guellil-EtAl:2018:WASSA2018</bibkey>
  </paper>

  <paper id="6250">
    <title>UBC-NLP at IEST 2018: Learning Implicit Emotion With an Ensemble of Language Models</title>
    <author><first>Hassan</first><last>Alhuzali</last></author>
    <author><first>Mohamed</first><last>Elaraby</last></author>
    <author><first>Muhammad</first><last>Abdul-Mageed</last></author>
    <booktitle>Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</booktitle>
    <month>October</month>
    <year>2018</year>
    <address>Brussels, Belgium</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>342&#8211;347</pages>
    <url>http://aclweb.org/anthology/W18-6250</url>
    <abstract>We describe the UBC-NLP contribution to IEST-2018, focused on learning implicit emotion in Twitter data. Among the 30 participating teams, our system ranked 4th (with 69.3% F-score). Post competition, we were able to score slightly higher than the 3rd-ranking system (reaching 70.7%). Our system is trained on top of a pre-trained language model (LM), fine-tuned on the data provided by the task organizers. Our best results are acquired by an average of an ensemble of language models. We also offer an analysis of system performance and the impact of training data size on the task. For example, we show that training our best model for only one epoch with &lt;40% of the data enables better performance than the baseline reported by Klinger et al. (2018) for the task.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>alhuzali-elaraby-abdulmageed:2018:WASSA2018</bibkey>
  </paper>

</volume>

