<?xml version="1.0" encoding="UTF-8" ?>
<volume id="W16">
  <paper id="5000">
    <title>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</title>
    <editor>Eduardo Blanco</editor>
    <editor>Roser Morante</editor>
    <editor>Roser Saur&#237;</editor>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <url>http://aclweb.org/anthology/W16-50</url>
    <bibtype>book</bibtype>
    <bibkey>ExProM:2016</bibkey>
  </paper>

  <paper id="5001">
    <title>‘Who would have thought of that!’: A Hierarchical Topic Model for Extraction of Sarcasm-prevalent Topics and Sarcasm Detection</title>
    <author><first>Aditya</first><last>Joshi</last></author>
    <author><first>Prayas</first><last>Jain</last></author>
    <author><first>Pushpak</first><last>Bhattacharyya</last></author>
    <author><first>Mark</first><last>Carman</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>1&#8211;10</pages>
    <url>http://aclweb.org/anthology/W16-5001</url>
    <abstract>Topic models have been reported to be beneficial for aspect-based sentiment
	analysis. To the best of our knowledge, this paper reports the first topic
	model for sarcasm detection. Designed on the basis of the intuition that
	sarcastic tweets are likely to contain a mixture of words of both sentiments,
	as opposed to tweets with literal sentiment (either positive or negative), our
	hierarchical topic model discovers sarcasm-prevalent topics and topic-level
	sentiment. Using a dataset of tweets labeled using hashtags, the model
	estimates topic-level and sentiment-level distributions. Our evaluation shows
	that topics such as `work', `gun laws' and `weather' are sarcasm-prevalent.
	Our model is also able to discover the mixture of sentiment-bearing words that
	exist in a text with a given sentiment-related label. Finally, we apply our
	model to predict sarcasm in tweets, outperforming two prior approaches based
	on statistical classifiers with specific features by around 25%.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>joshi-EtAl:2016:ExProM</bibkey>
  </paper>

  <paper id="5002">
    <title>Detecting Uncertainty Cues in Hungarian Social Media Texts</title>
    <author><first>Veronika</first><last>Vincze</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>11&#8211;21</pages>
    <url>http://aclweb.org/anthology/W16-5002</url>
    <abstract>In this paper, we aim to identify uncertainty cues in Hungarian social media
	texts. We present our machine learning-based uncertainty detector, which
	relies on a rich feature set including lexical, morphological, syntactic,
	semantic and discourse-based features, and we evaluate our system on a small
	set of manually annotated social media texts. We also carry out cross-domain
	and domain adaptation experiments using an annotated corpus of standard
	Hungarian texts and show that domain differences significantly affect machine
	learning. Furthermore, we argue that differences among uncertainty cue types
	may also affect the efficiency of uncertainty detection.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>vincze:2016:ExProM</bibkey>
  </paper>

  <paper id="5003">
    <title>Detecting Level of Belief in Chinese and Spanish</title>
    <author><first>Juan Pablo</first><last>Colomer</last></author>
    <author><first>Keyu</first><last>Lai</last></author>
    <author><first>Owen</first><last>Rambow</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>22&#8211;30</pages>
    <url>http://aclweb.org/anthology/W16-5003</url>
    <abstract>There has been extensive work on detecting the level of committed belief
	  (also known as ``factuality'') that an author is expressing towards the
	  propositions in his or her utterances.  Previous work on English has
	  shown that this can be framed as a sequence tagging task.  In this
	  paper, we investigate the same task for Chinese and Spanish, two
	  languages that differ greatly from English and from each other.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>colomer-lai-rambow:2016:ExProM</bibkey>
  </paper>

  <paper id="5004">
    <title>Contradiction Detection for Rumorous Claims</title>
    <author><first>Piroska</first><last>Lendvai</last></author>
    <author><first>Uwe</first><last>Reichel</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>31&#8211;40</pages>
    <url>http://aclweb.org/anthology/W16-5004</url>
    <abstract>The utilization of social media material in journalistic workflows is
	increasing, demanding automated methods for the identification of mis- and
	disinformation. Since textual contradiction across social media posts can be a
	signal of rumorousness, we seek to model how claims in Twitter posts are being
	textually contradicted. We identify two different contexts in which
	contradiction emerges: its broader form can be observed across independently
	posted tweets and its more specific form in threaded conversations. We define
	how the two scenarios differ in terms of central elements of argumentation:
	claims and conversation structure. We design and evaluate models for the two
	scenarios uniformly as 3-way Recognizing Textual Entailment tasks in order to
	represent claims and conversation structure implicitly in a generic inference
	model, while previous studies used explicit or no representation of these
	properties. To address noisy text, our classifiers use simple similarity
	features derived from the string and part-of-speech level. Corpus statistics
	reveal distribution differences for these features in contradictory as opposed
	to non-contradictory tweet relations, and the classifiers yield
	state-of-the-art performance.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>lendvai-reichel:2016:ExProM</bibkey>
  </paper>

  <paper id="5005">
    <title>Negation and Modality in Machine Translation</title>
    <author><first>Preslav</first><last>Nakov</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>41</pages>
    <url>http://aclweb.org/anthology/W16-5005</url>
    <abstract>Negation and modality are two important grammatical phenomena that have
	attracted recent research attention, as they can contribute to
	extra-propositional aspects of meaning, along with factuality, attribution,
	irony and sarcasm. These aspects go beyond analyses such as semantic role labeling,
	and modeling them is important as a step towards a higher level of language
	understanding, which is needed for practical applications such as sentiment
	analysis. In this talk, I will go beyond English, and I will discuss how
	negation and modality are expressed in other languages. I will also go beyond
	sentiment analysis and I will present some challenges that the two phenomena
	pose for machine translation (MT). In particular, I will demonstrate how
	contemporary MT systems fail on them, and I will discuss some possible
	solutions.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>nakov:2016:ExProM</bibkey>
  </paper>

  <paper id="5006">
    <title>Problematic Cases in the Annotation of Negation in Spanish</title>
    <author><first>Salud Mar&#237;a</first><last>Jim&#233;nez-Zafra</last></author>
    <author><first>Maite</first><last>Martin</last></author>
    <author><first>L. Alfonso</first><last>Urena Lopez</last></author>
    <author><first>Toni</first><last>Marti</last></author>
    <author><first>Mariona</first><last>Taul&#233;</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>42&#8211;48</pages>
    <url>http://aclweb.org/anthology/W16-5006</url>
    <abstract>This paper presents the main sources of disagreement found during the
	annotation of the Spanish SFU Review Corpus with negation (SFU ReviewSP -NEG).
	Negation detection is a challenge in most NLP-related tasks, so the
	availability of corpora annotated with this phenomenon is essential for
	making progress in this area. A thorough analysis of the problems
	encountered during annotation can help in the study of this phenomenon.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>jimenezzafra-EtAl:2016:ExProM</bibkey>
  </paper>

  <paper id="5007">
    <title>Building a Dictionary of Affixal Negations</title>
    <author><first>Chantal</first><last>van Son</last></author>
    <author><first>Emiel</first><last>van Miltenburg</last></author>
    <author><first>Roser</first><last>Morante</last></author>
    <booktitle>Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</booktitle>
    <month>December</month>
    <year>2016</year>
    <address>Osaka, Japan</address>
    <publisher>The COLING 2016 Organizing Committee</publisher>
    <pages>49&#8211;56</pages>
    <url>http://aclweb.org/anthology/W16-5007</url>
    <abstract>This paper discusses the need for a dictionary of affixal negations and regular
	antonyms to facilitate their automatic detection in text. Without such a
	dictionary, affixal negations are very difficult to detect. In addition, we
	show that the set of affixal negations is not homogeneous, and that different
	NLP tasks may require different subsets. A dictionary can store the subtypes of
	affixal negations, making it possible to select a certain subset or to make
	inferences on the basis of these subtypes. We take a first step towards
	creating a negation dictionary by annotating all direct antonym pairs in WordNet
	using an existing typology of affixal negations. By highlighting some of the
	issues that were encountered in this annotation experiment, we hope to provide
	some insights into the necessary steps of building a negation dictionary.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>vanson-vanmiltenburg-morante:2016:ExProM</bibkey>
  </paper>

</volume>