<?xml version="1.0" encoding="UTF-8" ?>
<volume id="W17">
  <paper id="0900">
    <title>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</title>
    <editor>Michael Roth</editor>
    <editor>Nasrin Mostafazadeh</editor>
    <editor>Nathanael Chambers</editor>
    <editor>Annie Louis</editor>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <url>http://aclweb.org/anthology/W/W17/W17-09</url>
    <bibtype>book</bibtype>
    <bibkey>LSDSem:2017</bibkey>
  </paper>

  <paper id="0901">
    <title>Inducing Script Structure from Crowdsourced Event Descriptions via Semi-Supervised Clustering</title>
    <author><first>Lilian</first><last>Wanzare</last></author>
    <author><first>Alessandra</first><last>Zarcone</last></author>
    <author><first>Stefan</first><last>Thater</last></author>
    <author><first>Manfred</first><last>Pinkal</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>1&#8211;11</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0901</url>
    <abstract>We present a semi-supervised clustering approach to induce script structure
	from crowdsourced descriptions of event sequences by grouping event
	descriptions into paraphrase sets (representing event types) and inducing their
	temporal order. Our approach exploits semantic and positional similarity and
	allows for flexible event order, thus overcoming the rigidity of previous
	approaches. We incorporate crowdsourced alignments as prior knowledge and show
	that exploiting a small number of alignments results in a substantial
	improvement in cluster quality over state-of-the-art models and provides an
	appropriate basis for the induction of temporal order. We also present a
	coverage study demonstrating the scalability of our approach.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wanzare-EtAl:2017:LSDSem</bibkey>
  </paper>

  <paper id="0902">
    <title>A Consolidated Open Knowledge Representation for Multiple Texts</title>
    <author><first>Rachel</first><last>Wities</last></author>
    <author><first>Vered</first><last>Shwartz</last></author>
    <author><first>Gabriel</first><last>Stanovsky</last></author>
    <author><first>Meni</first><last>Adler</last></author>
    <author><first>Ori</first><last>Shapira</last></author>
    <author><first>Shyam</first><last>Upadhyay</last></author>
    <author><first>Dan</first><last>Roth</last></author>
    <author><first>Eugenio</first><last>Mart&#237;nez-C&#225;mara</last></author>
    <author><first>Iryna</first><last>Gurevych</last></author>
    <author><first>Ido</first><last>Dagan</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>12&#8211;24</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0902</url>
    <abstract>We propose to move from Open Information Extraction (OIE) ahead to Open
	Knowledge Representation (OKR), aiming to represent information conveyed
	jointly in a set of texts in an open text-based manner. We do so by
	consolidating OIE extractions using entity and predicate coreference, while
	modeling information containment between coreferring elements via lexical
	entailment. We suggest that generating OKR structures can be a useful step in
	the NLP pipeline, to give semantic applications an easy handle on consolidated
	information across multiple texts.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wities-EtAl:2017:LSDSem</bibkey>
  </paper>

  <paper id="0903">
    <title>Event-Related Features in Feedforward Neural Networks Contribute to Identifying Causal Relations in Discourse</title>
    <author><first>Edoardo Maria</first><last>Ponti</last></author>
    <author><first>Anna</first><last>Korhonen</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>25&#8211;30</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0903</url>
    <abstract>Causal relations play a key role in information extraction and reasoning. Most
	of the time, their expression is ambiguous or implicit, i.e. without signals
	in the text. This makes their identification challenging. We aim to improve
	their identification by implementing a Feedforward Neural Network with a novel
	set of features for this task. In particular, these are based on the position
	of event mentions and the semantics of events and participants. The resulting
	classifier outperforms strong baselines on two datasets (the Penn Discourse
	Treebank and the CSTNews corpus) annotated with different schemes and
	containing examples in two languages, English and Portuguese. This result
	demonstrates the importance of events for identifying discourse relations.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>ponti-korhonen:2017:LSDSem</bibkey>
  </paper>

  <paper id="0904">
    <title>Stance Detection in Facebook Posts of a German Right-wing Party</title>
    <author><first>Manfred</first><last>Klenner</last></author>
    <author><first>Don</first><last>Tuggener</last></author>
    <author><first>Simon</first><last>Clematide</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>31&#8211;40</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0904</url>
    <abstract>We argue that detecting stance requires more than the explicit attitudes
	of the stance holder towards the targets. It is the whole narrative
	the writer drafts that counts, including the way he hypostasizes the discourse
	referents: as benefactors or villains, as victims or beneficiaries.
	We exemplify the ability of our system to identify targets and detect 
	the writer's stance towards them on the basis of about 100 000 Facebook posts
	of a German right-wing party.
	A reader and writer model on top of our verb-based attitude extraction
	directly reveals stance conflicts.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>klenner-tuggener-clematide:2017:LSDSem</bibkey>
  </paper>

  <paper id="0905">
    <title>Behind the Scenes of an Evolving Event Cloze Test</title>
    <author><first>Nathanael</first><last>Chambers</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>41&#8211;45</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0905</url>
    <abstract>This paper analyzes the narrative event cloze test and its recent evolution.
	The test removes one event from a document's chain of events, and systems
	predict the missing event.
	Originally proposed to evaluate learned knowledge of event scenarios (e.g.,
	scripts and frames), most recent work now builds ngram-like language models
	(LM) to beat the test.
	This paper argues that the test has slowly and unknowingly been altered to
	accommodate LMs.
	Most notably, tests are auto-generated rather than created by hand, and no
	effort is made to include core script events.
	Recent work is not clear on evaluation goals and contains contradictory
	results.
	We implement several models, and show that the test's bias to high-frequency
	events explains the inconsistencies.
	We conclude with recommendations on how to return to the test's original
	intent, and offer brief suggestions on a path forward.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>chambers:2017:LSDSem</bibkey>
  </paper>

  <paper id="0906">
    <title>LSDSem 2017 Shared Task: The Story Cloze Test</title>
    <author><first>Nasrin</first><last>Mostafazadeh</last></author>
    <author><first>Michael</first><last>Roth</last></author>
    <author><first>Annie</first><last>Louis</last></author>
    <author><first>Nathanael</first><last>Chambers</last></author>
    <author><first>James</first><last>Allen</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>46&#8211;51</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0906</url>
    <abstract>The LSDSem'17 shared task is the Story Cloze Test, a new evaluation for story
	understanding and script learning. This test provides a system with a
	four-sentence story and two possible endings, and the system must choose the
	correct ending to the story. Successful narrative understanding (getting closer
	to human performance of 100%) requires systems to link various levels of
	semantics to commonsense knowledge. A total of eight systems participated in
	the shared task, with a variety of approaches including.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>mostafazadeh-EtAl:2017:LSDSem</bibkey>
  </paper>

  <paper id="0907">
    <title>Story Cloze Task: UW NLP System</title>
    <author><first>Roy</first><last>Schwartz</last></author>
    <author><first>Maarten</first><last>Sap</last></author>
    <author><first>Ioannis</first><last>Konstas</last></author>
    <author><first>Leila</first><last>Zilles</last></author>
    <author><first>Yejin</first><last>Choi</last></author>
    <author><first>Noah A.</first><last>Smith</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>52&#8211;55</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0907</url>
    <abstract>This paper describes University of Washington NLP&#8217;s submission for the
	Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem
	2017) shared task &#8211; the Story Cloze Task. Our system is a linear classifier
	with a variety of features, including both the scores of a neural language
	model and style features. We report 75.2% accuracy on the task. A further
	discussion of our results can be found in Schwartz et al. (2017).</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>schwartz-EtAl:2017:LSDSem</bibkey>
  </paper>

  <paper id="0908">
    <title>LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test</title>
    <author><first>Michael</first><last>Bugert</last></author>
    <author><first>Yevgeniy</first><last>Puzikov</last></author>
    <author><first>Andreas</first><last>R&#252;ckl&#233;</last></author>
    <author><first>Judith</first><last>Eckle-Kohler</last></author>
    <author><first>Teresa</first><last>Martin</last></author>
    <author><first>Eugenio</first><last>Mart&#237;nez-C&#225;mara</last></author>
    <author><first>Daniil</first><last>Sorokin</last></author>
    <author><first>Maxime</first><last>Peyrard</last></author>
    <author><first>Iryna</first><last>Gurevych</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>56&#8211;61</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0908</url>
    <abstract>The Story Cloze test is a recent effort in providing a common test scenario for
	text understanding systems.
	As part of the LSDSem 2017 shared task, we present a system based on a deep
	learning architecture combined with a rich set of manually-crafted linguistic
	features. The system outperforms all known baselines for the task, suggesting
	that the chosen approach is promising. We additionally present two methods for
	generating further training data based on stories from the ROCStories corpus.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>bugert-EtAl:2017:LSDSem</bibkey>
  </paper>

  <paper id="0909">
    <title>Sentiment Analysis and Lexical Cohesion for the Story Cloze Task</title>
    <author><first>Michael</first><last>Flor</last></author>
    <author><first>Swapna</first><last>Somasundaran</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>62&#8211;67</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0909</url>
    <abstract>We present two NLP components for the Story Cloze Task &#8211; dictionary-based
	sentiment analysis and lexical cohesion. While previous research found no
	contribution from sentiment analysis to the accuracy on this task, we
	demonstrate that sentiment is an important aspect. We describe a new approach,
	using a rule that estimates sentiment congruence in a story. Our
	sentiment-based system achieves strong results on this task. Our lexical
	cohesion system achieves accuracy comparable to previously published baseline
	results. A combination of the two systems achieves better accuracy than
	published baselines. We argue that sentiment analysis should be considered an
	integral part of narrative comprehension.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>flor-somasundaran:2017:LSDSem</bibkey>
  </paper>

  <paper id="0910">
    <title>Resource-Lean Modeling of Coherence in Commonsense Stories</title>
    <author><first>Niko</first><last>Schenk</last></author>
    <author><first>Christian</first><last>Chiarcos</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>68&#8211;73</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0910</url>
    <abstract>We present a resource-lean neural recognizer for modeling coherence in
	commonsense stories. Our lightweight system is inspired by successful attempts
	at modeling discourse relations and stands out due to its simplicity and easy
	optimization compared to prior approaches to narrative script learning. 
	We evaluate our approach on the Story Cloze Test, demonstrating an absolute
	improvement in accuracy of 4.7% over state-of-the-art implementations.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>schenk-chiarcos:2017:LSDSem</bibkey>
  </paper>

  <paper id="0911">
    <title>An RNN-based Binary Classifier for the Story Cloze Test</title>
    <author><first>Melissa</first><last>Roemmele</last></author>
    <author><first>Sosuke</first><last>Kobayashi</last></author>
    <author><first>Naoya</first><last>Inoue</last></author>
    <author><first>Andrew</first><last>Gordon</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>74&#8211;80</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0911</url>
    <abstract>The Story Cloze Test consists of choosing a sentence that best completes a
	story given two choices. In this paper we present a system that performs this
	task using a supervised binary classifier on top of a recurrent neural network
	to predict the probability that a given story ending is correct. The classifier
	is trained to distinguish correct story endings given in the training data from
	incorrect ones that we artificially generate. Our experiments evaluate
	different methods for generating these negative examples, as well as different
	embedding-based representations of the stories. Our best result obtains 67.2%
	accuracy on the test set, outperforming the existing top baseline of 58.5%.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>roemmele-EtAl:2017:LSDSem</bibkey>
  </paper>

  <paper id="0912">
    <title>IIT (BHU): System Description for LSDSem'17 Shared Task</title>
    <author><first>Pranav</first><last>Goel</last></author>
    <author><first>Anil Kumar</first><last>Singh</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>81&#8211;86</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0912</url>
    <abstract>This paper describes an ensemble system submitted as part of the LSDSem Shared
	Task 2017 - the Story Cloze Test. The main conclusion from our results is that
	an approach based on semantic similarity alone may not be enough for this task.
	We test various approaches and compare them with two ensemble systems. One is
	based on voting and the other on a logistic regression-based classifier. Our
	final system is able to outperform the previous state of the art for the Story
	Cloze Test. Another very interesting observation is the performance of the
	sentiment-based approach, which works almost as well on its own as our final
	ensemble system.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>goel-singh:2017:LSDSem</bibkey>
  </paper>

  <paper id="0913">
    <title>Story Cloze Ending Selection Baselines and Data Examination</title>
    <author><first>Todor</first><last>Mihaylov</last></author>
    <author><first>Anette</first><last>Frank</last></author>
    <booktitle>Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>87&#8211;92</pages>
    <url>http://aclweb.org/anthology/W/W17/W17-0913</url>
    <abstract>This paper describes two supervised baseline systems for the Story Cloze Test
	Shared Task (Mostafazadeh et al., 2016a). We first build a classifier using
	features based on word embeddings and semantic similarity computation. We
	further implement a neural LSTM system with different encoding strategies that
	try to model the relation between the story and the
	provided endings. Our experiments show that a model using representation
	features based on average word embedding vectors over the given story words and
	the candidate ending sentences words, joint with similarity features between
	the story and candidate ending representations performed better than the neural
	models. Our best model based on achieves an accuracy
	of 72.42, ranking 3rd in the official evaluation.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>mihaylov-frank:2017:LSDSem</bibkey>
  </paper>

</volume>

