<?xml version="1.0" encoding="UTF-8" ?>
<volume id="W17">
  <paper id="1900">
    <title>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</title>
    <editor>Jose Camacho-Collados</editor>
    <editor>Mohammad Taher Pilehvar</editor>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <url>http://www.aclweb.org/anthology/W17-19</url>
    <bibtype>book</bibtype>
    <bibkey>SENSE2017:2017</bibkey>
  </paper>

  <paper id="1901">
    <title>Compositional Semantics using Feature-Based Models from WordNet</title>
    <author><first>Pablo</first><last>Gamallo</last></author>
    <author><first>Mart&#237;n</first><last>Pereira-Fari&#241;a</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>1&#8211;11</pages>
    <url>http://www.aclweb.org/anthology/W17-1901</url>
    <abstract>This article describes a method to build semantic representations of composite
	expressions in a compositional way by using WordNet relations to represent the
	meaning of words. The meaning of a target word is modelled as a vector in which
	its semantically related words are assigned weights according to both the type
	of the relationship and the distance to the target word. Word vectors are
	compositionally combined by syntactic dependencies. Each syntactic dependency
	triggers two complementary compositional functions: the head function and the
	dependent function. The experiments show that the proposed compositional method
	outperforms the state-of-the-art for both intransitive subject-verb and
	transitive subject-verb-object constructions.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>gamallo-pereirafarina:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1902">
    <title>Automated WordNet Construction Using Word Embeddings</title>
    <author><first>Mikhail</first><last>Khodak</last></author>
    <author><first>Andrej</first><last>Risteski</last></author>
    <author><first>Christiane</first><last>Fellbaum</last></author>
    <author><first>Sanjeev</first><last>Arora</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>12&#8211;23</pages>
    <url>http://www.aclweb.org/anthology/W17-1902</url>
    <abstract>We present a fully unsupervised method for automated construction of WordNets
	based upon recent advances in distributional representations of sentences and
	word-senses combined with readily available machine translation tools. The
	approach requires very few linguistic resources and is thus extensible to
	multiple target languages. To evaluate our method we construct two 600-word
	test sets for word-to-synset matching in French and Russian using native
	speakers and evaluate the performance of our method along with several other
	recent approaches. Our method exceeds the best language-specific and
	multi-lingual automated WordNets in F-score for both languages. The databases
	we construct for French and Russian, both languages without large publicly
	available manually constructed WordNets, will be publicly released along with
	the test sets.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>khodak-EtAl:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1903">
    <title>Improving Verb Metaphor Detection by Propagating Abstractness to Words, Phrases and Individual Senses</title>
    <author><first>Maximilian</first><last>K&#246;per</last></author>
    <author><first>Sabine</first><last>Schulte im Walde</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>24&#8211;30</pages>
    <url>http://www.aclweb.org/anthology/W17-1903</url>
    <abstract>Abstract words refer to things that cannot be seen, heard, felt, smelled, or
	tasted, as opposed to concrete words. Among other applications, the degree of
	abstractness has been shown to be useful information for metaphor detection.
	Our contributions to this topic are as follows: (i) we compare supervised
	techniques to learn and extend abstractness ratings for huge vocabularies;
	(ii) we learn and investigate norms for larger units by propagating
	abstractness to verb-noun pairs, which leads to better metaphor detection;
	(iii) we overcome the limitation of learning a single rating per word and show
	that multi-sense abstractness ratings are potentially useful for metaphor
	detection. Finally, with this paper we publish automatically created
	abstractness norms for 3 million English words and multi-word expressions, as
	well as automatically created sense-specific abstractness ratings.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>koper-schulteimwalde:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1904">
    <title>Improving Clinical Diagnosis Inference through Integration of Structured and Unstructured Knowledge</title>
    <author><first>Yuan</first><last>Ling</last></author>
    <author><first>Yuan</first><last>An</last></author>
    <author><first>Sadid</first><last>Hasan</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>31&#8211;36</pages>
    <url>http://www.aclweb.org/anthology/W17-1904</url>
    <abstract>This paper presents a novel approach to the task of automatically inferring the
	most probable diagnosis from a given clinical narrative. Structured Knowledge
	Bases (KBs) can be useful for such complex tasks but are not sufficient. Hence, we
	leverage a vast amount of unstructured free text to integrate with structured
	KBs. The key innovative ideas include building a concept graph from both
	structured and unstructured knowledge sources and ranking the diagnosis
	concepts using the enhanced word embedding vectors learned from integrated
	sources. Experiments on the TREC CDS and HumanDx datasets showed that our
	methods improved the results of clinical diagnosis inference.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>ling-an-hasan:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1905">
    <title>Classifying Lexical-semantic Relationships by Exploiting Sense/Concept Representations</title>
    <author><first>Kentaro</first><last>Kanada</last></author>
    <author><first>Tetsunori</first><last>Kobayashi</last></author>
    <author><first>Yoshihiko</first><last>Hayashi</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>37&#8211;46</pages>
    <url>http://www.aclweb.org/anthology/W17-1905</url>
    <abstract>This paper proposes a method for classifying the type of lexical-semantic
	relation between a given pair of words. Given an inventory of target
	relationships, this task can be seen as a multi-class classification problem.
	We train a supervised classifier by assuming: (1) a specific type of
	lexical-semantic relation between a pair of words would be indicated by a
	carefully designed set of relation-specific similarities associated with the
	words; and (2) the similarities could be effectively computed by &#x201c;sense
	representations&#x201d; (sense/concept embeddings). The experimental results show
	that the proposed method clearly outperforms an existing state-of-the-art
	method that does not utilize sense/concept embeddings, thereby demonstrating
	the effectiveness of the sense representations.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>kanada-kobayashi-hayashi:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1906">
    <title>Supervised and unsupervised approaches to measuring usage similarity</title>
    <author><first>Milton</first><last>King</last></author>
    <author><first>Paul</first><last>Cook</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>47&#8211;52</pages>
    <url>http://www.aclweb.org/anthology/W17-1906</url>
    <abstract>Usage similarity (USim) is an approach to determining word meaning in context
	that does not rely on a sense inventory. Instead, pairs of usages of a target
	lemma are rated on a scale. In this paper we propose unsupervised approaches to
	USim based on embeddings for words, contexts, and sentences, and achieve
	state-of-the-art results over two USim datasets. We further consider supervised
	approaches to USim, and find that although they outperform unsupervised
	approaches, they are unable to generalize to lemmas that are unseen in the
	training data.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>king-cook:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1907">
    <title>Lexical Disambiguation of Igbo using Diacritic Restoration</title>
    <author><first>Ignatius</first><last>Ezeani</last></author>
    <author><first>Mark</first><last>Hepple</last></author>
    <author><first>Ikechukwu</first><last>Onyenwe</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>53&#8211;60</pages>
    <url>http://www.aclweb.org/anthology/W17-1907</url>
    <abstract>Properly written texts in Igbo, a low-resource African language, are rich in
	both orthographic and tonal diacritics. Diacritics are essential in capturing
	the distinctions in pronunciation and meaning of words, as well as in lexical
	disambiguation. Unfortunately, most electronic texts in diacritic languages are
	written without diacritics. This makes diacritic restoration a necessary step
	in corpus building and language processing tasks for languages with diacritics.
	In our previous work, we built some n-gram models with simple smoothing
	techniques based on a closed-world assumption. However, as a classification
	task, diacritic restoration is well suited to machine learning and will be more
	generalisable with it. This paper, therefore, presents a more standard approach
	to dealing with the task, which involves the application of machine learning
	algorithms.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>ezeani-hepple-onyenwe:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1908">
    <title>Creating and Validating Multilingual Semantic Representations for Six Languages: Expert versus Non-Expert Crowds</title>
    <author><first>Mahmoud</first><last>El-Haj</last></author>
    <author><first>Paul</first><last>Rayson</last></author>
    <author><first>Scott</first><last>Piao</last></author>
    <author><first>Stephen</first><last>Wattam</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>61&#8211;71</pages>
    <url>http://www.aclweb.org/anthology/W17-1908</url>
    <abstract>Creating high-quality wide-coverage multilingual semantic lexicons to support
	knowledge-based approaches is a challenging, time-consuming manual task. This
	has traditionally been performed by linguistic experts: a slow and expensive
	process. We present an experiment in which we adapt and evaluate crowdsourcing
	methods employing native speakers to generate a list of coarse-grained senses
	under a common multilingual semantic taxonomy for sets of words in six
	languages. 451 non-experts (including 427 Mechanical Turk workers) and 15
	expert participants semantically annotated 250 words manually for Arabic,
	Chinese, English, Italian, Portuguese and Urdu lexicons. In order to avoid
	erroneous (spam) crowdsourced results, we used a novel task-specific two-phase
	filtering process where users were asked to identify synonyms in the target
	language and remove erroneous senses.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>elhaj-EtAl:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1909">
    <title>Using Linked Disambiguated Distributional Networks for Word Sense Disambiguation</title>
    <author><first>Alexander</first><last>Panchenko</last></author>
    <author><first>Stefano</first><last>Faralli</last></author>
    <author><first>Simone Paolo</first><last>Ponzetto</last></author>
    <author><first>Chris</first><last>Biemann</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>72&#8211;78</pages>
    <url>http://www.aclweb.org/anthology/W17-1909</url>
    <abstract>We introduce a new method for unsupervised knowledge-based word sense
	disambiguation (WSD) based on a resource that links two types of sense-aware
	lexical networks: one is induced from a corpus using distributional semantics,
	the other is manually constructed. The combination of two networks reduces the
	sparsity of sense representations used for WSD. We evaluate these enriched
	representations within two lexical sample sense disambiguation benchmarks. Our
	results indicate that (1) features extracted from the corpus-based resource
	help to significantly outperform a model based solely on the lexical resource;
	(2) our method achieves results comparable to or better than four state-of-the-art
	unsupervised knowledge-based WSD systems including three hybrid systems that
	also rely on text corpora. In contrast to these hybrid methods, our approach
	does not require access to web search engines, texts mapped to a sense
	inventory, or machine translation systems.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>panchenko-EtAl:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1910">
    <title>One Representation per Word - Does it make Sense for Composition?</title>
    <author><first>Thomas</first><last>Kober</last></author>
    <author><first>Julie</first><last>Weeds</last></author>
    <author><first>John</first><last>Wilkie</last></author>
    <author><first>Jeremy</first><last>Reffin</last></author>
    <author><first>David</first><last>Weir</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>79&#8211;90</pages>
    <url>http://www.aclweb.org/anthology/W17-1910</url>
    <abstract>In this paper, we investigate whether an a priori disambiguation of word senses
	is strictly necessary or whether the meaning of a word in context can be
	disambiguated through composition alone. We evaluate the performance of
	off-the-shelf single-vector and multi-sense vector models on a benchmark phrase
	similarity task and a novel task for word-sense discrimination. We find that
	single-sense vector models perform as well as or better than multi-sense vector
	models despite arguably less clean elementary representations. Our findings
	furthermore show that simple composition functions such as pointwise addition
	are able to recover sense specific information from a single-sense vector model
	remarkably well.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>kober-EtAl:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1911">
    <title>Elucidating Conceptual Properties from Word Embeddings</title>
    <author><first>Kyoung-Rok</first><last>Jang</last></author>
    <author><first>Sung-Hyon</first><last>Myaeng</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>91&#8211;95</pages>
    <url>http://www.aclweb.org/anthology/W17-1911</url>
    <abstract>In this paper, we introduce a method of identifying the components (i.e.,
	dimensions) of word embeddings that strongly signify properties of a word. By
	elucidating such properties hidden in word embeddings, we could make word
	embeddings more interpretable, and could also perform property-based meaning
	comparison. With this capability, we can answer questions like &#x201c;To what
	degree does a given word have the property cuteness?&#x201d; or &#x201c;In what
	perspective are two words similar?&#x201d; We verify our method by examining how
	the strength of property-signifying components correlates with the degree of
	prototypicality of a target word.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>jang-myaeng:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1912">
    <title>TTCSe: a Vectorial Resource for Computing Conceptual Similarity</title>
    <author><first>Enrico</first><last>Mensa</last></author>
    <author><first>Daniele P.</first><last>Radicioni</last></author>
    <author><first>Antonio</first><last>Lieto</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>96&#8211;101</pages>
    <url>http://www.aclweb.org/anthology/W17-1912</url>
    <abstract>In this paper we introduce the TTCSe, a linguistic resource that relies on
	BabelNet, NASARI and ConceptNet and that has been used to compute the
	conceptual similarity between concept pairs. The conceptual representation
	herein provides uniform access to concepts based on BabelNet synset IDs, and
	consists of a vector-based semantic representation which is compliant with
	Conceptual Spaces, a geometric framework for common-sense knowledge
	representation and reasoning. The TTCSe has been evaluated in a preliminary
	experiment on a conceptual similarity task.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>mensa-radicioni-lieto:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1913">
    <title>Measuring the Italian-English lexical gap for action verbs and its impact on translation</title>
    <author><first>Lorenzo</first><last>Gregori</last></author>
    <author><first>Alessandro</first><last>Panunzi</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>102&#8211;109</pages>
    <url>http://www.aclweb.org/anthology/W17-1913</url>
    <abstract>This paper describes a method to measure the lexical gap of action verbs in
	Italian and English by using the IMAGACT ontology of action. The fine-grained
	categorization of action concepts in the data source allowed us to obtain a
	wide overview of the relation between concepts in the two languages. The
	calculated lexical gap for both English and Italian is about 30% of the action
	concepts, much higher than previous results. Beyond these general numbers, a
	deeper analysis has been performed in order to evaluate the impact that
	lexical gaps can have on translation. In particular, a distinction has been
	made between the cases in which the presence of a lexical gap affects
	translation correctness and completeness at a semantic level. The results
	highlight a high percentage of concepts that can be considered hard to
	translate (about 18% from English to Italian and 20% from Italian to English)
	and confirm that action verbs are a critical lexical class for translation
	tasks.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>gregori-panunzi:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1914">
    <title>Word Sense Filtering Improves Embedding-Based Lexical Substitution</title>
    <author><first>Anne</first><last>Cocos</last></author>
    <author><first>Marianna</first><last>Apidianaki</last></author>
    <author><first>Chris</first><last>Callison-Burch</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>110&#8211;119</pages>
    <url>http://www.aclweb.org/anthology/W17-1914</url>
    <abstract>The role of word sense disambiguation in lexical substitution has been
	questioned due to the high performance of vector space models which propose
	good substitutes without explicitly accounting for sense. We show that a
	filtering mechanism based on a sense inventory optimized for substitutability
	can improve
	the results of these models. Our sense inventory is constructed using a
	clustering method which generates paraphrase clusters that are congruent with
	lexical substitution annotations in a development set. The results show that
	lexical substitution can still benefit from senses which can improve the output
	of vector space paraphrase ranking models.</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>cocos-apidianaki-callisonburch:2017:SENSE2017</bibkey>
  </paper>

  <paper id="1915">
    <title>Supervised and Unsupervised Word Sense Disambiguation on Word Embedding Vectors of Unambiguous Synonyms</title>
    <author><first>Aleksander</first><last>Wawer</last></author>
    <author><first>Agnieszka</first><last>Mykowiecka</last></author>
    <booktitle>Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications</booktitle>
    <month>April</month>
    <year>2017</year>
    <address>Valencia, Spain</address>
    <publisher>Association for Computational Linguistics</publisher>
    <pages>120&#8211;125</pages>
    <url>http://www.aclweb.org/anthology/W17-1915</url>
    <abstract>This paper compares two approaches to word sense disambiguation using word
	embeddings trained on unambiguous synonyms. The first is an unsupervised method
	based on computing the log probability of sequences of word embedding vectors,
	taking into account ambiguous word senses and guessing the correct sense from
	context. The second method is supervised. We use a multilayer neural network
	model to learn a context-sensitive transformation that maps an input vector of
	an ambiguous word to an output vector representing its sense. We evaluate both
	methods on corpora with manual annotations of word senses from the Polish
	wordnet (plWordNet).</abstract>
    <bibtype>inproceedings</bibtype>
    <bibkey>wawer-mykowiecka:2017:SENSE2017</bibkey>
  </paper>

</volume>