Rosa Tsegaye Aga
2016
CogALex-V Shared Task: HsH-Supervised – Supervised similarity learning using entry wise product of context vectors
Christian Wartena | Rosa Tsegaye Aga
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex - V)
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated pairs, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector that represents a pair of words by applying a simple pairwise operation to the elements of the two words' context vectors. The pairs can then be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We find that the results are no better than the given simple baseline. We conclude that the results of the investigated method depend strongly on the type of data to which it is applied.
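The pair construction described in the abstract can be sketched in a few lines. The following is a minimal illustration, assuming toy context vectors and scikit-learn's LogisticRegression as the classifier; the actual features, dataset, and classifier used in the paper differ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical context vectors; in the paper these are distributional
# features aggregated from a corpus.
vec = {
    "car":    np.array([0.9, 0.1, 0.4, 0.0]),
    "auto":   np.array([0.8, 0.2, 0.5, 0.1]),
    "banana": np.array([0.0, 0.7, 0.1, 0.9]),
    "fruit":  np.array([0.1, 0.8, 0.2, 0.8]),
}

def pair_features(w1, w2):
    # Entry-wise (Hadamard) product of the two context vectors.
    return vec[w1] * vec[w2]

# Toy training pairs: 1 = related, 0 = unrelated.
pairs  = [("car", "auto"), ("banana", "fruit"),
          ("car", "banana"), ("auto", "fruit")]
labels = [1, 1, 0, 0]

X = np.stack([pair_features(a, b) for a, b in pairs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([pair_features("car", "fruit")]))
```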
Learning Thesaurus Relations from Distributional Features
Rosa Tsegaye Aga | Christian Wartena | Lucas Drumond | Lars Schmidt-Thieme
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
In distributional semantics, words are represented by aggregated context features, and the similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm trained on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compare different methods to construct a feature vector representing a pair of words and show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri for intellectual document classification, so our findings can be applied directly to the maintenance and extension of such thesauri. To the best of our knowledge, this relation has not been considered before in the field of distributional semantics.
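A minimal sketch of the comparison described above, assuming toy context vectors and a linear SVM from scikit-learn; all names and values are illustrative stand-ins for the corpus-derived features and thesaurus pairs used in the paper:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-ins for aggregated context features (assumption; the paper
# derives these vectors from corpus contexts).
vec = {
    "car":   np.array([0.9, 0.1, 0.4, 0.0]),
    "auto":  np.array([0.8, 0.2, 0.5, 0.1]),
    "apple": np.array([0.1, 0.8, 0.0, 0.7]),
    "fruit": np.array([0.2, 0.9, 0.1, 0.6]),
}

def cosine(u, v):
    # Unsupervised baseline: similarity of the two single-word vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def pair_vector(u, v, mode="add"):
    # Two simple pair constructions compared in the paper.
    return u + v if mode == "add" else u * v

pairs  = [("car", "auto"), ("apple", "fruit"),
          ("car", "apple"), ("auto", "fruit")]
labels = [1, 1, 0, 0]  # 1 = related in the thesaurus sense

X = np.stack([pair_vector(vec[a], vec[b]) for a, b in pairs])
clf = SVC(kernel="linear").fit(X, labels)

print(cosine(vec["car"], vec["auto"]))                      # baseline score
print(clf.predict([pair_vector(vec["car"], vec["auto"])]))  # supervised decision
```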
Integrating Distributional and Lexical Information for Semantic Classification of Words using MRMF
Rosa Tsegaye Aga | Lucas Drumond | Christian Wartena | Lars Schmidt-Thieme
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a classifier trained on the distributional features directly gives better results. We use Support Vector Machines (SVM) and Multi-relational Matrix Factorization (MRMF) to train classifiers, and both give similar results. However, MRMF, which had not previously been used for semantic classification with distributional features, can easily be extended with additional matrices containing information on the same problem from other sources. We demonstrate the effectiveness of the novel approach by including information from WordNet. Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives results similar to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.
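The following is a rough sketch of the collective factorization idea behind MRMF, not the paper's exact model: several matrices sharing a word dimension are factorized with a common word-factor matrix, so that class labels, distributional features, and WordNet-derived features inform each other. All data here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words = 6

# Random stand-ins for the three information sources:
X = rng.random((n_words, 5))       # distributional (context) features
S = rng.random((n_words, 3))       # WordNet-derived features
Y = np.eye(2)[[0, 0, 0, 1, 1, 1]]  # one-hot semantic class labels
mask = np.ones_like(Y)
mask[-1] = 0.0                     # treat the last word as unlabeled

k, lr, lam = 2, 0.02, 0.01
W  = rng.normal(scale=0.1, size=(n_words, k))  # shared word factors
HY = rng.normal(scale=0.1, size=(k, 2))        # class factors
HX = rng.normal(scale=0.1, size=(k, 5))        # context-feature factors
HS = rng.normal(scale=0.1, size=(k, 3))        # WordNet-feature factors

# Joint gradient descent on the squared reconstruction errors of all
# three matrices, coupled through the shared word factors W.
for _ in range(2000):
    EY = mask * (W @ HY - Y)   # only observed labels contribute
    EX = W @ HX - X
    ES = W @ HS - S
    W  -= lr * (EY @ HY.T + EX @ HX.T + ES @ HS.T + lam * W)
    HY -= lr * (W.T @ EY + lam * HY)
    HX -= lr * (W.T @ EX + lam * HX)
    HS -= lr * (W.T @ ES + lam * HS)

# Reconstructed class scores for the word whose label was hidden:
print((W @ HY)[-1])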