Lars Schmidt-Thieme


Learning Thesaurus Relations from Distributional Features
Rosa Tsegaye Aga | Christian Wartena | Lucas Drumond | Lars Schmidt-Thieme
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In distributional semantics, words are represented by aggregated context features, and the similarity of two words can be computed by comparing their feature vectors. Thus we can predict whether two words are synonymous or similar with respect to some other semantic relation. We show on six datasets of pairs of similar and non-similar words that a supervised learning algorithm trained on feature vectors representing pairs of words outperforms cosine similarity between the vectors of the individual words. We compare different methods of constructing a feature vector that represents a pair of words, and we show that simple methods such as pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is relatedness of terms in thesauri used for intellectual document classification, so our findings can be applied directly to the maintenance and extension of such thesauri. To the best of our knowledge, this relation has not been considered before in the field of distributional semantics.
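The contrast the abstract draws can be sketched in a few lines: cosine similarity scores a pair of word vectors directly, while the supervised setup first combines the two vectors into a single pair vector (e.g. by pairwise addition or multiplication) that a classifier is then trained on. A minimal sketch, with hypothetical function names (`cosine`, `pair_features`) not taken from the paper:

```python
import numpy as np

def cosine(u, v):
    # unsupervised baseline: similarity of two word vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pair_features(u, v, mode="add"):
    # supervised setup: turn a word pair into one feature vector
    # that a classifier (e.g. SVM) can be trained on
    if mode == "add":
        return u + v          # element-wise addition
    if mode == "mul":
        return u * v          # element-wise multiplication
    raise ValueError(f"unknown combination mode: {mode}")

# toy context vectors for two words
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.5, 1.0, 0.0])
sim = cosine(u, v)                 # a single similarity score
x = pair_features(u, v, "add")     # a feature vector for a learner
```

The point of the pair-vector route is that a trained model can weight individual context features when deciding whether the relation holds, rather than collapsing everything into one similarity number.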

Integrating Distributional and Lexical Information for Semantic Classification of Words using MRMF
Rosa Tsegaye Aga | Lucas Drumond | Christian Wartena | Lars Schmidt-Thieme
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a classifier trained directly on the distributional features gives better results. We use Support Vector Machines (SVM) and Multi-relational Matrix Factorization (MRMF) to train classifiers; both give similar results. However, MRMF, which had not previously been used for semantic classification with distributional features, can easily be extended with additional matrices containing information from different sources on the same problem. We demonstrate the effectiveness of this approach by including information from WordNet. Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives results similar to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.
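The supervised setting the abstract describes trains a classifier directly on each word's distributional vector, with the word's semantic class as the label. A minimal sketch of that idea, using a simple perceptron as a stand-in for the SVM and MRMF models of the paper (the data and the `train_perceptron` helper are illustrative, not from the paper):

```python
import numpy as np

def train_perceptron(X, y, epochs=20):
    # X: (n, d) distributional feature vectors; y: class labels in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: update
                w += yi * xi
                b += yi
    return w, b

# toy data: two linearly separable "semantic classes" of word vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 5)),   # class -1
               rng.normal(+2, 0.5, (20, 5))])  # class +1
y = np.array([-1] * 20 + [1] * 20)

w, b = train_perceptron(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
```

MRMF's appeal, per the abstract, is that additional relation matrices (e.g. WordNet links) can be factorized jointly with the word-feature matrix; a flat linear classifier like the one above cannot absorb such extra sources as naturally.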


Automatic Content-Based Categorization of Wikipedia Articles
Zeno Gantner | Lars Schmidt-Thieme
Proceedings of the 2009 Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources (People’s Web)