2024
Estimating Word Concreteness from Contextualized Embeddings
Christian Wartena
Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024)
2023
Unsupervised Methods for Domain Specific Ambiguity Detection. The Case of German Physics Language
Vitor Fontanella | Christian Wartena | Gunnar Friege
Proceedings of the 15th International Conference on Computational Semantics
Many terms used in physics have a different meaning or usage pattern in general language, constituting a learning barrier in physics teaching. The systematic identification of such terms is considered useful for science education as well as for terminology extraction. This article compares three methods based on vector semantics, and a simple frequency-based baseline, for automatically identifying terms from general language that have a domain-specific use in physics. For evaluation, we use ambiguity scores from a survey among physicists and data about the number of term senses from Wiktionary. We show that the so-called Vector Initialization method obtains the best results.
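As an illustration of the simple frequency-based baseline mentioned in the abstract, the sketch below scores terms by how much more frequent they are in a physics corpus than in a general-language corpus. The tokenized corpora, the smoothing, and the log-ratio scoring are assumptions for illustration, not the paper's exact setup:

```python
import math
from collections import Counter

def domain_specificity(physics_tokens, general_tokens, min_count=5):
    """Score each term by the log ratio of its relative frequency in
    the physics corpus to its relative frequency in a general corpus."""
    phys, gen = Counter(physics_tokens), Counter(general_tokens)
    n_phys, n_gen = sum(phys.values()), sum(gen.values())
    scores = {}
    for term, count in phys.items():
        if count < min_count:
            continue
        # Add-one smoothing so terms missing from the general corpus
        # do not cause a division by zero.
        p_phys = count / n_phys
        p_gen = (gen[term] + 1) / (n_gen + 1)
        scores[term] = math.log(p_phys / p_gen)
    return scores
```

High-scoring terms are candidates for domain-specific usage; the vector-semantic methods compared in the paper replace this frequency ratio with comparisons of word vectors trained on the two corpora.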
Journal for Language Technology and Computational Linguistics, Vol. 36 No. 2
Christian Wartena
Proceedings of the 1st Workshop on Teaching for NLP
Annemarie Friedrich | Stefan Grünewald | Margot Mieskes | Jannik Strötgen | Christian Wartena
2022
On the Geometry of Concreteness
Christian Wartena
Proceedings of the 7th Workshop on Representation Learning for NLP
In this paper we investigate how concreteness and abstractness are represented in word embedding spaces. Using data for English and German, we show that concreteness and abstractness can be determined independently and turn out to lie in completely opposite directions in the embedding space. Various methods can be used to determine the direction of concreteness, all resulting in roughly the same vector. Although concreteness is a central aspect of word meaning and can be detected clearly in embedding spaces, it appears not as easy to subtract or add concreteness to a word to obtain another word or word sense, as can be done with a semantic property such as gender.
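A common way to extract such a direction, not necessarily the procedure used in the paper, is to take the difference between the mean vectors of the most concrete and the most abstract rated words; `emb` and `ratings` below are assumed lookups from words to vectors and to concreteness ratings:

```python
import numpy as np

def concreteness_direction(emb, ratings, top_n=500):
    """Unit vector pointing from abstract words to concrete words."""
    rated = sorted((w for w in ratings if w in emb), key=ratings.get)
    abstract = np.mean([emb[w] for w in rated[:top_n]], axis=0)
    concrete = np.mean([emb[w] for w in rated[-top_n:]], axis=0)
    direction = concrete - abstract
    return direction / np.linalg.norm(direction)

def concreteness_score(word, emb, direction):
    """Project a normalized word vector onto the direction."""
    v = emb[word]
    return float(v @ direction / np.linalg.norm(v))
```

Fitting a linear regression on rated words, or averaging many concrete-minus-abstract difference vectors, are other options; as the abstract notes, they all yield roughly the same vector.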
2019
Predicting Word Concreteness and Imagery
Jean Charbonnier | Christian Wartena
Proceedings of the 13th International Conference on Computational Semantics - Long Papers
Concreteness of words has been studied extensively in the psycholinguistic literature, and a number of datasets have been created with average values for the perceived concreteness of words. We show that we can train a regression model on these data, using word embeddings and morphological features, that predicts these concreteness values with high accuracy. We evaluate the model on 7 publicly available datasets. Predictions of concreteness values are found in the literature for only a few small subsets of these datasets; our results clearly outperform the reported results for these datasets.
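A minimal sketch of such a regression setup, assuming precomputed embeddings in a dict `emb` and a dict `ratings` of gold concreteness values; the suffix indicators and the ridge regressor are illustrative choices, not the paper's exact feature set or model:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# A few suffixes that tend to mark abstract nouns (illustrative only).
SUFFIXES = ("ness", "ity", "tion", "ism", "ance")

def features(word, emb):
    """Concatenate the word embedding with simple morphological flags."""
    morph = np.array([word.endswith(s) for s in SUFFIXES], dtype=float)
    return np.concatenate([emb[word], morph])

def train_model(words, ratings, emb):
    X = np.array([features(w, emb) for w in words])
    y = np.array([ratings[w] for w in words])
    model = Ridge(alpha=1.0)
    print("cross-validated r2:", cross_val_score(model, X, y, cv=5).mean())
    return model.fit(X, y)
```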
Sentiment Independent Topic Detection in Rated Hospital Reviews
Christian Wartena | Uwe Sander | Christiane Patzelt
Proceedings of the 13th International Conference on Computational Semantics - Short Papers
We present a simple method to find topics in user reviews that accompany ratings for products or services. Standard topic analysis performs sub-optimally on such data, since the word distributions in the documents are determined not only by the topics but also by the sentiment. We reduce the influence of sentiment on topic selection by adding two explicit topics that represent positive and negative sentiment. We evaluate the proposed method on a set of over 15,000 hospital reviews and show that the proposed method, Latent Semantic Analysis with explicit word features, finds topics with a much smaller sentiment bias than other similar methods.
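The core idea, two fixed topics that absorb positive and negative sentiment before the remaining topics are fitted, can be sketched as follows. The seed-word construction and the projection step are one possible reading of the method, not necessarily the paper's exact formulation:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def topics_without_sentiment(reviews, pos_seeds, neg_seeds, n_topics=10):
    vec = TfidfVectorizer()
    X = vec.fit_transform(reviews).toarray()  # documents x terms
    vocab = vec.vocabulary_

    def sentiment_topic(seeds):
        # Explicit topic: normalized indicator vector over seed words.
        t = np.zeros(X.shape[1])
        for w in seeds:
            if w in vocab:
                t[vocab[w]] = 1.0
        return t / (np.linalg.norm(t) or 1.0)

    # Remove the two sentiment directions from every document vector,
    # then fit the remaining topics with ordinary LSA (truncated SVD).
    for t in (sentiment_topic(pos_seeds), sentiment_topic(neg_seeds)):
        X = X - np.outer(X @ t, t)
    svd = TruncatedSVD(n_components=n_topics).fit(X)
    return svd.components_, vec.get_feature_names_out()
```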
Detecting Paraphrases of Standard Clause Titles in Insurance Contracts
Frieda Josi | Christian Wartena | Ulrich Heid
RELATIONS - Workshop on meaning relations between phrases and sentences
For the analysis of contract texts, validated model texts, such as model clauses, can be used to identify reused contract clauses. This paper investigates how to calculate the similarity between the titles of model clauses and the headings extracted from contracts, and which similarity measure is most suitable for this task. To calculate the similarities between title pairs, we tested various variants of string similarity and token-based similarity. We also compare two more semantic similarity measures based on word embeddings, using both pretrained embeddings and embeddings trained on contract texts. The identified model clause titles can serve as a starting point for mapping clauses found in contracts to verified clauses.
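Illustrative instances of the three families of measures compared here: a character-level string measure, token overlap, and cosine similarity of averaged word embeddings. The concrete variants tested in the paper may differ, and `emb` is an assumed word-vector lookup:

```python
import difflib
import numpy as np

def string_sim(a, b):
    """Character-level similarity of the two titles."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_sim(a, b):
    """Jaccard overlap of the token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta or tb else 0.0

def embedding_sim(a, b, emb):
    """Cosine similarity of averaged word embeddings."""
    va = np.mean([emb[w] for w in a.lower().split() if w in emb], axis=0)
    vb = np.mean([emb[w] for w in b.lower().split() if w in emb], axis=0)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```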
2018
Using Word Embeddings for Unsupervised Acronym Disambiguation
Jean Charbonnier | Christian Wartena
Proceedings of the 27th International Conference on Computational Linguistics
Scientific papers from all disciplines contain many abbreviations and acronyms, and in many cases these acronyms are ambiguous. We present a method to choose the contextually correct definition of an acronym that does not require training for each acronym and thus can be applied to a large number of different acronyms with only a few instances each. We constructed a set of 19,954 examples of 4,365 ambiguous acronyms from image captions in scientific papers, along with their contextually correct definitions, from different domains. We learn word embeddings for all words in the corpus and compare the averaged context vector of the words in the expansion of an acronym with the weighted average vector of the words in the context of the acronym. We show that this method clearly outperforms (classical) cosine similarity. Furthermore, we show that word embeddings learned from a 1 billion word corpus of scientific texts outperform word embeddings learned on much larger general corpora.
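The comparison described in the abstract reduces to the following sketch: score each candidate expansion by the cosine between its averaged word vectors and a weighted average of the context vectors. Here `emb` is an assumed word-to-vector lookup and the uniform `weight` default stands in for whatever weighting scheme the paper uses:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context, expansions, emb, weight=lambda w: 1.0):
    """Return the expansion closest to the weighted context average."""
    vecs, ws = zip(*[(emb[w], weight(w)) for w in context if w in emb])
    ctx = np.average(vecs, axis=0, weights=ws)
    def exp_vec(expansion):
        return np.mean([emb[w] for w in expansion.split() if w in emb],
                       axis=0)
    return max(expansions, key=lambda e: cosine(exp_vec(e), ctx))
```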
2016
CogALex-V Shared Task: HsH-Supervised – Supervised similarity learning using entry wise product of context vectors
Christian Wartena | Rosa Tsegaye Aga
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex - V)
The CogALex-V Shared Task provides two datasets that consist of pairs of words along with a classification of their semantic relation. The dataset for the first task distinguishes only between related and unrelated, while the second dataset distinguishes several types of semantic relations. A number of recent papers propose to construct a feature vector that represents a pair of words by applying a simple pairwise operation to all elements of the two words' feature vectors. Subsequently, the pairs can be classified by training any classification algorithm on these vectors. In the present paper we apply this method to the provided datasets. We find that the results are not better than those of the given simple baseline, and we conclude that the results of the investigated method depend strongly on the type of data to which it is applied.
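The pair representation described above is easy to reproduce: take the entrywise (Hadamard) product of the two context vectors and train a classifier on the result. `emb` is an assumed word-to-context-vector lookup, and logistic regression stands in for the unspecified classification algorithm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(pairs, emb):
    """One feature vector per word pair: the entrywise product."""
    return np.array([emb[a] * emb[b] for a, b in pairs])

def train_pair_classifier(pairs, labels, emb):
    # labels: e.g. related / unrelated for the first subtask.
    X = pair_features(pairs, emb)
    return LogisticRegression(max_iter=1000).fit(X, labels)
```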
Learning Thesaurus Relations from Distributional Features
Rosa Tsegaye Aga | Christian Wartena | Lucas Drumond | Lars Schmidt-Thieme
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
In distributional semantics, words are represented by aggregated context features, and the similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm trained on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods of constructing a feature vector that represents a pair of words, and we show that simple methods like pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is the relatedness of terms in thesauri for intellectual document classification, so our findings can be applied directly to the maintenance and extension of such thesauri. To the best of our knowledge, this relation has not been considered before in the field of distributional semantics.
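A sketch of the comparison the abstract describes, assuming binary labels (1 = related) and an `emb` lookup of aggregated context features; LinearSVC stands in for the unspecified supervised learner:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def compare_encodings(pairs, labels, emb):
    """Cross-validate pairwise addition and multiplication encodings
    and contrast them with the unsupervised cosine baseline."""
    y = np.array(labels)
    for name, op in (("addition", np.add), ("multiplication", np.multiply)):
        X = np.array([op(emb[a], emb[b]) for a, b in pairs])
        print(name, cross_val_score(LinearSVC(), X, y, cv=5).mean())
    cos = np.array([emb[a] @ emb[b] /
                    (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b]))
                    for a, b in pairs])
    # Baseline: call a pair related if cosine exceeds a threshold.
    best = max((np.mean((cos > t) == y), t) for t in np.linspace(0, 1, 101))
    print("cosine baseline accuracy %.3f at threshold %.2f" % best)
```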
Integrating Distributional and Lexical Information for Semantic Classification of Words using MRMF
Rosa Tsegaye Aga | Lucas Drumond | Christian Wartena | Lars Schmidt-Thieme
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Semantic classification of words using distributional features is usually based on the semantic similarity of words. We show on two different datasets that a classifier trained directly on the distributional features gives better results. We use Support Vector Machines (SVM) and Multi-relational Matrix Factorization (MRMF) to train classifiers; both give similar results. However, MRMF, which had not been used for semantic classification with distributional features before, can easily be extended with additional matrices containing more information from different sources on the same problem. We demonstrate the effectiveness of the novel approach by including information from WordNet. Thus we show that MRMF provides an interesting approach for building semantic classifiers that (1) gives better results than unsupervised approaches based on vector similarity, (2) gives results similar to other supervised methods, and (3) can naturally be extended with other sources of information in order to improve the results.
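A much simplified sketch of the multi-relational idea: two matrices over the same words, distributional features and (for example) WordNet relations, share a single word-factor matrix, fitted here by alternating least squares. This is a generic coupled factorization for illustration, not the specific MRMF model used in the paper:

```python
import numpy as np

def coupled_factorization(X1, X2, k=50, lam=0.1, iters=50):
    """X1: words x distributional features, X2: words x WordNet
    relations. Returns shared word factors U (words x k)."""
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(X1.shape[0], k))
    I = lam * np.eye(k)
    for _ in range(iters):
        # Regularized least-squares updates for each factor matrix.
        V1 = np.linalg.solve(U.T @ U + I, U.T @ X1).T
        V2 = np.linalg.solve(U.T @ U + I, U.T @ X2).T
        U = np.linalg.solve(V1.T @ V1 + V2.T @ V2 + I,
                            (X1 @ V1 + X2 @ V2).T).T
    return U

# The learned word factors U can then be fed into any classifier
# (e.g. an SVM) to predict the semantic class of each word.
```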
2013
HsH: Estimating Semantic Similarity of Words and Short Phrases with Frequency Normalized Distance Measures
Christian Wartena
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)
2000
Extending Linear Indexed Grammars
Christian Wartena
Proceedings of the Fifth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+5)