Rémi Gilleron
Also published as: Remi Gilleron
2023
WordNet Is All You Need: A Surprisingly Effective Unsupervised Method for Graded Lexical Entailment
Joseph Renner | Pascal Denis | Rémi Gilleron
Findings of the Association for Computational Linguistics: EMNLP 2023
We propose a simple unsupervised approach that relies exclusively on WordNet (Miller, 1995) for predicting graded lexical entailment (GLE) in English. Inspired by the seminal work of Resnik (1995), our method models GLE as the sum of two information-theoretic scores: a symmetric semantic similarity score and an asymmetric specificity loss score, both exploiting the hierarchical synset structure of WordNet. Our approach also includes a simple disambiguation mechanism to handle polysemy in a given word pair. Despite its simplicity, our method achieves state-of-the-art performance (Spearman ρ = 0.75) on HyperLex (Vulić et al., 2017), the largest GLE dataset, outperforming all previous methods, including specialized word embedding approaches that use WordNet as weak supervision.
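To make the abstract's recipe concrete, here is a minimal Python sketch using NLTK's WordNet interface. The exact scoring function is an assumption for illustration: a Resnik-style similarity term plus an information-content difference as the asymmetric specificity term, with disambiguation as a max over synset pairs. This is not the authors' published formulation.

```python
# Minimal sketch of a WordNet-only graded lexical entailment score.
# NOTE: the scoring decomposition below is an illustrative assumption;
# the paper's actual similarity and specificity terms may differ.
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic
from nltk.corpus.reader.wordnet import information_content

ic = wordnet_ic.ic('ic-brown.dat')  # Brown-corpus information content counts

def gle_score(premise, hypothesis):
    """Score how much `premise` entails `hypothesis` (higher = stronger).

    Disambiguation (assumption): take the max over all noun synset pairs.
    """
    best = None
    for s1 in wn.synsets(premise, pos=wn.NOUN):
        for s2 in wn.synsets(hypothesis, pos=wn.NOUN):
            # Symmetric term: Resnik similarity, i.e. the information
            # content of the lowest common subsumer of the two synsets.
            sim = s1.res_similarity(s2, ic)
            # Asymmetric term (hypothetical): reward pairs where the
            # premise synset is more specific than the hypothesis synset.
            spec = information_content(s1, ic) - information_content(s2, ic)
            score = sim + spec
            if best is None or score > best:
                best = score
    return best

print(gle_score('dog', 'animal'))   # more specific -> more general
print(gle_score('animal', 'dog'))   # reverse direction should score lower
```

Running this requires the NLTK data packages `wordnet` and `wordnet_ic` (installable via `nltk.download`).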
Exploring Category Structure with Contextual Language Models and Lexical Semantic Networks
Joseph Renner | Pascal Denis | Remi Gilleron | Angèle Brunellière
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
The psychological plausibility of word embeddings has been studied through different tasks such as word similarity, semantic priming, and lexical entailment. Recent work on predicting category structure with word embeddings reports low correlations with human ratings: Heyman and Heyman (2019) showed that static word embeddings fail at predicting typicality using cosine similarity between category and exemplar words, while Misra et al. (2021) obtain equally modest results for various contextual language models (CLMs) using a Cloze task formulation over hand-crafted taxonomic sentences. In this work, we test a wider array of methods for probing CLMs for predicting typicality scores. First, our experiments using BERT (Devlin et al., 2018) show the importance of using the right type of CLM probe: our best BERT-based typicality prediction methods improve on previous work. Second, our results highlight the importance of polysemy in this task, as our best results are obtained when contextualization is paired with a disambiguation mechanism, as in Chronis and Erk (2020). Finally, additional experiments and analyses reveal that Information Content-based WordNet (Miller, 1995) similarities with disambiguation match the performance of the best BERT-based method and in fact capture complementary information; combining them with BERT yields enhanced typicality predictions.
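For concreteness, here is a hedged sketch of the simplest probe in the family the abstract discusses: cosine similarity between contextual BERT embeddings of a category word and an exemplar word. This corresponds to the baseline-style probe, not the paper's best-performing method or its disambiguation mechanism; the carrier sentence, subword pooling, and function names below are assumptions for illustration.

```python
# Sketch of a simple CLM typicality probe: cosine similarity between
# BERT contextual embeddings of a category word and an exemplar word.
# Carrier sentence and mean-pooling over subwords are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('bert-base-uncased')
model.eval()

def word_embedding(word, template='A {} is here.'):
    """Contextual embedding of `word` inside a neutral carrier sentence,
    averaging its subword vectors from the last layer (assumption)."""
    enc = tokenizer(template.format(word), return_tensors='pt')
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    # Locate the subword positions belonging to `word`.
    word_ids = tokenizer(word, add_special_tokens=False)['input_ids']
    ids = enc['input_ids'][0].tolist()
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f'{word!r} not found in encoded sentence')

def typicality(category, exemplar):
    """Cosine similarity as a (crude) typicality score."""
    return torch.cosine_similarity(
        word_embedding(category), word_embedding(exemplar), dim=0).item()

print(typicality('bird', 'robin'))    # a highly typical exemplar
print(typicality('bird', 'penguin'))  # a less typical exemplar
```

As the cited prior work found, such a plain cosine probe correlates only weakly with human typicality ratings; the abstract's point is that better-chosen probes plus disambiguation improve on it.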
2021
An End-to-End Approach for Full Bridging Resolution
Joseph Renner | Priyansh Trivedi | Gaurav Maheshwari | Rémi Gilleron | Pascal Denis
Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue