Jordi Bernad


2023

MEAN: Metaphoric Erroneous ANalogies dataset for PTLMs metaphor knowledge probing
Lucia Pitarch | Jordi Bernad | Jorge Gracia
Proceedings of the 4th Conference on Language, Data and Knowledge

No clues good clues: out of context Lexical Relation Classification
Lucia Pitarch | Jordi Bernad | Lacramioara Dranca | Carlos Bobed Lisbona | Jorge Gracia
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The accurate prediction of lexical relations between words is a challenging task in Natural Language Processing (NLP). The most recent advances in this direction come from the use of pre-trained language models (PTLMs). A PTLM typically needs “well-formed” verbalized text to interact with, whether to fine-tune it or to exploit it. However, there are indications that commonly used PTLMs already encode enough linguistic knowledge to allow the use of minimal (or even no) textual context for some linguistically motivated tasks, notably reducing human effort and the need for data pre-processing, and favoring language-neutral techniques that do not rely on syntactic structures. In this work, we explore this idea for the tasks of lexical relation classification (LRC) and graded Lexical Entailment (LE). After fine-tuning PTLMs for LRC with different verbalizations, our evaluation results show that very simple prompts are competitive for LRC and significantly outperform the graded LE state of the art. To gain better insight into this phenomenon, we perform a number of quantitative statistical analyses of the results, as well as a qualitative visual exploration based on embedding projections.
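To make the idea concrete, the following is a minimal sketch (not the authors' released code) of out-of-context lexical relation classification: a word pair is verbalized with an almost-empty template and passed to a PTLM used as a sequence classifier. The model name, label set, and template here are illustrative assumptions.

```python
# Sketch of "no clues" lexical relation classification: the verbalization is
# just the word pair itself, with no surrounding syntactic context.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed label set for illustration; the actual inventory depends on the dataset.
LABELS = ["hypernymy", "meronymy", "synonymy", "antonymy", "random"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

def verbalize(w1: str, w2: str) -> str:
    # Minimal prompt: the two words separated by the model's separator token.
    return f"{w1} {tokenizer.sep_token} {w2}"

inputs = tokenizer(verbalize("car", "vehicle"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head is randomly initialized here, so this prediction is
# meaningless until the model has been fine-tuned on labeled word pairs.
print(LABELS[logits.argmax(dim=-1).item()])
```

In this setup the only design choice is the verbalization template; the paper's finding is that templates this bare can be competitive once the PTLM is fine-tuned on the task.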