Jorge Hermosillo Valadez


2023

The Analysis of Synonymy and Antonymy in Discourse Relations: An Interpretable Modeling Approach
Asela Reig Alamillo | David Torres Moreno | Eliseo Morales González | Mauricio Toledo Acosta | Antoine Taroni | Jorge Hermosillo Valadez
Computational Linguistics, Volume 49, Issue 2 - June 2023

The idea that discourse relations are interpreted both by explicit content and by shared knowledge between producer and interpreter is pervasive in discourse and linguistic studies. How much weight should be ascribed in this process to the lexical semantics of the arguments is, however, uncertain. We propose a computational approach to analyze contrast and concession relations in the PDTB corpus. Our work sheds light on the question of how much lexical relations contribute to the signaling of such explicit and implicit relations, as well as on the contribution of different parts of speech to these semantic relations. This study contributes to bridging the gap between corpus and computational linguistics by proposing transparent and explainable computational models of discourse relations based on the synonymy and antonymy of their arguments.

2022

UAEM-ITAM at SemEval-2022 Task 5: Vision-Language Approach to Recognize Misogynous Content in Memes
Edgar Roman-Rangel | Jorge Fuentes-Pacheco | Jorge Hermosillo Valadez
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

In the context of the Multimedia Automatic Misogyny Identification (MAMI) competition 2022, we developed a framework for extracting lexical-semantic features from text and combining them with semantic descriptions of images, together with image content representation. We enriched the text modality description by incorporating word representations for each object present within the images. Images and text are then described at two levels of detail, globally and locally, using standard dimensionality reduction techniques for images, yielding four embeddings for each meme. These embeddings are finally concatenated and passed to a classifier. Our results exceed the baseline by 4%, falling 12% behind the best performance on Sub-task B.
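The fusion step in the abstract above can be sketched as follows. This is a minimal, hypothetical illustration: the embedding dimensions, the synthetic data, and the choice of logistic regression as the classifier are assumptions for demonstration, not the authors' actual configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_memes = 200

# Stand-ins for the four per-meme embeddings (e.g., global/local text
# and global/local image representations after dimensionality reduction).
# Dimensions here are illustrative, not those used in the paper.
text_global = rng.normal(size=(n_memes, 64))
text_local = rng.normal(size=(n_memes, 64))
image_global = rng.normal(size=(n_memes, 32))
image_local = rng.normal(size=(n_memes, 32))

# Concatenate the four embeddings into a single feature vector per meme.
features = np.concatenate(
    [text_global, text_local, image_global, image_local], axis=1
)

# Synthetic binary labels; the paper's classifier choice may differ.
labels = rng.integers(0, 2, size=n_memes)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(features.shape)  # one 192-dimensional vector per meme in this sketch
```

The key design point is late fusion: each modality and level of detail is encoded independently, and the classifier sees only the concatenated representation.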