Marco Valenzuela-Escárcega

Also published as: Marco Valenzuela-Escarcega


2023

Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning
Mihai Surdeanu | Ellen Riloff | Laura Chiticariu | Dayne Freitag | Gus Hahn-Powell | Clayton T. Morrison | Enrique Noriega-Atala | Rebecca Sharp | Marco Valenzuela-Escarcega

2022

Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning
Laura Chiticariu | Yoav Goldberg | Gus Hahn-Powell | Clayton T. Morrison | Aakanksha Naik | Rebecca Sharp | Mihai Surdeanu | Marco Valenzuela-Escárcega | Enrique Noriega-Atala

2019

Lightly-supervised Representation Learning with Global Interpretability
Andrew Zupon | Maria Alexeeva | Marco Valenzuela-Escárcega | Ajay Nagesh | Mihai Surdeanu
Proceedings of the Third Workshop on Structured Prediction for NLP

We propose a lightly-supervised approach for information extraction, in particular named entity classification, which combines the benefits of traditional bootstrapping, i.e., use of limited annotations and interpretability of extraction patterns, with the robust learning approaches proposed in representation learning. Our algorithm iteratively learns custom embeddings for both the multi-word entities to be extracted and the patterns that match them, starting from a few example entities per category. We demonstrate that this representation-based approach outperforms three other state-of-the-art bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes. Additionally, using these embeddings, our approach outputs a globally-interpretable model consisting of a decision list, produced by ranking patterns based on their proximity to the average entity embedding in a given class. We show that this interpretable model performs close to our complete bootstrapping model, demonstrating that representation learning can produce interpretable models with only a small loss in performance. This decision list can be edited by human experts to mitigate some of that loss and, in some cases, outperform the original model.
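
The decision-list construction described in the abstract reduces to ranking pattern embeddings by their proximity to the average entity embedding of a class. Below is a minimal sketch of that ranking step, assuming cosine similarity as the proximity measure; the function name rank_patterns and the input containers are hypothetical, and the paper's iterative embedding-learning loop is not shown.

    import numpy as np

    # Hypothetical inputs: embeddings produced by the bootstrapping loop.
    # pattern_embs: {pattern string -> 1-D vector}
    # entity_embs:  (n, d) array of embeddings for one class's seed entities
    def rank_patterns(pattern_embs, entity_embs):
        centroid = entity_embs.mean(axis=0)
        centroid = centroid / np.linalg.norm(centroid)
        scored = []
        for pattern, vec in pattern_embs.items():
            sim = float(vec @ centroid / np.linalg.norm(vec))  # cosine similarity
            scored.append((pattern, sim))
        # Higher similarity -> higher rank in this class's decision list.
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    # Toy usage with random vectors standing in for learned embeddings.
    rng = np.random.default_rng(0)
    entity_embs = rng.normal(size=(5, 50))
    pattern_embs = {"@ENTITY , the president of": rng.normal(size=50),
                    "cities such as @ENTITY": rng.normal(size=50)}
    print(rank_patterns(pattern_embs, entity_embs))

Sorting by similarity to the class centroid is what makes the resulting model globally interpretable: the ranked patterns form a human-readable decision list that experts can reorder or edit, as the abstract notes.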