Benno Kruit


2024

Retrieval-based Question Answering with Passage Expansion Using a Knowledge Graph
Benno Kruit | Yiming Xu | Jan-Christoph Kalo
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Recent advancements in dense neural retrievers and language models have led to large improvements in state-of-the-art approaches to open-domain Question Answering (QA) based on retriever-reader architectures. However, issues stemming from data quality and imbalances in the use of dense embeddings have hindered performance, particularly for less common entities and facts. To tackle these problems, this study explores the potential of a multi-modal passage retrieval model to bolster QA system performance. It poses three key questions: (1) Can a distantly supervised question-relation extraction model enhance retrieval using a knowledge graph (KG), compensating for dense neural retrievers’ shortcomings with rare entities? (2) How does this multi-modal approach compare to existing QA systems based on textual features? (3) Can this QA system alleviate the poor performance on less common entities observed on common benchmarks? We devise a multi-modal retriever combining entity features and textual data, leading to improved retrieval precision in some situations, particularly for less common entities. Experiments across different datasets confirm enhanced performance for entity-centric questions, but challenges remain in handling complex generalized questions.
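
To make the idea of combining dense-retriever scores with knowledge-graph entity evidence concrete, here is a minimal sketch in Python. It is not the paper's implementation: the class names, the linear fusion weight alpha, and the toy passages below are hypothetical illustrations of the general technique.

from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str
    entities: set[str] = field(default_factory=set)  # KG entity IDs linked to the passage
    dense_score: float = 0.0                          # similarity score from a dense retriever

def kg_overlap(question_entities: set[str], passage: Passage) -> float:
    """Fraction of the question's entities that the passage covers (0.0 if the question has none)."""
    if not question_entities:
        return 0.0
    return len(question_entities & passage.entities) / len(question_entities)

def rerank(passages: list[Passage], question_entities: set[str], alpha: float = 0.7) -> list[Passage]:
    """Rank passages by a linear interpolation of dense and entity-overlap evidence."""
    return sorted(
        passages,
        key=lambda p: alpha * p.dense_score + (1 - alpha) * kg_overlap(question_entities, p),
        reverse=True,
    )

# Toy usage: a rare-entity question where the dense score alone would rank the wrong passage first.
candidates = [
    Passage("Generic passage about capitals.", {"Q_capital"}, dense_score=0.82),
    Passage("Passage mentioning the rare entity.", {"Q_rare_entity", "Q_capital"}, dense_score=0.78),
]
top = rerank(candidates, question_entities={"Q_rare_entity", "Q_capital"})[0]
print(top.text)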

2023

Minimalist Entity Disambiguation for Mid-Resource Languages
Benno Kruit
Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)

2014

Annotating by Proving using SemAnTE
Assaf Toledo | Stavroula Alexandropoulou | Sophie Chesney | Robert Grimm | Pepijn Kokke | Benno Kruit | Kyriaki Neophytou | Antony Nguyen | Yoad Winter
Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics

Towards a Semantic Model for Textual Entailment Annotation
Assaf Toledo | Stavroula Alexandropoulou | Sophie Chesney | Sophia Katrenko | Heidi Klockmann | Pepijn Kokke | Benno Kruit | Yoad Winter
Linguistic Issues in Language Technology, Volume 9, 2014 - Perspectives on Semantic Representations for Textual Inference

We introduce a new formal semantic model for annotating textual entailments that describes restrictive, intersective, and appositive modification. The model contains a formally defined interpreted lexicon, which specifies the inventory of symbols and the supported semantic operators, and an informally defined annotation scheme that instructs annotators how to bind words and constructions from a given pair of premise and hypothesis to the interpreted lexicon. We explore the applicability of the proposed model to the Recognizing Textual Entailment (RTE) 1–4 corpora and describe a first-stage annotation scheme on which we based the manual annotation work. The constructions we annotated were found to occur in 80.65% of the entailments in RTE 1–4 and were annotated with a cross-annotator agreement of 68% on average. The annotated parts of the RTE corpora are publicly available for further research.
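
For readers unfamiliar with the terminology, one common first-order rendering of the three modification types reads as follows; this is a textbook-style illustration, not the paper's interpreted lexicon.

\[
\begin{aligned}
\text{intersective:} &\quad [\![\text{Dutch painter}]\!] = \lambda x.\,\mathit{dutch}(x) \wedge \mathit{painter}(x)\\
\text{restrictive:} &\quad [\![\text{painter who smokes}]\!] = \lambda x.\,\mathit{painter}(x) \wedge \mathit{smoke}(x)\\
\text{appositive:} &\quad [\![\text{Rembrandt, a painter, died in 1669}]\!] = \mathit{died\_in\_1669}(\mathit{rembrandt}) \wedge \mathit{painter}(\mathit{rembrandt})
\end{aligned}
\]

In the appositive case the modifier contributes an independent conjunct, so the hypothesis "Rembrandt was a painter" is entailed by the premise.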