Longquan Jiang


2024

TextGraphs 2024 Shared Task on Text-Graph Representations for Knowledge Graph Question Answering
Andrey Sakhovskiy | Mikhail Salnikov | Irina Nikishina | Aida Usmanova | Angelie Kraft | Cedric Möller | Debayan Banerjee | Junbo Huang | Longquan Jiang | Rana Abdullah | Xi Yan | Dmitry Ustalov | Elena Tutubalina | Ricardo Usbeck | Alexander Panchenko
Proceedings of TextGraphs-17: Graph-based Methods for Natural Language Processing

This paper describes the results of the Knowledge Graph Question Answering (KGQA) shared task that was co-located with the TextGraphs 2024 workshop. In this task, given a textual question and a list of entities with the corresponding KG subgraphs, the participating system should choose the entity that correctly answers the question. Our competition attracted thirty teams, four of which outperformed our strong ChatGPT-based zero-shot baseline. We overview the participating systems and analyze their performance based on a large-scale automatic evaluation. To the best of our knowledge, this is the first competition to address the KGQA problem through the interaction between large language models (LLMs) and knowledge graphs.
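To make the task setup concrete, below is a minimal, hypothetical sketch of answer-candidate selection in the spirit of the shared task: each candidate entity comes with a linearized KG subgraph, and a scoring function (standing in for an LLM- or encoder-based text-graph scorer, whose exact form the abstract does not specify) picks the highest-scoring candidate. Field names and the prompt template are illustrative assumptions, not the task's actual data format.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    """Hypothetical task instance fields (illustrative, not the official schema)."""
    entity: str     # candidate answer entity, e.g. a Wikidata label
    subgraph: str   # textual linearization of the entity's KG subgraph

def select_answer(question: str,
                  candidates: List[Candidate],
                  score_fn: Callable[[str], float]) -> str:
    """Return the entity whose (question, subgraph) prompt scores highest.

    `score_fn` is a placeholder for any text-graph scorer, e.g. an LLM's
    zero-shot judgment or a fine-tuned cross-encoder over the prompt.
    """
    def prompt(c: Candidate) -> str:
        return (f"Question: {question}\n"
                f"Candidate answer: {c.entity}\n"
                f"Supporting subgraph: {c.subgraph}\n"
                "Is the candidate the correct answer?")

    best = max(candidates, key=lambda c: score_fn(prompt(c)))
    return best.entity

if __name__ == "__main__":
    # Toy example with a trivial keyword-overlap scorer instead of an LLM.
    question = "Who developed the theory of general relativity?"
    cands = [
        Candidate("Albert Einstein",
                  "Albert Einstein -> notable work -> general relativity"),
        Candidate("Isaac Newton",
                  "Isaac Newton -> notable work -> law of universal gravitation"),
    ]
    overlap = lambda text: sum(w in text.lower() for w in question.lower().split())
    print(select_answer(question, cands, overlap))  # -> Albert Einstein
```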

2022

Knowledge Graph Question Answering Leaderboard: A Community Resource to Prevent a Replication Crisis
Aleksandr Perevalov | Xi Yan | Liubov Kovriguina | Longquan Jiang | Andreas Both | Ricardo Usbeck
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Data-driven systems need to be evaluated to establish trust in the scientific approach and its applicability. In particular, this is true for Knowledge Graph (KG) Question Answering (QA), where complex data structures are made accessible via natural-language interfaces. Evaluating the capabilities of these systems has driven the community for more than ten years and has led to the creation of different KGQA benchmark datasets. However, comparing different approaches remains cumbersome. The lack of curated leaderboards results in a missing global view of the research field and could inject mistrust into reported results. In particular, the latest and most widely used datasets in the KGQA community, LC-QuAD and QALD, lack central and up-to-date points of trust. In this paper, we survey and analyze a wide range of evaluation results, covering 100 publications and 98 systems from the last decade. We provide a new central and open leaderboard for any KGQA benchmark dataset as a focal point for the community - https://kgqa.github.io/leaderboard/. Our analysis highlights existing problems in the evaluation of KGQA systems and points to possible improvements for future evaluations.
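For context on how such evaluation results are typically compared, the sketch below shows the macro-averaged precision/recall/F1 computation commonly used on QALD-style KGQA benchmarks: per-question scores over answer sets, averaged across all questions. This is a generic illustration of the standard metric, not the specific aggregation pipeline behind the leaderboard.

```python
from typing import Dict, Set, Tuple

def prf1(gold: Set[str], pred: Set[str]) -> Tuple[float, float, float]:
    """Precision, recall, F1 for one question's gold vs. predicted answer sets."""
    if not gold and not pred:
        return 1.0, 1.0, 1.0
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

def macro_f1(gold_answers: Dict[str, Set[str]],
             system_answers: Dict[str, Set[str]]) -> float:
    """Macro-average F1 over all benchmark questions."""
    scores = [prf1(gold_answers[q], system_answers.get(q, set()))[2]
              for q in gold_answers]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    gold = {"q1": {"Berlin"}, "q2": {"Einstein", "Planck"}}
    pred = {"q1": {"Berlin"}, "q2": {"Einstein"}}
    print(round(macro_f1(gold, pred), 3))  # 0.833
```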