Florina Piroi


2024

An Analysis of Tasks and Datasets in Peer Reviewing
Moritz Staudinger | Wojciech Kusa | Florina Piroi | Allan Hanbury
Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024)

Taking note of the current challenges of the peer review system, this paper inventories the research tasks for analysing, and possibly automating, parts of the reviewing process, such as matching submissions to a reviewer’s domain of expertise. For each of these tasks we list the associated datasets and analyse their quality in terms of the available documentation of their creation and use. Building on this, we give a set of recommendations to take into account when collecting and releasing data.

2023

Vers l’évaluation continue des systèmes de recherche d’information.
Petra Galuscakova | Romain Deveaud | Gabriela Gonzalez-Saez | Philippe Mulhem | Lorraine Goeuriot | Florina Piroi | Martin Popel
Actes de CORIA-TALN 2023. Actes de la 18e Conférence en Recherche d'Information et Applications (CORIA)

This article presents the dataset associated with the first LongEval evaluation campaign, run as part of CLEF 2023. The goal of this evaluation is to study how information retrieval systems react to the evolution of the data they handle (in particular the documents and the queries). We detail the objectives of the task, the data acquisition process, and the evaluation measures used.

2022

Benchmark for Research Theme Classification of Scholarly Documents
Óscar E. Mendoza | Wojciech Kusa | Alaa El-Ebshihy | Ronin Wu | David Pride | Petr Knoth | Drahomira Herrmannova | Florina Piroi | Gabriella Pasi | Allan Hanbury
Proceedings of the Third Workshop on Scholarly Document Processing

We present a new gold-standard dataset and a benchmark for the Research Theme Identification task, a sub-task of the Scholarly Knowledge Graph Generation shared task at the 3rd Workshop on Scholarly Document Processing. The objective of the shared task was to label research papers with research themes drawn from a total of 36 themes. The benchmark was compiled using data drawn from the largest overall assessment of university research output ever undertaken globally (the Research Excellence Framework - 2014). We compare a transformer-based ensemble, which obtains multiple predictions for a research paper from its textual fields (e.g. title, abstract, references), with traditional machine learning models. The ensemble enriches the initial data with additional information from open-access digital libraries and Argumentative Zoning techniques (CITATION), and uses a weighted sum aggregation over the multiple predictions to obtain a final single prediction for the given research paper. Both the data and the ensemble are publicly available at https://www.kaggle.com/competitions/sdp2022-scholarly-knowledge-graph-generation/data?select=task1_test_no_label.csv and https://github.com/ProjectDoSSIER/sdp2022, respectively.
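
To make the aggregation step concrete, the following minimal Python sketch illustrates a weighted sum over per-field theme probabilities as described above. It is not the authors' released code; the field names follow the abstract, while the weight values and model outputs are hypothetical placeholders.

import numpy as np

N_THEMES = 36  # number of research themes in the shared task

def aggregate_predictions(field_probs, field_weights):
    """Combine per-field probability vectors into a single theme prediction."""
    combined = np.zeros(N_THEMES)
    for field, probs in field_probs.items():
        combined += field_weights.get(field, 0.0) * probs  # weighted sum
    return int(np.argmax(combined))  # index of the predicted research theme

# Hypothetical per-field model outputs for one paper:
rng = np.random.default_rng(0)
field_probs = {
    "title": rng.dirichlet(np.ones(N_THEMES)),
    "abstract": rng.dirichlet(np.ones(N_THEMES)),
    "references": rng.dirichlet(np.ones(N_THEMES)),
}
field_weights = {"title": 0.2, "abstract": 0.5, "references": 0.3}  # placeholder weights
print(aggregate_predictions(field_probs, field_weights))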

2020

ARTU / TU Wien and Artificial Researcher@ LongSumm 20
Alaa El-Ebshihy | Annisa Maulida Ningtyas | Linda Andersson | Florina Piroi | Andreas Rauber
Proceedings of the First Workshop on Scholarly Document Processing

In this paper, we present our approach to the LongSumm 2020 Shared Task at the 1st Workshop on Scholarly Document Processing. The objective of the task is to generate abstractive and extractive summaries of a given scientific article that cover its salient information. Our approach is inspired by the concept of Argumentative Zoning (AZ), which defines the main rhetorical structure of scientific articles. We define two aspects that should be covered in a scientific paper summary, namely the Claim/Method and Conclusion/Result aspects. We use a Solr index to expand the sentences of the paper abstract: each abstract sentence of a given publication is formulated as a query to retrieve similar sentences from the text body of the document itself. We then apply a sentence selection algorithm described in previous literature to select, for the final summary, sentences that cover the two aforementioned aspects.
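
As an illustration of the retrieval step above, the sketch below uses each abstract sentence as a query against the sentences of the paper body. The authors use a Solr index; to keep the example self-contained, a TF-IDF/cosine-similarity stand-in replaces Solr, and all sentences shown are hypothetical.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_abstract(abstract_sentences, body_sentences, top_k=3):
    """For each abstract sentence, return the top-k most similar body sentences."""
    vectorizer = TfidfVectorizer().fit(abstract_sentences + body_sentences)
    queries = vectorizer.transform(abstract_sentences)
    candidates = vectorizer.transform(body_sentences)
    similarities = cosine_similarity(queries, candidates)
    return [
        [body_sentences[i] for i in row.argsort()[::-1][:top_k]]
        for row in similarities
    ]

# Hypothetical abstract and body sentences:
abstract = ["We propose a method for summarising scientific articles."]
body = [
    "Our method selects sentences covering claims and results.",
    "Related work on summarisation is reviewed in Section 2.",
    "Experiments show improved coverage of salient information.",
]
print(expand_abstract(abstract, body, top_k=2))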