2023
Vers l’évaluation continue des systèmes de recherche d’information.
Petra Galuscakova | Romain Deveaud | Gabriela Gonzalez-Saez | Philippe Mulhem | Lorraine Goeuriot | Florina Piroi | Martin Popel
Actes de CORIA-TALN 2023. Actes de la 18e Conférence en Recherche d'Information et Applications (CORIA)
This article presents the data collection associated with the first LongEval evaluation campaign, organized as part of CLEF 2023. The goal of this evaluation is to study how information retrieval systems react to changes in the data they handle (in particular the documents and the queries). We detail the objectives of the task, the data acquisition process, and the evaluation measures used.
2022
Constrained Regeneration for Cross-Lingual Query-Focused Extractive Summarization
Elsbeth Turcan | David Wan | Faisal Ladhak | Petra Galuscakova | Sukanta Sen | Svetlana Tchistiakova | Weijia Xu | Marine Carpuat | Kenneth Heafield | Douglas Oard | Kathleen McKeown
Proceedings of the 29th International Conference on Computational Linguistics
Query-focused summaries of foreign-language, retrieved documents can help a user understand whether a document is actually relevant to the query term. A standard approach to this problem is to first translate the source documents and then perform extractive summarization to find relevant snippets. However, in a cross-lingual setting, the query term does not necessarily appear in the translations of relevant documents. In this work, we show that constrained machine translation and constrained post-editing can improve human relevance judgments by including a query term in a summary when its translation appears in the source document. We also present several strategies for selecting only certain documents for regeneration which yield further improvements.
2021
Cross-language Sentence Selection via Data Augmentation and Rationale Training
Yanda Chen | Chris Kedzie | Suraj Nair | Petra Galuscakova | Rui Zhang | Douglas Oard | Kathleen McKeown
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
This paper proposes an approach to cross-language sentence selection in a low-resource setting. It uses data augmentation and negative sampling techniques on noisy parallel sentence data to directly learn a cross-lingual embedding-based query relevance model. Results show that this approach performs as well as or better than multiple state-of-the-art machine translation + monolingual retrieval systems trained on the same parallel data. Moreover, when a rationale training secondary objective is applied to encourage the model to match word alignment hints from a phrase-based statistical machine translation model, consistent improvements are seen across three language pairs (English-Somali, English-Swahili and English-Tagalog) over a variety of state-of-the-art baselines.
Segmenting Subtitles for Correcting ASR Segmentation Errors
David Wan | Chris Kedzie | Faisal Ladhak | Elsbeth Turcan | Petra Galuscakova | Elena Zotkina | Zhengping Jiang | Peter Bell | Kathleen McKeown
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Typical ASR systems segment the input audio into utterances using purely acoustic information, which may not resemble the sentence-like units that are expected by conventional machine translation (MT) systems for Spoken Language Translation. In this work, we propose a model for correcting the acoustic segmentation of ASR models for low-resource languages to improve performance on downstream tasks. We propose the use of subtitles as a proxy dataset for correcting ASR acoustic segmentation, creating synthetic acoustic utterances by modeling common error modes. We train a neural tagging model for correcting ASR acoustic segmentation and show that it improves downstream performance on MT and audio-document cross-language information retrieval (CLIR).
2020
MATERIALizing Cross-Language Information Retrieval: A Snapshot
Petra Galuscakova | Douglas Oard | Joe Barrow | Suraj Nair | Han-Chin Shing | Elena Zotkina | Ramy Eskander | Rui Zhang
Proceedings of the workshop on Cross-Language Search and Summarization of Text and Speech (CLSSTS2020)
At about the midpoint of the IARPA MATERIAL program in October 2019, an evaluation was conducted on systems’ abilities to find Lithuanian documents based on English queries. Subsequently, both the Lithuanian test collection and results from all three teams were made available for detailed analysis. This paper capitalizes on that opportunity to examine what is working well at this stage of the program and to identify some promising directions for future work.
2018
Low Resource Methods for Medieval Document Sections Analysis
Petra Galuščáková | Lucie Neužilová
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
2013
PhraseFix: Statistical Post-Editing of TectoMT
Petra Galuščáková | Martin Popel | Ondřej Bojar
Proceedings of the Eighth Workshop on Statistical Machine Translation
2012
Selecting Data for English-to-Czech Machine Translation
Aleš Tamchyna | Petra Galuščáková | Amir Kamran | Miloš Stanojević | Ondřej Bojar
Proceedings of the Seventh Workshop on Statistical Machine Translation
The Joy of Parallelism with CzEng 1.0
Ondřej Bojar | Zdeněk Žabokrtský | Ondřej Dušek | Petra Galuščáková | Martin Majliš | David Mareček | Jiří Maršík | Michal Novák | Martin Popel | Aleš Tamchyna
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
CzEng 1.0 is an updated release of our Czech-English parallel corpus, freely available for non-commercial research or educational purposes. In this release, we approximately doubled the corpus size, reaching 15 million sentence pairs (about 200 million tokens per language). More importantly, we carefully filtered the data to reduce the amount of non-matching sentence pairs. CzEng 1.0 is automatically aligned at the level of sentences as well as words. We provide not only the plain text representation, but also automatic morphological tags, surface syntactic as well as deep syntactic dependency parse trees and automatic co-reference links in both English and Czech. This paper describes key properties of the released resource including the distribution of text domains, the corpus data formats, and a toolkit to handle the provided rich annotation. We also summarize the procedure of the rich annotation (incl. co-reference resolution) and of the automatic filtering. Finally, we provide some suggestions on exploiting such an automatically annotated sentence-parallel corpus.
2011
Two-step translation with grammatical post-processing
David Mareček | Rudolf Rosa | Petra Galuščáková | Ondřej Bojar
Proceedings of the Sixth Workshop on Statistical Machine Translation