Karol Kaczmarek
2022
Challenging America: Modeling language in longer time scales
Jakub Pokrywka | Filip Graliński | Krzysztof Jassem | Karol Kaczmarek | Krzysztof Jurkiewicz | Piotr Wierzchon
Findings of the Association for Computational Linguistics: NAACL 2022
The aim of the paper is to apply, to historical texts, the methodology commonly used to solve various NLP tasks defined for contemporary data, i.e., pre-training and fine-tuning large Transformer models. This paper introduces an ML challenge, named Challenging America (ChallAm), based on OCR-ed excerpts from historical newspapers collected from the Chronicling America portal. ChallAm provides a dataset of clippings, labeled with metadata on their origin and paired with their textual contents retrieved by an OCR tool. Three publicly available ML tasks are defined in the challenge: to determine the article date, to detect the location of the issue, and to deduce a word in a text gap (cloze test). Strong baselines are provided for all three ChallAm tasks. In particular, we pre-trained a RoBERTa model from scratch on the historical texts. We also discuss the issues of discrimination and hate speech present in the historical American texts.
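For orientation, the following is a minimal sketch of what pre-training a RoBERTa masked language model from scratch on OCR-ed clippings could look like with the Hugging Face libraries. It is not the paper's released training setup: the file name ocr_clippings.txt, the tokenizer directory challam-tokenizer, and all hyperparameters are illustrative assumptions.

from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Hypothetical byte-level BPE tokenizer trained beforehand on the same corpus.
tokenizer = RobertaTokenizerFast.from_pretrained("challam-tokenizer")

# Randomly initialised model, i.e. trained from scratch rather than fine-tuned.
config = RobertaConfig(
    vocab_size=len(tokenizer),
    max_position_embeddings=514,
    num_hidden_layers=12,
    num_attention_heads=12,
    hidden_size=768,
)
model = RobertaForMaskedLM(config)

# Hypothetical plain-text file with one OCR-ed clipping per line.
dataset = load_dataset("text", data_files={"train": "ocr_clippings.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard masked-language-modeling objective with 15% token masking.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

training_args = TrainingArguments(
    output_dir="roberta-challam",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    save_steps=10_000,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=collator,
    train_dataset=tokenized["train"],
)
trainer.train()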
2020
Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines
Łukasz Borchmann | Dawid Wisniewski | Andrzej Gretkowski | Izabela Kosmala | Dawid Jurkiewicz | Łukasz Szałkiewicz | Gabriela Pałka | Karol Kaczmarek | Agnieszka Kaliska | Filip Graliński
Findings of the Association for Computational Linguistics: EMNLP 2020
We propose a new shared task of semantic retrieval from legal texts, in which so-called contract discovery is to be performed: legal clauses are extracted from documents, given a few examples of similar clauses from other legal acts. The task differs substantially from conventional NLI and shared tasks on legal information extraction (e.g., one has to identify a text span instead of a single document, page, or paragraph). The specification of the proposed task is followed by an evaluation of multiple solutions within the unified framework proposed for this branch of methods. It is shown that state-of-the-art pretrained encoders fail to provide satisfactory results on the proposed task. In contrast, Language Model-based solutions perform better, especially when unsupervised fine-tuning is applied. Besides the ablation studies, we address the question of how detection accuracy for relevant text fragments depends on the number of available examples. In addition to the dataset and reference results, LMs specialized in the legal domain were made publicly available.
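To illustrate the task setup only (not the baselines evaluated in the paper), the sketch below shows a naive few-shot clause-retrieval routine: candidate spans are enumerated as contiguous sentence windows of a target document, and the span most similar to the handful of example clauses is returned under a TF-IDF cosine-similarity score. The function names, the scoring choice, and the toy data are all assumptions made for this example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def candidate_spans(sentences, max_len=3):
    # Contiguous windows of up to max_len sentences serve as candidate clause spans.
    spans = []
    for i in range(len(sentences)):
        for j in range(i + 1, min(i + max_len, len(sentences)) + 1):
            spans.append(" ".join(sentences[i:j]))
    return spans

def retrieve_clause(example_clauses, target_sentences, max_len=3):
    # Score every candidate span against the pooled example clauses and
    # return the best-scoring span together with its similarity.
    spans = candidate_spans(target_sentences, max_len)
    vectorizer = TfidfVectorizer().fit(example_clauses + spans)
    query = vectorizer.transform([" ".join(example_clauses)])
    candidates = vectorizer.transform(spans)
    scores = cosine_similarity(query, candidates)[0]
    best = scores.argmax()
    return spans[best], float(scores[best])

# Toy usage with made-up example clauses and a made-up target document.
examples = [
    "This agreement shall be governed by the laws of the State of New York.",
    "The contract is governed by and construed in accordance with English law.",
]
document = [
    "The parties agree to the terms set out below.",
    "Payment is due within thirty days of invoicing.",
    "This agreement shall be governed by the laws of Delaware.",
    "Either party may terminate with ninety days written notice.",
]
span, score = retrieve_clause(examples, document)
print(span, score)

A span-level formulation like this differs from document retrieval in exactly the way the abstract highlights: the unit to be identified is a clause inside a larger document, so candidate generation and scoring both operate below the document level.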