David Mareček


2024

Exploring Interpretability of Independent Components of Word Embeddings with Automated Word Intruder Test
Tomáš Musil | David Mareček
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Independent Component Analysis (ICA) is an algorithm originally developed for finding separate sources in a mixed signal, such as a recording of multiple people in the same room speaking at the same time. Unlike Principal Component Analysis (PCA), ICA permits the representation of a word as an unstructured set of features, without any particular feature being deemed more significant than the others. In this paper, we use ICA to analyze word embeddings. We find that ICA can be used to identify semantic features of words and that these features can easily be combined to search for words that satisfy the combination. We show that most of the independent components represent such features. To quantify the interpretability of the components, we use the word intruder test, performed both by humans and by large language models. We propose the automated version of the word intruder test as a fast and inexpensive way of quantifying vector interpretability without the need for human effort.
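
As a rough illustration of this kind of analysis (a hedged sketch assuming a generic embedding matrix and vocabulary list; the names below are hypothetical, not the authors' code), a few lines of Python with scikit-learn's FastICA suffice to inspect which words score highest on each independent component:

import numpy as np
from sklearn.decomposition import FastICA

def top_words_per_component(embeddings, vocab, n_components=100, k=10):
    # Decompose the embedding matrix into independent components and
    # list the k words scoring highest on each component.
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(embeddings)  # shape: (n_words, n_components)
    return {c: [vocab[i] for i in np.argsort(-sources[:, c])[:k]]
            for c in range(n_components)}

Inspecting the top-scoring words per component is also exactly what the word intruder test operates on: an intruder word is mixed into such a list, and a human or LLM judge tries to spot it.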

2023

Tokenization Impacts Multilingual Language Modeling: Assessing Vocabulary Allocation and Overlap Across Languages
Tomasz Limisiewicz | Jiří Balhar | David Mareček
Findings of the Association for Computational Linguistics: ACL 2023

Multilingual language models have recently gained attention as a promising solution for representing multiple languages in a single model. In this paper, we propose new criteria to evaluate the quality of lexical representation and vocabulary overlap observed in sub-word tokenizers. Our findings show that the overlap of vocabulary across languages can actually be detrimental to certain downstream tasks (POS tagging, dependency tree labeling). In contrast, NER and sentence-level tasks (cross-lingual retrieval, NLI) benefit from sharing vocabulary. We also observe that the coverage of language-specific tokens in the multilingual vocabulary significantly impacts word-level tasks. Our study offers a deeper understanding of the role of tokenizers in multilingual language models and guidelines for future model developers to choose the most suitable tokenizer for their specific application before undertaking costly model pre-training.
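
One simple way to make "vocabulary overlap" concrete (a sketch of the general idea, not the paper's exact criteria) is the Jaccard similarity between the sets of sub-word types a shared tokenizer produces on two monolingual corpora:

from transformers import AutoTokenizer

def vocab_overlap(tokenizer_name, corpus_a, corpus_b):
    # Jaccard overlap of the sub-word types observed in two corpora,
    # each given as an iterable of text lines.
    tok = AutoTokenizer.from_pretrained(tokenizer_name)
    types_a = {t for line in corpus_a for t in tok.tokenize(line)}
    types_b = {t for line in corpus_b for t in tok.tokenize(line)}
    return len(types_a & types_b) / len(types_a | types_b)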

The Functional Relevance of Probed Information: A Case Study
Michael Hanna | Roberto Zamparelli | David Mareček
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Recent studies have shown that transformer models like BERT rely on number information encoded in their representations of sentences’ subjects and head verbs when performing subject-verb agreement. However, probing experiments suggest that subject number is also encoded in the representations of all words in such sentences. In this paper, we use causal interventions to show that BERT only uses the subject plurality information encoded in its representations of the subject and words that agree with it in number. We also demonstrate that current probing metrics are unable to determine which words’ representations contain functionally relevant information. This both provides a revised view of subject-verb agreement in language models and suggests potential pitfalls for current probe usage and evaluation.
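
The following sketch shows what such a causal intervention can look like in practice: an interchange intervention that patches a plural subject's hidden state into a singular sentence and watches the masked verb's number preference. The layer, token positions, and sentences are illustrative assumptions, not the paper's exact setup.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
LAYER, SUBJ_POS = 6, 2  # assumed layer and subject-token position

src = tok("The authors [MASK] here.", return_tensors="pt")  # plural source
tgt = tok("The author [MASK] here.", return_tensors="pt")   # singular target

with torch.no_grad():
    # hidden_states[i + 1] is the output of encoder layer i
    src_hidden = model(**src, output_hidden_states=True).hidden_states[LAYER + 1]

def patch(module, inputs, output):
    # Overwrite the subject's representation with the plural one.
    output[0][:, SUBJ_POS] = src_hidden[:, SUBJ_POS]
    return output

handle = model.bert.encoder.layer[LAYER].register_forward_hook(patch)
with torch.no_grad():
    logits = model(**tgt).logits
handle.remove()

mask_pos = (tgt.input_ids == tok.mask_token_id).nonzero()[0, 1]
for verb in ("is", "are"):
    print(verb, logits[0, mask_pos, tok.convert_tokens_to_ids(verb)].item())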

Exploring the Impact of Training Data Distribution and Subword Tokenization on Gender Bias in Machine Translation
Bar Iluz | Tomasz Limisiewicz | Gabriel Stanovsky | David Mareček
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

2022

Don’t Forget About Pronouns: Removing Gender Bias in Language Models Without Losing Factual Gender Information
Tomasz Limisiewicz | David Mareček
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)

The representations in large language models contain multiple types of gender information. We focus on two types of such signals in English texts: factual gender information, which is a grammatical or semantic property, and gender bias, which is the correlation between a word and a specific gender. With probing, we can disentangle the model’s embeddings and identify the components encoding each type of information. We aim to diminish the stereotypical bias in the representations while preserving the factual gender signal. Our filtering method shows that it is possible to decrease the bias of gender-neutral profession names without significant deterioration of language modeling capabilities. The findings can be applied to language generation to mitigate reliance on stereotypes while preserving gender agreement in coreferences.
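
A minimal sketch of the underlying idea: project representations away from a single learned bias direction while leaving everything else intact. The paper's probe-based filtering is more nuanced, and `bias_direction` here is assumed to come from a trained probe.

import numpy as np

def remove_bias_direction(embeddings, bias_direction):
    # Project each embedding onto the hyperplane orthogonal to the
    # bias direction; components along other directions are untouched.
    d = bias_direction / np.linalg.norm(bias_direction)
    return embeddings - np.outer(embeddings @ d, d)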

THEaiTRobot: An Interactive Tool for Generating Theatre Play Scripts
Rudolf Rosa | Patrícia Schmidtová | Alisa Zakhtarenko | Ondrej Dusek | Tomáš Musil | David Mareček | Saad Ul Islam | Marie Novakova | Klara Vosecka | Daniel Hrbek | David Kostak
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations

We present a free online demo of THEaiTRobot, an open-source bilingual tool for interactively generating theatre play scripts, in two versions. THEaiTRobot 1.0 uses the GPT-2 language model with minimal adjustments. THEaiTRobot 2.0 uses two models created by fine-tuning GPT-2 on purposefully collected and processed datasets, together with several other components, generating play scripts in a hierarchical fashion (title → synopsis → script). The underlying tool is used in the THEaiTRE project to generate scripts for plays, which are then performed on stage by a professional theatre.

GPT-2-based Human-in-the-loop Theatre Play Script Generation
Rudolf Rosa | Patrícia Schmidtová | Ondřej Dušek | Tomáš Musil | David Mareček | Saad Obaid | Marie Nováková | Klára Vosecká | Josef Doležal
Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)

We experiment with adapting generative language models for the generation of long coherent narratives in the form of theatre plays. Since fully automatic generation of whole plays is not currently feasible, we created an interactive tool that allows a human user to steer the generation somewhat while minimizing intervention. We pursue two approaches to long-text generation: a flat generation with summarization of context, and a hierarchical text-to-text two-stage approach, where a synopsis is generated first and then used to condition generation of the final script. Our preliminary results and discussions with theatre professionals show improvements over vanilla language model generation, but also identify important limitations of our approach.
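
In outline, the hierarchical approach can be sketched as two chained generation calls, where the synopsis generated in the first stage conditions the second. The prompts and lengths below are illustrative assumptions, not the project's actual templates or fine-tuned models.

from transformers import pipeline

gen = pipeline("text-generation", model="gpt2")
# Stage 1: generate a synopsis; Stage 2: condition the script on it.
synopsis = gen("Synopsis of a play about a robot who learns to love:",
               max_new_tokens=60)[0]["generated_text"]
script = gen(synopsis + "\n\nScript:\n", max_new_tokens=120)[0]["generated_text"]
print(script)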

2021

Introducing Orthogonal Constraint in Structural Probes
Tomasz Limisiewicz | David Mareček
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

With the recent success of pre-trained models in NLP, a significant focus was put on interpreting their representations. One of the most prominent approaches is structural probing (Hewitt and Manning, 2019), where a linear projection of word embeddings is performed in order to approximate the topology of dependency structures. In this work, we introduce a new type of structural probing, where the linear projection is decomposed into 1. an isomorphic space rotation and 2. a linear scaling that identifies and scales the most relevant dimensions. In addition to syntactic dependency, we evaluate our method on two novel tasks (lexical hypernymy and position in a sentence). We jointly train the probes for multiple tasks and experimentally show that lexical and syntactic information is separated in the representations. Moreover, the orthogonal constraint makes the structural probes less vulnerable to memorization.
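
A compact sketch of the decomposed projection (orthogonal rotation followed by per-dimension scaling), using PyTorch's orthogonal parametrization. Dimensions and the training loop are assumptions left out of the sketch.

import torch

class OrthogonalStructuralProbe(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        # 1. isomorphic space rotation, kept orthogonal throughout training
        self.rotation = torch.nn.Linear(dim, dim, bias=False)
        torch.nn.utils.parametrizations.orthogonal(self.rotation)
        # 2. scaling that picks out the task-relevant dimensions
        self.scaling = torch.nn.Parameter(torch.ones(dim))

    def forward(self, h):
        return self.rotation(h) * self.scaling

    def squared_distance(self, h_i, h_j):
        # Probed distance approximating, e.g., tree distance of two words.
        return ((self.forward(h_i) - self.forward(h_j)) ** 2).sum(-1)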

Analyzing BERT’s Knowledge of Hypernymy via Prompting
Michael Hanna | David Mareček
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

The high performance of large pretrained language models (LLMs) such as BERT on NLP tasks has prompted questions about BERT’s linguistic capabilities, and how they differ from humans’. In this paper, we approach this question by examining BERT’s knowledge of lexical semantic relations. We focus on hypernymy, the “is-a” relation that relates a word to a superordinate category. We use a prompting methodology to simply ask BERT what the hypernym of a given word is. We find that, in a setting where all hypernyms are guessable via prompting, BERT knows hypernyms with up to 57% accuracy. Moreover, BERT with prompting outperforms other unsupervised models for hypernym discovery even in an unconstrained scenario. However, BERT’s predictions and performance on a dataset containing uncommon hyponyms and hypernyms indicate that its knowledge of hypernymy is still limited.
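
The prompting setup is easy to reproduce in spirit with a fill-mask pipeline. The template below is an assumed example, not necessarily one of the paper's prompts.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("A robin is a type of [MASK].", top_k=5):
    print(pred["token_str"], round(pred["score"], 3))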

Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes
Tomasz Limisiewicz | David Mareček
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

State-of-the-art contextual embeddings are obtained from large language models available only for a few languages. For others, we need to learn representations using a multilingual model. There is an ongoing debate on whether multilingual embeddings can be aligned in a space shared across many languages. The novel Orthogonal Structural Probe (Limisiewicz and Mareček, 2021) allows us to answer this question for specific linguistic features and learn a projection based only on monolingual annotated datasets. We evaluate syntactic (UD) and lexical (WordNet) structural information encoded in mBERT’s contextual representations for nine diverse languages. We observe that for languages closely related to English, no transformation is needed. The evaluated information is encoded in a shared cross-lingual embedding space. For other languages, it is beneficial to apply an orthogonal transformation learned separately for each language. We successfully apply our findings to zero-shot and few-shot cross-lingual parsing.

2020

Universal Dependencies According to BERT: Both More Specific and More General
Tomasz Limisiewicz | David Mareček | Rudolf Rosa
Findings of the Association for Computational Linguistics: EMNLP 2020

This work focuses on analyzing the form and extent of syntactic abstraction captured by BERT by extracting labeled dependency trees from self-attentions. Previous work showed that individual BERT heads tend to encode particular dependency relation types. We extend these findings by explicitly comparing BERT relations to Universal Dependencies (UD) annotations, showing that they often do not match one-to-one. We suggest a method for relation identification and syntactic tree construction. Our approach produces significantly more consistent dependency trees than previous work, showing that it better explains the syntactic abstractions in BERT. At the same time, it can be successfully applied with only a minimal amount of supervision and generalizes well across languages.
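
One simple way to turn attention weights into a dependency tree (a hedged sketch of the general recipe; the paper's relation-identification and tree-construction method is more involved) is to read the attention matrix as head-dependent edge scores and extract a maximum spanning arborescence:

import networkx as nx

def attention_to_tree(attn, root=0):
    # attn[dep, head]: how much token `dep` attends to token `head`.
    n = attn.shape[0]
    g = nx.DiGraph()
    for head in range(n):
        for dep in range(n):
            if head != dep and dep != root:
                g.add_edge(head, dep, weight=float(attn[dep, head]))
    # Chu-Liu/Edmonds: best tree in which every token has exactly one head.
    return sorted(nx.maximum_spanning_arborescence(g).edges())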

2019

Derivational Morphological Relations in Word Embeddings
Tomáš Musil | Jonáš Vidra | David Mareček
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Derivation is a type of word-formation process which creates new words from existing ones by adding, changing, or deleting affixes. In this paper, we explore the potential of word embeddings to identify properties of word derivations in the morphologically rich Czech language. We extract derivational relations between pairs of words from DeriNet, a Czech lexical network which organizes almost one million Czech lemmas into derivational trees. For each such pair, we compute the difference of the embeddings of the two words, and perform unsupervised clustering of the resulting vectors. Our results show that these clusters largely match manually annotated semantic categories of the derivational relations (e.g. the relation ‘bake–baker’ belongs to the category ‘actor’, and a correct clustering puts it into the same cluster as ‘govern–governor’).
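
The clustering step can be sketched in a few lines, assuming `emb` maps a word to its vector and `pairs` lists base-derivative pairs taken from DeriNet (both placeholders here):

import numpy as np
from sklearn.cluster import KMeans

def cluster_derivational_relations(pairs, emb, n_clusters=20):
    # Difference vector of each pair, e.g. emb['baker'] - emb['bake'].
    diffs = np.stack([emb[derived] - emb[base] for base, derived in pairs])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(diffs)
    return dict(zip(pairs, labels))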

From Balustrades to Pierre Vinken: Looking for Syntax in Transformer Self-Attentions
David Mareček | Rudolf Rosa
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

We inspect the multi-head self-attention in Transformer NMT encoders for three source languages, looking for patterns that could have a syntactic interpretation. In many of the attention heads, we frequently find sequences of consecutive states attending to the same position, which resemble syntactic phrases. We propose a transparent deterministic method of quantifying the amount of syntactic information present in the self-attentions, based on automatically building and evaluating phrase-structure trees from the phrase-like sequences. We compare the resulting trees to existing constituency treebanks, both manually and by computing precision and recall.

2018

CUNI x-ling: Parsing Under-Resourced Languages in CoNLL 2018 UD Shared Task
Rudolf Rosa | David Mareček
Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies

This is a system description paper for the CUNI x-ling submission to the CoNLL 2018 UD Shared Task. We focused on parsing under-resourced languages with little or no training data available. We employed a wide range of approaches, including simple word-based treebank translation, combination of delexicalized parsers, and exploitation of available morphological dictionaries, with a dedicated setup tailored to each of the languages. In the official evaluation, our submission was identified as the clear winner of the Low-resource languages category.

Extracting Syntactic Trees from Transformer Encoder Self-Attentions
David Mareček | Rudolf Rosa
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

We report work in progress on extracting sentence tree structures from the encoder’s self-attention weights when translating into another language using the Transformer neural network architecture. We visualize the structures and discuss their characteristics with respect to existing syntactic theories and annotations.

Input Combination Strategies for Multi-Source Transformer Decoder
Jindřich Libovický | Jindřich Helcl | David Mareček
Proceedings of the Third Conference on Machine Translation: Research Papers

In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoder-decoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.
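
As an illustration, the "parallel" strategy can be sketched as two encoder-decoder attentions whose context vectors are summed into the residual stream. This is a simplified stand-in for one of the paper's four variants; the module name and dimensions are assumptions.

import torch

class ParallelMultiSourceAttention(torch.nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn_a = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = torch.nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, queries, enc_a, enc_b):
        # Attend to each source encoder independently, then combine
        # the two context vectors additively with a residual connection.
        ctx_a, _ = self.attn_a(queries, enc_a, enc_a)
        ctx_b, _ = self.attn_b(queries, enc_b, enc_b)
        return queries + ctx_a + ctx_b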

2017

Slavic Forest, Norwegian Wood
Rudolf Rosa | Daniel Zeman | David Mareček | Zdeněk Žabokrtský
Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)

We once had a corp, or should we say, it once had us
They showed us its tags, isn’t it great, unified tags
They asked us to parse and they told us to use everything
So we looked around and we noticed there was near nothing
We took other langs, bitext aligned: words one-to-one
We played for two weeks, and then they said, here is the test
The parser kept training till morning, just until deadline
So we had to wait and hope what we get would be just fine
And, when we awoke, the results were done, we saw we’d won
So, we wrote this paper, isn’t it good, Norwegian wood.

Communication with Robots using Multilayer Recurrent Networks
Bedřich Pišl | David Mareček
Proceedings of the First Workshop on Language Grounding for Robotics

In this paper, we describe an improvement on the task of giving instructions to robots in a simulated block world using unrestricted natural language commands.

CUNI submission in WMT17: Chimera goes neural
Roman Sudarikov | David Mareček | Tom Kocmi | Dušan Variš | Ondřej Bojar
Proceedings of the Second Conference on Machine Translation

CUNI Experiments for WMT17 Metrics Task
David Mareček | Ondřej Bojar | Ondřej Hübsch | Rudolf Rosa | Dušan Variš
Proceedings of the Second Conference on Machine Translation

2016

Planting Trees in the Desert: Delexicalized Tagging and Parsing Combined
Daniel Zeman | David Mareček | Zhiwei Yu | Zdeněk Žabokrtský
Proceedings of the 30th Pacific Asia Conference on Language, Information and Computation: Oral Papers

Merged bilingual trees based on Universal Dependencies in Machine Translation
David Mareček
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers

Moses & Treex Hybrid MT Systems Bestiary
Rudolf Rosa | Martin Popel | Ondřej Bojar | David Mareček | Ondřej Dušek
Proceedings of the 2nd Deep Machine Translation Workshop

If You Even Don’t Have a Bit of Bible: Learning Delexicalized POS Taggers
Zhiwei Yu | David Mareček | Zdeněk Žabokrtský | Daniel Zeman
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Part-of-speech (POS) induction is one of the most popular tasks in research on unsupervised NLP. Various unsupervised and semi-supervised methods have been proposed to tag an unseen language. However, many of them require some partial understanding of the target language because they rely on dictionaries or parallel corpora such as the Bible. In this paper, we propose a different method, named delexicalized tagging, for which we only need a raw corpus of the target language. We transfer tagging models trained on annotated corpora of one or more resource-rich languages. We employ language-independent features such as word length, frequency, neighborhood entropy, character classes (alphabetic vs. numeric vs. punctuation), etc. We demonstrate that such features can, to a certain extent, serve as predictors of the part of speech, represented by the universal POS tag.
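
A simplified extraction of such language-independent features might look as follows (a subset only; neighborhood entropy and the paper's exact feature inventory are omitted, and `counts` is assumed to be a word-frequency table built from the raw corpus):

import math

def delexicalized_features(word, counts, total_tokens):
    # Surface-only features: no dictionary or parallel data needed.
    return {
        "length": len(word),
        "log_frequency": math.log(counts.get(word, 0) + 1),
        "relative_frequency": counts.get(word, 0) / total_tokens,
        "is_alphabetic": word.isalpha(),
        "is_numeric": word.isdigit(),
        "is_punctuation": all(not c.isalnum() for c in word),
    }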

2014

HamleDT 2.0: Thirty Dependency Treebanks Stanfordized
Rudolf Rosa | Jan Mašek | David Mareček | Martin Popel | Daniel Zeman | Zdeněk Žabokrtský
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

We present HamleDT 2.0 (HArmonized Multi-LanguagE Dependency Treebank). HamleDT 2.0 is a collection of 30 existing treebanks harmonized into a common annotation style, the Prague Dependencies, and further transformed into Stanford Dependencies, a treebank annotation style that has become popular in recent years. We use the newest basic Universal Stanford Dependencies, without added language-specific subtypes. We describe both annotation styles, including the adjustments we had to make, and provide details about the conversion process. We also discuss the differences between the two styles, evaluating their advantages and disadvantages, and note the effects of the differences on the conversion. We regard the stanfordization as generally successful, although we admit several shortcomings, especially in the distinction between direct and indirect objects, that have to be addressed in the future. We release part of HamleDT 2.0 freely; we are not allowed to redistribute the whole dataset, but we do provide the conversion pipeline.

2013

Stop-probability estimates computed on a large corpus improve Unsupervised Dependency Parsing
David Mareček | Milan Straka
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Coordination Structures in Dependency Treebanks
Martin Popel | David Mareček | Jan Štěpánek | Daniel Zeman | Zdeněk Žabokrtský
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Deepfix: Statistical Post-editing of Statistical Machine Translation Using Deep Syntactic Analysis
Rudolf Rosa | David Mareček | Aleš Tamchyna
51st Annual Meeting of the Association for Computational Linguistics Proceedings of the Student Research Workshop

2012

Unsupervised Dependency Parsing using Reducibility and Fertility features
David Mareček | Zdeněk Žabokrtský
Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure

Formemes in English-Czech Deep Syntactic MT
Ondřej Dušek | Zdeněk Žabokrtský | Martin Popel | Martin Majliš | Michal Novák | David Mareček
Proceedings of the Seventh Workshop on Statistical Machine Translation

DEPFIX: A System for Automatic Correction of Czech MT Outputs
Rudolf Rosa | David Mareček | Ondřej Dušek
Proceedings of the Seventh Workshop on Statistical Machine Translation

Using Parallel Features in Parsing of Machine-Translated Sentences for Correction of Grammatical Errors
Rudolf Rosa | Ondřej Dušek | David Mareček | Martin Popel
Proceedings of the Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation

HamleDT: To Parse or Not to Parse?
Daniel Zeman | David Mareček | Martin Popel | Loganathan Ramasamy | Jan Štěpánek | Zdeněk Žabokrtský | Jan Hajič
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

We propose HamleDT ― HArmonized Multi-LanguagE Dependency Treebank. HamleDT is a compilation of existing dependency treebanks (or dependency conversions of other treebanks), transformed so that they all conform to the same annotation style. While the license terms prevent us from directly redistributing the corpora, most of them are easily acquirable for research purposes. What we provide instead is the software that normalizes tree structures in the data obtained by the user from their original providers.

The Joy of Parallelism with CzEng 1.0
Ondřej Bojar | Zdeněk Žabokrtský | Ondřej Dušek | Petra Galuščáková | Martin Majliš | David Mareček | Jiří Maršík | Michal Novák | Martin Popel | Aleš Tamchyna
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

CzEng 1.0 is an updated release of our Czech-English parallel corpus, freely available for non-commercial research or educational purposes. In this release, we approximately doubled the corpus size, reaching 15 million sentence pairs (about 200 million tokens per language). More importantly, we carefully filtered the data to reduce the amount of non-matching sentence pairs. CzEng 1.0 is automatically aligned at the level of sentences as well as words. We provide not only the plain text representation, but also automatic morphological tags, surface syntactic as well as deep syntactic dependency parse trees and automatic co-reference links in both English and Czech. This paper describes key properties of the released resource including the distribution of text domains, the corpus data formats, and a toolkit to handle the provided rich annotation. We also summarize the procedure of the rich annotation (incl. co-reference resolution) and of the automatic filtering. Finally, we provide some suggestions on exploiting such an automatically annotated sentence-parallel corpus.

Exploiting Reducibility in Unsupervised Dependency Parsing
David Mareček | Zdeněk Žabokrtský
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

2011

Two-step translation with grammatical post-processing
David Mareček | Rudolf Rosa | Petra Galuščáková | Ondřej Bojar
Proceedings of the Sixth Workshop on Statistical Machine Translation

Influence of Parser Choice on Dependency-Based MT
Martin Popel | David Mareček | Nathan Green | Zdeněk Žabokrtský
Proceedings of the Sixth Workshop on Statistical Machine Translation

Gibbs Sampling with Treeness Constraint in Unsupervised Dependency Parsing
David Mareček | Zdeněk Žabokrtský
Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing

2010

Tackling Sparse Data Issue in Machine Translation Evaluation
Ondřej Bojar | Kamil Kos | David Mareček
Proceedings of the ACL 2010 Conference Short Papers

Maximum Entropy Translation Model in Dependency-Based MT Framework
Zdeněk Žabokrtský | Martin Popel | David Mareček
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

2009

English-Czech MT in 2008
Ondřej Bojar | David Mareček | Václav Novák | Martin Popel | Jan Ptáček | Jan Rouš | Zdeněk Žabokrtský
Proceedings of the Fourth Workshop on Statistical Machine Translation

Converting Russian Treebank SynTagRus into Praguian PDT Style
David Mareček | Natalia Kljueva
Proceedings of the Workshop Multilingual resources, technologies and evaluation for central and Eastern European languages

2008

Automatic alignment of Czech and English deep syntactic dependency trees
David Mareček | Zdeněk Žabokrtský | Václav Novák
Proceedings of the 12th Annual Conference of the European Association for Machine Translation