Grzegorz Chrupała

Also published as: Grzegorz Chrupala


2024

Encoding of lexical tone in self-supervised models of spoken language
Gaofei Shen | Michaela Watkins | Afra Alishahi | Arianna Bisazza | Grzegorz Chrupała
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Interpretability research has shown that self-supervised Spoken Language Models (SLMs) encode a wide variety of features in human speech from the acoustic, phonetic, phonological, syntactic and semantic levels, to speaker characteristics. The bulk of prior research on representations of phonology has focused on segmental features such as phonemes; the encoding of suprasegmental phonology (such as tone and stress patterns) in SLMs is not yet well understood. Tone is a suprasegmental feature that is present in more than half of the world’s languages. This paper aims to analyze the tone encoding capabilities of SLMs, using Mandarin and Vietnamese as case studies. We show that SLMs encode lexical tone to a significant degree even when they are trained on data from non-tonal languages. We further find that SLMs behave similarly to native and non-native human participants in tone and consonant perception studies, but they do not follow the same developmental trajectory.

2023

Putting Natural in Natural Language Processing
Grzegorz Chrupała
Findings of the Association for Computational Linguistics: ACL 2023

Human language is firstly spoken and only secondarily written. Text, however, is a very convenient and efficient representation of language, and modern civilization has made it ubiquitous. Thus the field of NLP has overwhelmingly focused on processing written rather than spoken language. Work on spoken language, on the other hand, has been siloed off within the largely separate speech processing community, which has been inordinately preoccupied with transcribing speech into text. Recent advances in deep learning have led to a fortuitous convergence in methods between speech processing and mainstream NLP. Arguably, the time is ripe for a unification of these two fields, and for starting to take spoken language seriously as the primary mode of human communication. Truly natural language processing could lead to better integration with the rest of language science, and to systems which are more data-efficient, more human-like, and able to communicate beyond the textual modality.

Quantifying Context Mixing in Transformers
Hosein Mohebbi | Willem Zuidema | Grzegorz Chrupała | Afra Alishahi
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Self-attention weights and their transformed variants have been the main source of information for analyzing token-to-token interactions in Transformer-based models. But despite their ease of interpretation, these weights are not faithful to the models’ decisions as they are only one part of an encoder, and other components in the encoder layer can have considerable impact on information mixing in the output representations. In this work, by expanding the scope of analysis to the whole encoder block, we propose Value Zeroing, a novel context mixing score customized for Transformers that provides us with a deeper understanding of how information is mixed at each encoder layer. We demonstrate the superiority of our context mixing score over other analysis methods through a series of complementary evaluations with different viewpoints based on linguistically informed rationales, probing, and faithfulness analysis.
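The gist of Value Zeroing can be illustrated with a toy example. The sketch below is a minimal sketch, not the authors' implementation: it assumes a single-head self-attention layer with a residual connection and omits LayerNorm and the feed-forward sublayer. It zeroes the value vector of each token j in turn and takes the cosine distance between each token's original and perturbed output as the degree of context mixing contributed by j.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def toy_layer(X, Wq, Wk, Wv, Wo, zero_value=None):
    """Single-head self-attention with a residual connection.
    If zero_value is set, that token's value vector is zeroed."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    if zero_value is not None:
        V = V.copy()
        V[zero_value] = 0.0
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))
    return X + (A @ V) @ Wo

def value_zeroing(X, Wq, Wk, Wv, Wo, eps=1e-9):
    """scores[i, j]: how much token i's output changes (cosine distance)
    when token j's value vector is zeroed; rows are normalized."""
    n = X.shape[0]
    base = toy_layer(X, Wq, Wk, Wv, Wo)
    scores = np.zeros((n, n))
    for j in range(n):
        out = toy_layer(X, Wq, Wk, Wv, Wo, zero_value=j)
        cos = (base * out).sum(1) / (
            np.linalg.norm(base, axis=1) * np.linalg.norm(out, axis=1) + eps)
        scores[:, j] = 1.0 - cos
    return scores / (scores.sum(axis=1, keepdims=True) + eps)

rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv, Wo = (0.3 * rng.normal(size=(d, d)) for _ in range(4))
print(value_zeroing(X, Wq, Wk, Wv, Wo).round(2))
```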

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers
Hosein Mohebbi | Grzegorz Chrupała | Willem Zuidema | Afra Alishahi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Transformers have become a key architecture in speech processing, but our understanding of how they build up representations of acoustic and linguistic structure is limited. In this study, we address this gap by investigating how measures of ‘context-mixing’ developed for text models can be adapted and applied to models of spoken language. We identify a linguistic phenomenon that is ideal for such a case study: homophony in French (e.g. livre vs livres), where a speech recognition model has to attend to syntactic cues such as determiners and pronouns in order to disambiguate spoken words with identical pronunciations and transcribe them while respecting grammatical agreement. We perform a series of controlled experiments and probing analyses on Transformer-based speech models. Our findings reveal that representations in encoder-only models effectively incorporate these cues to identify the correct transcription, whereas encoders in encoder-decoder models mainly relegate the task of capturing contextual dependencies to decoder modules.

2022

Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations
Chris Emmery | Ákos Kádár | Grzegorz Chrupała | Walter Daelemans
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Few studies have investigated the role of model-agnostic adversarial behavior in toxic content classification. As toxicity classifiers predominantly rely on lexical cues, (deliberately) creative and evolving language use can be detrimental to the utility of current corpora and state-of-the-art models when they are deployed for content moderation. The less training data is available, the more vulnerable models might become. This study is, to our knowledge, the first to investigate the effect of adversarial behavior and augmentation for cyberbullying detection. We demonstrate that model-agnostic lexical substitutions significantly hurt classifier performance. Moreover, when these perturbed samples are used for augmentation, we show that models become robust against word-level perturbations, at a slight trade-off in overall task performance. Augmentations proposed in prior work on toxicity prove to be less effective. Our results underline the need for such evaluations in online harm areas with small corpora.
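As a rough illustration of what a model-agnostic, word-level lexical substitution looks like (a sketch only: the vocabulary and embeddings below are made up, and the paper's actual substitution method may differ), each known word is swapped for its nearest neighbour in an embedding space, without ever querying the target classifier:

```python
import numpy as np

# Toy embedding table; in practice these would be pretrained vectors
# (the vocabulary and vectors here are made up for illustration).
VOCAB = ["stupid", "dumb", "silly", "idiot", "fool", "nice"]
rng = np.random.default_rng(42)
EMB = {w: rng.normal(size=16) for w in VOCAB}

def nearest_neighbour(word):
    """Return the closest other vocabulary word by cosine similarity."""
    if word not in EMB:
        return word
    v = EMB[word]
    best, best_sim = word, -1.0
    for other, u in EMB.items():
        if other == word:
            continue
        sim = v @ u / (np.linalg.norm(v) * np.linalg.norm(u))
        if sim > best_sim:
            best, best_sim = other, sim
    return best

def perturb(sentence):
    """Model-agnostic perturbation: swap each known word for its neighbour."""
    return " ".join(nearest_neighbour(w) for w in sentence.lower().split())

print(perturb("you are stupid"))
```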

Learning English with Peppa Pig
Mitja Nikolaus | Afra Alishahi | Grzegorz Chrupała
Transactions of the Association for Computational Linguistics, Volume 10

Recent computational models of the acquisition of spoken language via grounding in perception exploit associations between spoken and visual modalities and learn to represent speech and visual data in a joint vector space. A major unresolved issue from the point of view of ecological validity is the training data, typically consisting of images or videos paired with spoken descriptions of what is depicted. Such a setup guarantees an unrealistically strong correlation between speech and the visual data. In the real world the coupling between the linguistic and the visual modality is loose, and often confounded by correlations with non-semantic aspects of the speech signal. Here we address this shortcoming by using a dataset based on the children’s cartoon Peppa Pig. We train a simple bi-modal architecture on the portion of the data consisting of dialog between characters, and evaluate on segments containing descriptive narrations. Despite the weak and confounded signal in this training data, our model succeeds at learning aspects of the visual semantics of spoken language.

2021

Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling
Chris Emmery | Ákos Kádár | Grzegorz Chrupała
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author information. Adversarial stylometry intends to attack such models by rewriting an author’s text. Our research proposes several components to facilitate deployment of these adversarial attacks in the wild, where neither data nor target models are accessible. We introduce a transformer-based extension of a lexical replacement attack, and show it achieves high transferability when trained on a weakly labeled corpus—decreasing target model performance below chance. While not completely inconspicuous, our more successful attacks also prove notably less detectable by humans. Our framework therefore provides a promising direction for future privacy-preserving adversarial attacks.

Discrete representations in neural models of spoken language
Bertrand Higy | Lieke Gelderloos | Afra Alishahi | Grzegorz Chrupała
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

The distributed and continuous representations used by neural networks are at odds with representations employed in linguistics, which are typically symbolic. Vector quantization has been proposed as a way to induce discrete neural representations that are closer in nature to their linguistic counterparts. However, it is not clear which metrics are best suited to analyzing such discrete representations. We compare the merits of four commonly used metrics in the context of weakly supervised models of spoken language. We compare the results they show when applied to two different models, while systematically studying the effect of the placement and size of the discretization layer. We find that different evaluation regimes can give inconsistent results. While in most cases we can attribute these inconsistencies to the properties of the different metrics, one point of concern remains: the use of minimal pairs of phoneme triples as stimuli disadvantages larger discrete unit inventories, unlike metrics applied to complete utterances. Furthermore, while in general vector quantization induces representations that correlate with units posited in linguistics, the strength of this correlation is only moderate.
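For readers unfamiliar with the mechanism, the forward pass of a vector quantization layer can be sketched as follows. This is a minimal sketch: training additionally needs a straight-through gradient estimator and codebook/commitment losses, which are omitted here, and the sizes are illustrative.

```python
import numpy as np

def vector_quantize(H, codebook):
    """Map each continuous frame vector in H (T x d) to its nearest
    codebook entry (K x d); returns discrete codes and quantized vectors."""
    # Squared Euclidean distance between every frame and every code.
    d2 = ((H[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = d2.argmin(axis=1)          # discrete unit IDs, one per frame
    return codes, codebook[codes]      # quantized (discretized) activations

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 16))          # e.g. activations for 50 time steps
codebook = rng.normal(size=(32, 16))   # inventory of 32 discrete units
codes, Hq = vector_quantize(H, codebook)
print(codes[:10])
```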

2020

Textual Supervision for Visually Grounded Spoken Language Understanding
Bertrand Higy | Desmond Elliott | Grzegorz Chrupała
Findings of the Association for Computational Linguistics: EMNLP 2020

Visually-grounded models of spoken language understanding extract semantic information directly from speech, without relying on transcriptions. This is useful for low-resource languages, where transcriptions can be expensive or impossible to obtain. Recent work showed that these models can be improved if transcriptions are available at training time. However, it is not clear how an end-to-end approach compares to a traditional pipeline-based approach when one has access to transcriptions. Comparing different strategies, we find that the pipeline approach works better when enough text is available. With low-resource languages in mind, we also show that translations can be effectively used in place of transcriptions but more data is needed to obtain similar results.

Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Afra Alishahi | Yonatan Belinkov | Grzegorz Chrupała | Dieuwke Hupkes | Yuval Pinter | Hassan Sajjad
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

Learning to Understand Child-directed and Adult-directed Speech
Lieke Gelderloos | Grzegorz Chrupała | Afra Alishahi
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Speech directed to children differs from adult-directed speech in linguistic aspects such as repetition, word choice, and sentence length, as well as in aspects of the speech signal itself, such as prosodic and phonemic variation. Human language acquisition research indicates that child-directed speech helps language learners. This study explores the effect of child-directed speech when learning to extract semantic information from speech directly. We compare the task performance of models trained on adult-directed speech (ADS) and child-directed speech (CDS). We find indications that CDS helps in the initial stages of learning, but eventually, models trained on ADS reach comparable task performance, and generalize better. The results suggest that this is at least partially due to linguistic rather than acoustic properties of the two registers, as we see the same pattern when looking at models trained on acoustically comparable synthetic speech.

Analyzing analytical methods: The case of phonology in neural models of spoken language
Grzegorz Chrupała | Bertrand Higy | Afra Alishahi
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Despite the fast development of analysis techniques for NLP and speech processing systems, few systematic studies have been conducted to compare the strengths and weaknesses of each method. As a step in this direction we study the case of representations of phonology in neural network models of spoken language. We use two commonly applied analytical techniques, diagnostic classifiers and representational similarity analysis, to quantify to what extent neural activation patterns encode phonemes and phoneme sequences. We manipulate two factors that can affect the outcome of analysis. First, we investigate the role of learning by comparing neural activations extracted from trained versus randomly-initialized models. Second, we examine the temporal scope of the activations by probing both local activations corresponding to a few milliseconds of the speech signal, and global activations pooled over the whole utterance. We conclude that reporting analysis results with randomly initialized models is crucial, and that global-scope methods tend to yield more consistent and interpretable results; we recommend their use as a complement to local-scope diagnostic methods.
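A diagnostic classifier in this sense is simply a supervised probe trained on network activations. The sketch below shows the trained-versus-random comparison the paper argues is crucial, with synthetic arrays standing in for activations from a trained and a randomly initialized model (all data here is made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(activations, labels):
    """Train a linear diagnostic classifier to predict phoneme labels
    from activations and report held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Synthetic stand-ins for activations from a trained vs. a random model.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=2000)            # 10 phoneme classes
signal = np.eye(10)[labels]                        # label-correlated signal
acts_trained = np.hstack([signal + 0.5 * rng.normal(size=(2000, 10)),
                          rng.normal(size=(2000, 22))])
acts_random = rng.normal(size=(2000, 32))          # no label information

print("trained:", probe_accuracy(acts_trained, labels))
print("random: ", probe_accuracy(acts_random, labels))
```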

2019

Correlating Neural and Symbolic Representations of Language
Grzegorz Chrupała | Afra Alishahi
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Analysis methods which enable us to better understand the representations and functioning of neural models of language are increasingly needed as deep learning becomes the dominant approach in NLP. Here we present two methods based on Representational Similarity Analysis (RSA) and Tree Kernels (TK) which allow us to directly quantify how strongly the information encoded in neural activation patterns corresponds to information represented by symbolic structures such as syntax trees. We first validate our methods on the case of a simple synthetic language for arithmetic expressions with clearly defined syntax and semantics, and show that they exhibit the expected pattern of results. We then apply our methods to correlate neural representations of English sentences with their constituency parse trees.
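The RSA side of the method reduces to correlating two pairwise similarity matrices over the same set of sentences. A minimal sketch follows; the tree-kernel similarities, which in the paper are computed from parse trees by counting shared fragments, are replaced here by a noisy stand-in:

```python
import numpy as np
from scipy.stats import spearmanr

def rsa(sim_a, sim_b):
    """Correlate the upper triangles of two pairwise similarity matrices."""
    iu = np.triu_indices_from(sim_a, k=1)
    return spearmanr(sim_a[iu], sim_b[iu]).correlation

# Cosine similarities between neural sentence representations.
rng = np.random.default_rng(0)
reps = rng.normal(size=(20, 64))
norm = reps / np.linalg.norm(reps, axis=1, keepdims=True)
sim_neural = norm @ norm.T

# Stand-in for tree-kernel similarities between the sentences' parse
# trees (a real implementation would count shared tree fragments).
sim_symbolic = sim_neural + 0.3 * rng.normal(size=(20, 20))
sim_symbolic = (sim_symbolic + sim_symbolic.T) / 2

print(rsa(sim_neural, sim_symbolic))
```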

Symbolic Inductive Bias for Visually Grounded Learning of Spoken Language
Grzegorz Chrupała
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

A widespread approach to processing spoken language is to first automatically transcribe it into text. An alternative is to use an end-to-end approach: recent works have proposed to learn semantic embeddings of spoken language from images with spoken captions, without an intermediate transcription step. We propose to use multitask learning to exploit existing transcribed speech within the end-to-end setting. We describe a three-task architecture which combines the objectives of matching spoken captions with corresponding images, speech with text, and text with images. We show that the addition of the speech/text task leads to substantial performance improvements on image retrieval when compared to training the speech/image task in isolation. We conjecture that this is due to a strong inductive bias transcribed speech provides to the model, and offer supporting evidence for this.

On the difficulty of a distributional semantics of spoken language
Grzegorz Chrupała | Lieke Gelderloos | Ákos Kádár | Afra Alishahi
Proceedings of the Society for Computation in Linguistics (SCiL) 2019

Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Tal Linzen | Grzegorz Chrupała | Yonatan Belinkov | Dieuwke Hupkes
Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

2018

Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Tal Linzen | Grzegorz Chrupała | Afra Alishahi
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Style Obfuscation by Invariance
Chris Emmery | Enrique Manjavacas Arevalo | Grzegorz Chrupała
Proceedings of the 27th International Conference on Computational Linguistics

The task of obfuscating writing style using sequence models has previously been investigated under the framework of obfuscation-by-transfer, where the input text is explicitly rewritten in another style. A side effect of this framework is the frequent major alteration of the semantic content of the input. In this work, we propose obfuscation-by-invariance, and investigate to what extent models trained to be explicitly style-invariant preserve semantics. We evaluate our architectures in parallel and non-parallel settings, and compare automatic and human evaluations on the obfuscated sentences. Our experiments show that the performance of a style classifier can be reduced to chance level, while the output is evaluated to be of equal quality to models applying style-transfer. Additionally, human evaluation indicates a trade-off between the level of obfuscation and the observed quality of the output in terms of meaning preservation and grammaticality.

Revisiting the Hierarchical Multiscale LSTM
Ákos Kádár | Marc-Alexandre Côté | Grzegorz Chrupała | Afra Alishahi
Proceedings of the 27th International Conference on Computational Linguistics

Hierarchical Multiscale LSTM (Chung et al., 2016) is a state-of-the-art language model that learns interpretable structure from character-level input. Such models can provide fertile ground for (cognitive) computational linguistics studies. However, the high complexity of the architecture, training and implementations might hinder its applicability. We provide a detailed reproduction and ablation study of the architecture, shedding light on some of the potential caveats of re-purposing complex deep-learning architectures. We further show that simplifying certain aspects of the architecture can in fact improve its performance. We also investigate the linguistic units (segments) learned by various levels of the model, and argue that their quality does not correlate with the overall performance of the model on language modeling.

Lessons Learned in Multilingual Grounded Language Learning
Ákos Kádár | Desmond Elliott | Marc-Alexandre Côté | Grzegorz Chrupała | Afra Alishahi
Proceedings of the 22nd Conference on Computational Natural Language Learning

Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.
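Such ranking objectives are typically of the margin-based contrastive kind: each caption should be closer to its matched counterpart than to any mismatched one. A minimal numpy sketch of such a loss over a batch of matched caption pairs follows; the embeddings, dimensionality, and the 0.2 margin are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def margin_ranking_loss(anchors, positives, margin=0.2):
    """Contrastive margin loss over a batch: each anchor should be
    closer (by cosine) to its own positive than to other items."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sim = a @ p.T                      # pairwise cosine similarities
    pos = np.diag(sim)                 # matched pairs on the diagonal
    # Hinge on every mismatched pair, in both ranking directions.
    cost = np.maximum(0, margin + sim - pos[:, None]) \
         + np.maximum(0, margin + sim - pos[None, :])
    np.fill_diagonal(cost, 0.0)
    return cost.mean()

rng = np.random.default_rng(0)
caps_en = rng.normal(size=(8, 32))                   # caption embeddings
caps_de = caps_en + 0.1 * rng.normal(size=(8, 32))   # matched captions
print(margin_ranking_loss(caps_en, caps_de))
```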

2017

Simple Queries as Distant Labels for Predicting Gender on Twitter
Chris Emmery | Grzegorz Chrupała | Walter Daelemans
Proceedings of the 3rd Workshop on Noisy User-generated Text

The majority of research on extracting missing user attributes from social media profiles uses costly hand-annotated labels for supervised learning. Distantly supervised methods exist, although these generally rely on knowledge gathered using external sources. This paper demonstrates the effectiveness of gathering distant labels for self-reported gender on Twitter using simple queries. We confirm the reliability of this query heuristic by comparing with manual annotation. Moreover, using these labels for distant supervision, we demonstrate competitive model performance on the same data as models trained on manual annotations. As such, we offer a cheap, extensible, and fast alternative that can be employed beyond the task of gender classification.
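The flavour of such a query heuristic can be sketched in a few lines. The patterns below are illustrative guesses, not the queries used in the paper:

```python
import re

# Illustrative self-report patterns; the paper's exact queries may differ.
PATTERNS = {
    "f": re.compile(r"\bi(?:'| a)?m a (?:girl|woman|female)\b", re.I),
    "m": re.compile(r"\bi(?:'| a)?m a (?:boy|man|guy|male)\b", re.I),
}

def distant_label(tweet):
    """Return a distant gender label if the tweet self-reports one."""
    hits = {lab for lab, pat in PATTERNS.items() if pat.search(tweet)}
    return hits.pop() if len(hits) == 1 else None  # drop ambiguous cases

print(distant_label("I'm a girl who loves football"))  # -> 'f'
print(distant_label("nice weather today"))             # -> None
```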

Representations of language in a model of visually grounded speech signal
Grzegorz Chrupała | Lieke Gelderloos | Afra Alishahi
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space. We use a multi-layer recurrent highway network to model the temporal nature of speech, and show that it learns to extract both form- and meaning-based linguistic knowledge from the input signal. We carry out an in-depth analysis of the representations used by different components of the trained model and show that encoding of semantic aspects tends to become richer as we go up the hierarchy of layers, whereas encoding of form-related aspects of the language input tends to initially increase and then plateau or decrease.

Representation of Linguistic Form and Function in Recurrent Neural Networks
Ákos Kádár | Grzegorz Chrupała | Afra Alishahi
Computational Linguistics, Volume 43, Issue 4 - December 2017

We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture consisting of two parallel pathways with shared word embeddings: The Visual pathway is trained on predicting the representations of the visual scene corresponding to an input sentence, and the Textual pathway is trained to predict the next word in the same sentence. We propose a method for estimating the amount of contribution of individual tokens in the input to the final prediction of the networks. Using this method, we show that the Visual pathway pays selective attention to lexical categories and grammatical functions that carry semantic information, and learns to treat word types differently depending on their grammatical function and their position in the sequential structure of the sentence. In contrast, the language models are comparatively more sensitive to words with a syntactic function. Further analysis of the most informative n-gram contexts for each model shows that in comparison with the Visual pathway, the language models react more strongly to abstract contexts that represent syntactic constructions.

Encoding of phonology in a recurrent neural model of grounded speech
Afra Alishahi | Marie Barking | Grzegorz Chrupała
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)

We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and in the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics.

2016

Multimodal Semantic Learning from Child-Directed Input
Angeliki Lazaridou | Grzegorz Chrupała | Raquel Fernández | Marco Baroni
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning
Lieke Gelderloos | Grzegorz Chrupała
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover both structure and meaning from noisy and ambiguous data across modalities. We show that our model indeed learns to predict features of the visual context given phonetically transcribed image descriptions, and show that it represents linguistic information in a hierarchy of levels: lower layers in the stack are comparatively more sensitive to form, whereas higher layers are more sensitive to meaning.

2015

Linguistic Analysis of Multi-Modal Recurrent Neural Networks
Ákos Kádár | Grzegorz Chrupała | Afra Alishahi
Proceedings of the Fourth Workshop on Vision and Language

Predicting the quality of questions on Stackoverflow
Antoaneta Baltadzhieva | Grzegorz Chrupała
Proceedings of the International Conference Recent Advances in Natural Language Processing

Learning language through pictures
Grzegorz Chrupała | Ákos Kádár | Afra Alishahi
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

Semantic approaches to software component retrieval with English queries
Huijing Deng | Grzegorz Chrupała
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Enabling code reuse is an important goal in software engineering, and it depends crucially on effective code search interfaces. We propose to ground word meanings in source code and use such language-code mappings to enable a search engine for programming library code where users can pose queries in English. We exploit the fact that large programming language libraries are documented both via formally specified function or method signatures and via descriptions written in natural language. Automatically learned associations between words in descriptions and items in signatures allow us to use queries formulated in English to retrieve methods which lack natural language documentation, based solely on their signatures. We show that the rankings returned by our model substantially outperform a strong term-matching baseline.
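A minimal sketch of the underlying idea: learn word/signature-token associations from paired documentation and score signatures against an English query. The toy data and the additive co-occurrence scoring are assumptions; the paper's model is more sophisticated.

```python
from collections import Counter, defaultdict

# Toy parallel data: (natural-language description, signature tokens).
# Real data would come from library documentation.
PAIRS = [
    ("read a file into a string", ["read", "File", "String"]),
    ("write a string to a file", ["write", "String", "File"]),
    ("sort a list of integers", ["sort", "List", "Int"]),
]

# Count word / signature-token co-occurrences.
cooc = defaultdict(Counter)
for desc, sig in PAIRS:
    for w in desc.split():
        cooc[w].update(sig)

def score(query, sig):
    """Association score of a signature for an English query."""
    return sum(cooc[w][t] for w in query.split() for t in sig)

sigs = [sig for _, sig in PAIRS]
query = "read a file"
print(max(sigs, key=lambda s: score(query, s)))
```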

Normalizing tweets with edit scripts and recurrent neural embeddings
Grzegorz Chrupała
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

RelationFactory: A Fast, Modular and Effective System for Knowledge Base Population
Benjamin Roth | Tassilo Barth | Grzegorz Chrupała | Martin Gropp | Dietrich Klakow
Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics

Learning word meanings from images of natural scenes
Ákos Kádár | Afra Alishahi | Grzegorz Chrupała
Traitement Automatique des Langues, Volume 55, Numéro 3 : Traitement automatique du langage naturel et sciences cognitives [Natural Language Processing and Cognitive Sciences]

DCU-UVT: Word-Level Language Classification with Code-Mixed Data
Utsab Barman | Joachim Wagner | Grzegorz Chrupała | Jennifer Foster
Proceedings of the First Workshop on Computational Approaches to Code Switching

2013

Elephant: Sequence Labeling for Word and Sentence Segmentation
Kilian Evang | Valerio Basile | Grzegorz Chrupała | Johan Bos
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

2012

Hierarchical clustering of word class distributions
Grzegorz Chrupała
Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure

Concurrent Acquisition of Word Meaning and Lexical Categories
Afra Alishahi | Grzegorz Chrupala
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Learning from evolving data streams: online triage of bug reports
Grzegorz Chrupala
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

Efficient induction of probabilistic word classes with LDA
Grzegorz Chrupala
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

Lemmatization and Lexicalized Statistical Parsing of Morphologically-Rich Languages: the Case of French
Djamé Seddah | Grzegorz Chrupała | Özlem Çetinoğlu | Josef van Genabith | Marie Candito
Proceedings of the NAACL HLT 2010 First Workshop on Statistical Parsing of Morphologically-Rich Languages

Relatedness Curves for Acquiring Paraphrases
Georgiana Dinu | Grzegorz Chrupała
Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics

Online Entropy-Based Model of Lexical Category Acquisition
Grzegorz Chrupała | Afra Alishahi
Proceedings of the Fourteenth Conference on Computational Natural Language Learning

A Named Entity Labeler for German: Exploiting Wikipedia and Distributional Clusters
Grzegorz Chrupała | Dietrich Klakow
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Named Entity Recognition is a relatively well-understood NLP task, with many publicly available training resources and software for processing English data. Other languages tend to be underserved in this area. For German, the CoNLL-2003 Shared Task provided training data, but there are no publicly available, ready-to-use tools. We fill this gap and develop a German NER system with state-of-the-art performance. In addition to the CoNLL-2003 labeled training data, we use two additional resources: (i) 32 million words of unlabeled news article text and (ii) infobox labels from German Wikipedia articles. From the unlabeled text we derive distributional word clusters. Then we use cluster membership features and Wikipedia infobox label features to train a supervised model on the labeled training data. This approach allows us to deal better with word types unseen in the training data and achieve good performance on German with little engineering effort.
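At feature-extraction time, cluster membership and infobox labels amount to simple lookups. A sketch under assumed data (the cluster IDs, infobox labels, and words below are made up):

```python
# Hypothetical lookups derived from unlabeled text and Wikipedia
# (e.g. hierarchical cluster IDs as bit strings); all entries are made up.
CLUSTER = {"Berlin": "0110", "München": "0110", "Siemens": "1011"}
INFOBOX = {"Berlin": "Ort", "Siemens": "Unternehmen"}

def features(tokens, i):
    """Feature dict for token i: surface form plus cluster-membership
    and Wikipedia infobox features for better generalization."""
    w = tokens[i]
    f = {"word": w, "lower": w.lower()}
    if w in CLUSTER:
        # Cluster-ID prefixes at several granularities.
        f.update({f"cl{k}": CLUSTER[w][:k] for k in (2, 4)})
    if w in INFOBOX:
        f["infobox"] = INFOBOX[w]
    return f

print(features(["Siemens", "baut", "in", "Berlin"], 0))
```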

2008

Learning Morphology with Morfette
Grzegorz Chrupala | Georgiana Dinu | Josef van Genabith
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Morfette is a modular, data-driven, probabilistic system which learns to perform joint morphological tagging and lemmatization from morphologically annotated corpora. The system is composed of two learning modules which are trained to predict morphological tags and lemmas using maximum entropy classifiers. A third module dynamically combines the predictions of the two maximum entropy models and outputs a probability distribution over tag-lemma pair sequences. The lemmatization module exploits the idea of recasting lemmatization as a classification task by using class labels which encode mappings from word forms to lemmas. Experimental evaluation results and error analysis on three morphologically rich languages show that the system achieves high accuracy with no language-specific feature engineering or additional resources.
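The class-label encoding of lemmatization can be illustrated as a suffix-rewrite rule derived from the longest common prefix of form and lemma. This is a simplification of Morfette's actual edit scripts, meant only to show how a form-to-lemma mapping becomes a reusable class:

```python
def lemma_class(form, lemma):
    """Encode the form -> lemma mapping as a suffix-rewrite class:
    strip N characters from the form, then append a suffix."""
    i = 0
    while i < min(len(form), len(lemma)) and form[i] == lemma[i]:
        i += 1
    return (len(form) - i, lemma[i:])

def apply_class(form, cls):
    """Apply a suffix-rewrite class to a (possibly unseen) word form."""
    cut, suffix = cls
    return (form[:-cut] if cut else form) + suffix

cls = lemma_class("walked", "walk")   # -> (2, '')
print(apply_class("talked", cls))     # -> 'talk'
cls = lemma_class("gave", "give")     # -> (3, 'ive')
print(apply_class("forgave", cls))    # -> 'forgive'
```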

2006

Using Machine-Learning to Assign Function Labels to Parser Output for Spanish
Grzegorz Chrupała | Josef van Genabith
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2004

Hierarchical Recognition of Propositional Arguments with Perceptrons
Xavier Carreras | Lluís Màrquez | Grzegorz Chrupała
Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004