2021
Evaluating Universal Dependency Parser Recovery of Predicate Argument Structure via CompChain Analysis
Sagar Indurkhya | Beracah Yankama | Robert C. Berwick
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Accurate recovery of predicate-argument structure from a Universal Dependency (UD) parse is central to downstream tasks such as extraction of semantic roles or event representations. This study introduces compchains, a categorization of the hierarchy of predicate dependency relations present within a UD parse. Accuracy of compchain classification serves as a proxy for measuring accurate recovery of predicate-argument structure from sentences with embedding. We analyzed the distribution of compchains in three UD English treebanks, EWT, GUM and LinES, revealing that these treebanks are sparse with respect to sentences whose predicate-argument structure includes predicate-argument embedding. We evaluated the CoNLL 2018 Shared Task UDPipe (v1.2) baseline dependency-parsing models as compchain classifiers for the EWT, GUM and LinES UD treebanks. Our results indicate that these three baseline models exhibit poorer performance on sentences whose predicate-argument structure has more than one level of embedding; we used compchains to characterize the errors made by these parsers and present examples of erroneous parses, identified using compchains, that these parsers produced. We also analyzed the distribution of compchains in 58 non-English UD treebanks and then used compchains to evaluate the CoNLL’18 Shared Task baseline model for each of these treebanks. Our analysis shows that performance with respect to compchain classification is only weakly correlated with the official evaluation metrics (LAS, MLAS and BLEX). We identify gaps in the distribution of compchains in several of the UD treebanks, thus providing a roadmap for how these treebanks may be supplemented. We conclude by discussing how compchains provide a new perspective on the sparsity of training data for UD parsers, as well as the accuracy of the resulting UD parses.
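The paper itself gives the formal definition of a compchain; purely as a rough illustration of the kind of analysis the abstract describes, the sketch below approximates a compchain as the sequence of clausal-embedding dependency relations found when descending from the root predicate of a UD parse, and compares gold and predicted parses by exact match on that sequence. The choice of embedding relations, the single-path traversal, and the file names are assumptions for illustration, not the authors' method; the sketch assumes the Python `conllu` package.

```python
# Illustrative sketch only: approximates a "compchain" as the chain of
# clausal-embedding dependency relations below the root predicate of a UD
# parse, then scores gold vs. predicted parses by exact match on that chain.
from conllu import parse

# Assumed set of relations that introduce an embedded predicate (an
# approximation, not the paper's definition).
EMBEDDING_RELS = {"ccomp", "xcomp", "csubj", "advcl", "acl", "acl:relcl"}

def compchain(sentence):
    """Return the tuple of embedding relations along one path below the root."""
    children, root = {}, None
    for tok in sentence:
        if not isinstance(tok["id"], int):
            continue  # skip multiword tokens and empty nodes
        children.setdefault(tok["head"], []).append(tok)
        if tok["deprel"] == "root":
            root = tok
    chain, current = [], root
    while current is not None:
        # Follow only the first embedding dependent: a simplification of the
        # hierarchy of predicate relations the paper analyzes.
        nxt = next((c for c in children.get(current["id"], [])
                    if c["deprel"] in EMBEDDING_RELS), None)
        if nxt is not None:
            chain.append(nxt["deprel"])
        current = nxt
    return tuple(chain)

# Hypothetical file names for a gold treebank and a parser's output.
with open("gold.conllu", encoding="utf-8") as f:
    gold = parse(f.read())
with open("pred.conllu", encoding="utf-8") as f:
    pred = parse(f.read())

matches = sum(compchain(g) == compchain(p) for g, p in zip(gold, pred))
print(f"compchain exact match: {matches}/{len(gold)}")
```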
2012
An annotated English child language database
Aline Villavicencio | Beracah Yankama | Rodrigo Wilkens | Marco Idiart | Robert Berwick
Proceedings of the Workshop on Computational Models of Language Acquisition and Loss
Get out but don’t fall down: verb-particle constructions in child language
Aline Villavicencio | Marco Idiart | Carlos Ramisch | Vítor Araújo | Beracah Yankama | Robert Berwick
Proceedings of the Workshop on Computational Models of Language Acquisition and Loss
A large scale annotated child language construction database
Aline Villavicencio | Beracah Yankama | Marco Idiart | Robert Berwick
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Large scale annotated corpora of child language can be of great value in assessing theoretical proposals regarding language acquisition models. For example, they can help determine whether the type and amount of data required by a proposed language acquisition model can actually be found in a naturalistic data sample. To this end, several recent efforts have augmented the CHILDES child language corpora with POS tagging and parsing information for languages such as English. With the increasing availability of robust NLP systems and electronic resources, these corpora can be further annotated with more detailed information about the properties of words, verb argument structure, and sentences. This paper describes such an initiative for combining information from various sources to extend the annotation of the English CHILDES corpora with linguistic, psycholinguistic and distributional information, along with an example illustrating an application of this approach to the extraction of verb alternation information. The end result, the English CHILDES Verb Construction Database, is an integrated resource containing information such as grammatical relations, verb semantic classes, and age of acquisition, enabling more targeted complex searches involving different levels of annotation that can facilitate a more detailed analysis of the linguistic input available to children.