Lurdes Ruela


2010

Like Finding a Needle in a Haystack: Annotating the American National Corpus for Idiomatic Expressions
Laura Street | Nathan Michalov | Rachel Silverstein | Michael Reynolds | Lurdes Ruela | Felicia Flowers | Angela Talucci | Priscilla Pereira | Gabriella Morgon | Samantha Siegel | Marci Barousse | Antequa Anderson | Tashom Carroll | Anna Feldman
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Our paper presents the details of a pilot study in which we tagged portions of the American National Corpus (ANC) for idioms composed of verb-noun constructions, prepositional phrases, and subordinate clauses. The three data sets we analyzed included 1,500-sentence samples from the spoken, the nonfiction, and the fiction portions of the ANC. Our paper provides the details of the tagset we developed, the motivation behind our choices, and the inter-annotator agreement measures we deemed appropriate for this task. In tagging the ANC for idiomatic expressions, our annotators achieved a high level of agreement (> .80) on the tags but a low level of agreement (< .00) on what constituted an idiom. These findings support the claim that identifying idiomatic and metaphorical expressions is a highly difficult and subjective task. In total, 135 idiom types and 154 idiom tokens were identified. Based on the total tokens found for each idiom class, we suggest that future research on idiom detection and idiom annotation include prepositional phrases, as this class of idioms occurred frequently in the nonfiction and spoken samples of our corpus.
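
The agreement figures above are most naturally read as chance-corrected scores such as Cohen's kappa, where a value below zero indicates worse-than-chance agreement. As a minimal sketch (not the paper's actual procedure, and with made-up labels), pairwise agreement on binary "idiom / not idiom" judgments could be computed like this in Python:

# Hypothetical per-expression judgments from two annotators (1 = idiom, 0 = literal).
# These labels are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 0, 1, 0, 1, 0, 0]
annotator_b = [0, 0, 1, 1, 0, 0, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values below 0 mean worse-than-chance agreement

Under this reading, annotators could agree strongly on which tag to assign once an expression was flagged, yet disagree (kappa < 0) on which expressions counted as idioms in the first place.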