Pierre Ludmann
2022
Quantification Annotation in ISO 24617-12, Second Draft
Harry Bunt | Maxime Amblard | Johan Bos | Karën Fort | Bruno Guillaume | Philippe de Groote | Chuyuan Li | Pierre Ludmann | Michel Musiol | Siyana Pavlova | Guy Perrier | Sylvain Pogodalla
Proceedings of the Thirteenth Language Resources and Evaluation Conference
This paper describes the continuation of a project that aims at establishing an interoperable annotation schema for quantification phenomena as part of the ISO suite of standards for semantic annotation, known as the Semantic Annotation Framework. After a break caused by the Covid-19 pandemic, the project was relaunched in early 2022 with a second working draft of an annotation scheme, which is discussed in this paper.
Keywords: semantic annotation, quantification, interoperability, annotation schema, ISO standard
2017
The Parallel Meaning Bank: Towards a Multilingual Corpus of Translations Annotated with Compositional Meaning Representations
Lasha Abzianidze | Johannes Bjerva | Kilian Evang | Hessel Haagsma | Rik van Noord | Pierre Ludmann | Duc-Duy Nguyen | Johan Bos
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
The Parallel Meaning Bank is a corpus of translations annotated with shared, formal meaning representations comprising over 11 million words divided over four languages (English, German, Italian, and Dutch). Our approach is based on cross-lingual projection: automatically produced (and manually corrected) semantic annotations for English sentences are mapped onto their word-aligned translations, assuming that the translations are meaning-preserving. The semantic annotation consists of five main steps: (i) segmentation of the text in sentences and lexical items; (ii) syntactic parsing with Combinatory Categorial Grammar; (iii) universal semantic tagging; (iv) symbolization; and (v) compositional semantic analysis based on Discourse Representation Theory. These steps are performed using statistical models trained in a semi-supervised manner. The employed annotation models are all language-neutral. Our first results are promising.
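The five annotation steps listed in the abstract can be sketched as a chain of functions. This is a minimal toy illustration of the pipeline's shape only: every function below is an illustrative stand-in (the actual Parallel Meaning Bank uses trained statistical models and its own tooling), and none of these names come from the PMB itself.

```python
# Hypothetical sketch of the five-step PMB-style annotation pipeline.
# All function names and outputs are illustrative stubs, not the PMB API.

def segment(text):
    # (i) split raw text into sentences and lexical items (naive stub)
    return [s.split() for s in text.split(". ") if s]

def ccg_parse(tokens):
    # (ii) syntactic parsing with Combinatory Categorial Grammar
    # (stub: a flat "derivation" standing in for a real CCG parse)
    return {"tokens": tokens, "derivation": "flat"}

def sem_tag(parse):
    # (iii) universal semantic tagging (stub: tag every token as a concept)
    return [(tok, "CON") for tok in parse["tokens"]]

def symbolize(tags):
    # (iv) map word forms to non-logical symbols (stub: lowercased form)
    return [(tok.lower(), tag) for tok, tag in tags]

def compose_drs(symbols):
    # (v) compositional analysis in Discourse Representation Theory
    # (stub: one discourse referent and one condition per symbol)
    return {
        "referents": [f"x{i}" for i in range(len(symbols))],
        "conditions": [f"{sym}(x{i})" for i, (sym, _) in enumerate(symbols)],
    }

def annotate(text):
    # Run steps (i)-(v) in order, one DRS per sentence.
    return [
        compose_drs(symbolize(sem_tag(ccg_parse(tokens))))
        for tokens in segment(text)
    ]

# Usage: produces one toy DRS-like structure for the single sentence.
print(annotate("Dogs bark."))
```

In the cross-lingual projection setting the abstract describes, the English output of such a pipeline would then be mapped onto word-aligned translations, on the assumption that translation preserves meaning.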