Attila Kett


2021

Unleashing annotations with TextAnnotator: Multimedia, multi-perspective document views for ubiquitous annotation
Giuseppe Abrami | Alexander Henlein | Andy Lücking | Attila Kett | Pascal Adeberg | Alexander Mehler
Proceedings of the 17th Joint ACL-ISO Workshop on Interoperable Semantic Annotation

We argue that, mainly due to technical innovation in the landscape of annotation tools, a conceptual change in annotation models and processes is on the horizon. These changes are bound up with the multimedia and multi-perspective facilities of annotation tools, in particular with regard to virtual reality (VR) and augmented reality (AR) applications, their potential ubiquitous use, and the exploitation of externally trained natural language pre-processing methods. Such developments potentially lead to a dynamic, exploratory, and heuristic construction of the annotation process. We introduce TextAnnotator, an annotation suite that focuses on multimediality and multi-perspectivity with an interoperable set of task-specific annotation modules (e.g., for word classification, rhetorical structures, dependency trees, semantic roles, and more) and their linkage to VR and mobile implementations. We describe the basic architecture and usage of TextAnnotator and relate it to the above-mentioned shifts in the field.
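The abstract's notion of multi-perspectivity can be illustrated with a minimal data model: several independent annotation layers ("perspectives") over the same character offsets of one document, so that task-specific modules can coexist interoperably. This is a hypothetical sketch, not TextAnnotator's actual API; all class and layer names here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    start: int   # character offset (inclusive)
    end: int     # character offset (exclusive)
    label: str   # layer-specific label, e.g. a POS tag or a semantic role

@dataclass
class Document:
    text: str
    # one list of spans per perspective, keyed by layer name
    layers: dict = field(default_factory=dict)

    def add(self, layer: str, start: int, end: int, label: str):
        """Attach an annotation to a named layer (created on first use)."""
        self.layers.setdefault(layer, []).append(Span(start, end, label))

    def view(self, layer: str):
        """Project one perspective onto the shared text."""
        return [(self.text[s.start:s.end], s.label)
                for s in self.layers.get(layer, [])]

# Two perspectives over the same text: word classes and semantic roles.
doc = Document("Annotations unleash texts.")
doc.add("pos", 0, 11, "NOUN")            # "Annotations"
doc.add("pos", 12, 19, "VERB")           # "unleash"
doc.add("semantic_roles", 0, 11, "ARG0")

print(doc.view("pos"))
# [('Annotations', 'NOUN'), ('unleash', 'VERB')]
```

Because each layer only stores offsets into the shared text, new perspectives (rhetorical structure, dependency trees, etc.) can be added without touching existing ones, which is the interoperability property the abstract emphasizes.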

2020

Transfer of ISOSpace into a 3D Environment for Annotations and Applications
Alexander Henlein | Giuseppe Abrami | Attila Kett | Alexander Mehler
Proceedings of the 16th Joint ACL-ISO Workshop on Interoperable Semantic Annotation

Human visual perception is highly developed, so people usually have no difficulty describing the space around them in words; conversely, they can readily imagine a space from a description. In recent years, many efforts have been made to develop linguistic models of spatial and spatio-temporal relations. However, these systems have not really caught on so far, which in our opinion is due to the complexity of the underlying models and the lack of available training data and automated taggers. In this paper, we describe a project to support spatial annotation, which facilitates annotation through its many functions and enriches it with additional information. This is achieved by an extension based on a VR environment, in which spatial relations can be better visualized and connected with real objects. We also want to use the available data to develop a new state-of-the-art tagger and thus lay the foundation for future systems, such as improved text understanding for Text2Scene.
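To make the annotation scheme concrete, here is an illustrative sketch of ISOSpace-style markup and its transfer into a 3D scene. The element and attribute names (spatial entities, spatial signals, QSLINK, the topological relation EC) follow ISO-Space conventions, but the data model and the scene-placement rule are simplified and hypothetical, not the paper's implementation.

```python
sentence = "The book is on the table."

# ISOSpace-style annotation as plain Python data: two spatial entities,
# one spatial signal ("on"), and a qualitative spatial link (QSLINK)
# relating trajector (figure) and landmark (ground).
annotation = {
    "spatial_entities": [
        {"id": "se1", "text": "book",  "span": (4, 8)},
        {"id": "se2", "text": "table", "span": (19, 24)},
    ],
    "spatial_signals": [
        {"id": "s1", "text": "on", "span": (12, 14),
         "semantic_type": "TOPOLOGICAL"},
    ],
    "qslinks": [
        {"id": "qsl1", "trajector": "se1", "landmark": "se2",
         "trigger": "s1", "rel_type": "EC"},  # EC = externally connected
    ],
}

def to_scene(ann):
    """Toy transfer of a QSLINK into 3D positions for a VR scene:
    place the landmark at the origin and stack the trajector on top
    of it along the y-axis (a crude reading of 'on'/EC)."""
    scene = {}
    for link in ann["qslinks"]:
        scene[link["landmark"]] = (0.0, 0.0, 0.0)
        scene[link["trajector"]] = (0.0, 1.0, 0.0)
    return scene

print(to_scene(annotation))
# {'se2': (0.0, 0.0, 0.0), 'se1': (0.0, 1.0, 0.0)}
```

Going in the other direction, an annotator moving the two objects in VR could update `rel_type` automatically from their geometry, which is one way a VR environment can ground abstract spatial relations in real objects, as the abstract suggests.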