Adrian Braşoveanu
Also published as: Adrian Brasoveanu
2020
Production-based Cognitive Models as a Test Suite for Reinforcement Learning Algorithms
Adrian Brasoveanu | Jakub Dotlačil
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
We introduce a framework in which production-rule-based computational cognitive modeling and Reinforcement Learning (RL) can systematically interact and inform each other. We focus on linguistic applications because the sophisticated rule-based cognitive models needed to capture linguistic behavioral data promise to provide a stringent test suite for RL algorithms, connecting RL algorithms to both accuracy and reaction-time experimental data. Thus, we open a path towards assembling an experimentally rigorous and cognitively realistic benchmark for RL algorithms. We extend our previous work on lexical decision tasks and tabular RL algorithms (Brasoveanu and Dotlačil, 2020b) with a discussion of neural-network-based approaches and of how parsing can be formalized as an RL problem.
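To make the abstract's idea of parsing as an RL problem concrete, here is a minimal sketch of a tabular Q-learning agent choosing shift/reduce actions over toy parser states. The state encoding, action set, and reward scheme below are illustrative assumptions for exposition, not the formalization proposed in the paper.

    # A minimal sketch, assuming a toy shift/reduce MDP: state = (buffer_len, stack_len),
    # reward +1 on reaching the accepting state (empty buffer, single stack item).
    # These modeling choices are assumptions, not Brasoveanu and Dotlacil's actual setup.
    import random
    from collections import defaultdict

    ACTIONS = ["shift", "reduce"]

    def step(state, action):
        buffer_len, stack_len = state
        if action == "shift" and buffer_len > 0:
            state = (buffer_len - 1, stack_len + 1)   # move a word onto the stack
        elif action == "reduce" and stack_len > 1:
            state = (buffer_len, stack_len - 1)       # combine two stack items
        done = state == (0, 1)
        return state, (1.0 if done else -0.01), done  # small step penalty

    Q = defaultdict(float)                 # tabular action values
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    for _ in range(2000):
        state, done = (4, 0), False        # four words on the buffer, empty stack
        while not done:
            # epsilon-greedy action selection over the tabular Q-values
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            # standard one-step Q-learning update
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt

Connecting such an agent to behavioral data would further require linking its action choices to the accuracy and reaction-time measures that the paper's production-rule models target.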
2018
Framing Named Entity Linking Error Types
Adrian Braşoveanu | Giuseppe Rizzo | Philipp Kuntschik | Albert Weichselbraun | Lyndon J.B. Nixon
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
2016
A Regional News Corpora for Contextualized Entity Discovery and Linking
Adrian Braşoveanu | Lyndon J.B. Nixon | Albert Weichselbraun | Arno Scharl
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
This paper presents a German corpus for Named Entity Linking (NEL) and Knowledge Base Population (KBP) tasks. We describe the annotation guidelines, the annotation process, the NIL clustering techniques, and the conversion to popular NEL formats such as NIF and TAC that were used to construct this corpus from news transcripts of the German regional broadcaster RBB (Rundfunk Berlin-Brandenburg). Since creating such language resources requires significant effort, the paper also discusses how to derive additional evaluation resources for tasks like named entity contextualization or ontology enrichment by exploiting the links between named entities in the annotated corpus. The paper concludes with an evaluation showing how several well-known NEL tools perform on the corpus, a discussion of the evaluation results, and suggestions on how to keep evaluation corpora and datasets up to date.
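The abstract's tool comparison can be illustrated with a minimal sketch of a NEL evaluation: micro precision, recall, and F1 over (document, mention span, KB id) links. The data layout, the metric choice, and the example identifiers are assumptions for illustration; the paper's actual evaluation protocol may differ.

    # A minimal sketch, assuming gold and predicted links are sets of
    # (doc_id, start, end, kb_id) tuples; not the paper's exact protocol.

    def evaluate_nel(gold, predicted):
        tp = len(gold & predicted)  # links matching on both span and KB id
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Hypothetical annotations on a single transcript for illustration
    gold = {("rbb-001", 0, 6, "dbpedia:Berlin"),
            ("rbb-001", 25, 36, "dbpedia:Brandenburg")}
    pred = {("rbb-001", 0, 6, "dbpedia:Berlin"),
            ("rbb-001", 40, 43, "dbpedia:RBB")}
    print(evaluate_nel(gold, pred))  # -> (0.5, 0.5, 0.5)

Scoring on exact (span, KB id) pairs is a strict matching criterion; evaluations often also report relaxed span matching or NIL-aware variants, which would change the counts above.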
Co-authors
- Albert Weichselbraun 2
- Lyndon J.B. Nixon 2
- Giuseppe Rizzo 1
- Philipp Kuntschik 1
- Arno Scharl 1