Jonas Rieger


2024

Lex2Sent: A bagging approach to unsupervised sentiment analysis
Kai-Robin Lange | Jonas Rieger | Carsten Jentsch
Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024)

2023

Debunking Disinformation with GADMO: A Topic Modeling Analysis of a Comprehensive Corpus of German-language Fact-Checks
Jonas Rieger | Nico Hornig | Jonathan Flossdorf | Henrik Müller | Stephan Mündges | Carsten Jentsch | Jörg Rahnenführer | Christina Elmer
Proceedings of the 4th Conference on Language, Data and Knowledge

2022

Finding Scientific Topics in Continuously Growing Text Corpora
André Bittermann | Jonas Rieger
Proceedings of the Third Workshop on Scholarly Document Processing

The ever-growing amount of research publications demands computational assistance for everyone trying to keep track of scientific progress. Topic modeling has become a popular approach for finding scientific topics in static collections of research papers. However, the reality of continuously growing corpora of scholarly documents poses a major challenge for traditional approaches. We introduce RollingLDA for ongoing monitoring of research topics: it enables sequential modeling of dynamically growing corpora while keeping the resulting topic time series consistent over time. We evaluate its capability to detect research topics and present a Shiny App as an easy-to-use interface. In addition, we illustrate usage scenarios for different user groups such as researchers, students, journalists, and policy-makers.
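
The rolling-update workflow this abstract relies on can be approximated with off-the-shelf tooling. Below is a minimal sketch in Python using gensim's online LDA update as a stand-in for the authors' R implementation (the rollinglda package); the documents, topic number, and all other parameters are illustrative assumptions, and unlike RollingLDA this simplified version drops vocabulary that first appears in later batches.

```python
# Sketch: extend an LDA model sequentially as the corpus grows,
# instead of refitting from scratch, so topics stay comparable over time.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Initial batch of (already tokenized) documents -- hypothetical data.
initial_docs = [
    ["topic", "model", "inference"],
    ["neural", "network", "training"],
    ["topic", "coherence", "evaluation"],
]
dictionary = Dictionary(initial_docs)
corpus = [dictionary.doc2bow(d) for d in initial_docs]

# Fit the initial model once.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=1)

# A later batch arrives: update the existing model with the new documents.
# Tokens unseen in the initial vocabulary are silently dropped by doc2bow;
# RollingLDA itself handles growing vocabularies.
new_docs = [["transformer", "model", "training"]]
new_corpus = [dictionary.doc2bow(d) for d in new_docs]
lda.update(new_corpus)

for topic_id, words in lda.print_topics(num_words=3):
    print(topic_id, words)
```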

2021

RollingLDA: An Update Algorithm of Latent Dirichlet Allocation to Construct Consistent Time Series from Textual Data
Jonas Rieger | Carsten Jentsch | Jörg Rahnenführer
Findings of the Association for Computational Linguistics: EMNLP 2021

We propose a rolling version of Latent Dirichlet Allocation (LDA), called RollingLDA. Using a sequential approach, it enables the construction of LDA-based topic time series that are consistent with previous states of the LDA model. After an initial model fit, updates can be computed efficiently, allowing for real-time monitoring and the detection of events or structural breaks. For this purpose, we propose suitable similarity measures for topics and provide simulation evidence of their superiority over other commonly used approaches. The adequacy of the resulting method is illustrated by an application to an example corpus. In particular, we compute the similarity of sequentially obtained topic and word distributions over consecutive time periods. For a representative example corpus consisting of The New York Times articles from 1980 to 2020, we analyze the effect of several tuning parameter choices and run the RollingLDA method on the full dataset of approximately 4 million articles to demonstrate its feasibility.
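
The abstract's key diagnostic is the similarity of topic distributions across consecutive time chunks. As an illustrative sketch (cosine similarity stands in here for the specific measures proposed in the paper), comparing the topic-word matrices of two consecutive chunks gives one score per topic; a sharp drop flags a candidate event or structural break. The arrays and the drift simulation are hypothetical.

```python
import numpy as np

def topic_similarity(phi_prev: np.ndarray, phi_curr: np.ndarray) -> np.ndarray:
    """Cosine similarity, per topic, between the topic-word distributions
    of two consecutive time chunks; both arrays have shape
    (num_topics, vocab_size) with rows summing to 1."""
    prev = phi_prev / np.linalg.norm(phi_prev, axis=1, keepdims=True)
    curr = phi_curr / np.linalg.norm(phi_curr, axis=1, keepdims=True)
    return np.sum(prev * curr, axis=1)

# Hypothetical example: 3 topics over a 5-word vocabulary, with mild drift.
rng = np.random.default_rng(0)
phi_t1 = rng.dirichlet(np.ones(5), size=3)
phi_t2 = 0.9 * phi_t1 + 0.1 * rng.dirichlet(np.ones(5), size=3)

print(topic_similarity(phi_t1, phi_t2))  # values near 1 = stable topics
```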