Thomas Schmidt


2023

Aspect-Based Sentiment Analysis as a Multi-Label Classification Task on the Domain of German Hotel Reviews
Jakob Fehle | Leonie Münster | Thomas Schmidt | Christian Wolff
Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)

Transformer-Based Analysis of Sentiment Towards German Political Parties on Twitter During the 2021 Election Year
Nils Constantin Hellwig | Markus Bink | Thomas Schmidt | Jakob Fehle | Christian Wolff
Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)

2022

Sentiment Analysis on Twitter for the Major German Parties during the 2021 German Federal Election
Thomas Schmidt | Jakob Fehle | Maximilian Weissenbacher | Jonathan Richter | Philipp Gottschalk | Christian Wolff
Proceedings of the 18th Conference on Natural Language Processing (KONVENS 2022)

Querying Interaction Structure: Approaches to Overlap in Spoken Language Corpora
Elena Frick | Thomas Schmidt | Henrike Helmer
Proceedings of the Thirteenth Language Resources and Evaluation Conference

In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that enables any speaker’s transcription tier to serve as the basic tokenization layer, whereby the contributions of other speakers are mapped to this given tier. Second, we address two distinct methods by which speaker overlaps can be captured in the TEI-based ISO standard for spoken language transcriptions (ISO 24624:2016) and how they can be queried with MTAS, an open source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions and discuss their benefits and drawbacks.
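The speaker-based search mode is only described abstractly above. The following is a minimal sketch of the underlying idea, assuming tokens carry timeline offsets; the data, names, and the mapping heuristic are illustrative, not the ZuMult/MTAS implementation.

```python
# Minimal sketch of a speaker-based search mode: tokens from other speakers
# are mapped onto the tokenization of one chosen base speaker via shared
# timeline offsets, so distance and precedence can be measured on one layer.

from bisect import bisect_right

# Each token: (start_time, end_time, text), one list per speaker tier.
tiers = {
    "SPK1": [(0.0, 0.4, "ja"), (0.4, 0.9, "genau"), (2.1, 2.6, "eben")],
    "SPK2": [(0.5, 1.1, "mhm"), (2.0, 2.4, "ja")],
}

def map_to_base_tier(base, tiers):
    """Assign every token of every other speaker the index of the last
    base-tier token starting at or before it (a simple precedence heuristic)."""
    starts = [t[0] for t in tiers[base]]
    mapping = []
    for speaker, tokens in tiers.items():
        if speaker == base:
            continue
        for start, end, text in tokens:
            idx = max(bisect_right(starts, start) - 1, 0)
            mapping.append((speaker, text, idx))
    return mapping

for speaker, text, idx in map_to_base_tier("SPK1", tiers):
    base_text = tiers["SPK1"][idx][2]
    print(f"{speaker}:{text!r} maps to base token {idx} ({base_text!r})")
```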

2021

Emotion Classification in German Plays with Transformer-based Language Models Pretrained on Historical and Contemporary Language
Thomas Schmidt | Katrin Dennerlein | Christian Wolff
Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

We present results of a project on emotion classification in historical German plays of the Enlightenment, Storm and Stress, and German Classicism. We have developed a hierarchical annotation scheme consisting of 13 sub-emotions such as suffering, love and joy, which sum up to 6 main classes and 2 polarity classes (positive/negative). We have conducted textual annotations on 11 German plays and have acquired over 13,000 emotion annotations, with two annotators per play. We have evaluated multiple traditional machine learning approaches as well as transformer-based models pretrained on historical and contemporary language for single-label emotion classification of text sequences across the different emotion categories. The evaluation is carried out on three different instances of the corpus: (1) taking all annotations, (2) filtering overlapping annotations by annotators, (3) applying a heuristic for speech-based analysis. The best results are achieved on the filtered corpus, with the best models being large transformer-based models pretrained on contemporary German. For polarity classification, accuracies of up to 90% are achieved. Accuracies decrease as the number of classes grows, reaching 66% for the 13 sub-emotions. Further pretraining of a historical model on a corpus of dramatic texts led to no improvements.
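To make the classification setup concrete, here is a minimal fine-tuning sketch using the Hugging Face transformers library. The model name (bert-base-german-cased), the three sample labels, and the toy data are assumptions for illustration; the paper evaluates a range of models on the full 13 sub-emotions.

```python
# Minimal sketch of single-label emotion classification with a pretrained
# German transformer. Model, labels, and data are illustrative assumptions.

import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["suffering", "love", "joy"]          # 3 of the 13 sub-emotions
texts = ["Weh mir!", "Ich liebe dich!", "Welch ein Glück!"]
targets = [0, 1, 2]

tok = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased", num_labels=len(labels))

class SpeechDataset(torch.utils.data.Dataset):
    """Wraps tokenized speeches and their emotion labels for the Trainer."""
    def __init__(self, texts, targets):
        self.enc = tok(texts, truncation=True, padding=True)
        self.targets = targets
    def __len__(self):
        return len(self.targets)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.targets[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="emotion-model", num_train_epochs=3),
    train_dataset=SpeechDataset(texts, targets),
)
trainer.train()
```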

Lexicon-based Sentiment Analysis in German: Systematic Evaluation of Resources and Preprocessing Techniques
Jakob Fehle | Thomas Schmidt | Christian Wolff
Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)

2020

Exploring Online Depression Forums via Text Mining: A Comparison of Reddit and a Curated Online Forum
Luis Moßburger | Felix Wende | Kay Brinkmann | Thomas Schmidt
Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task

We present a study employing various text mining techniques to explore and compare two different online forums focusing on depression: (1) the subreddit r/depression (over 60 million tokens), a large, open social media platform, and (2) Beyond Blue (almost 5 million tokens), a professionally curated and moderated depression forum from Australia. We are interested in how the language and content of these platforms differ from each other. We scraped both forums over a specific period. In addition to general methods of computational text analysis, we focus on sentiment analysis, topic modeling and the distribution of word categories to analyze these forums. Our results indicate that Beyond Blue is generally more positive and that its users are more supportive of each other. Topic modeling shows that Beyond Blue’s users talk more about adult topics like finance and work, while topics shaped by school or college terms are more prevalent on r/depression. Based on our findings, we hypothesize that professional curation and moderation of a depression forum is beneficial for the discussion in it.
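As a sketch of the per-forum topic-modeling comparison described above, the snippet below runs scikit-learn's LDA on toy documents standing in for the two forums; the corpus contents and parameters are purely illustrative.

```python
# Sketch of a per-forum topic-modeling comparison with LDA.
# The two-document "corpora" and all parameters are toy assumptions.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

forums = {
    "r/depression": ["failed my exam again", "college stress and no sleep"],
    "Beyond Blue": ["worried about my mortgage", "work has been exhausting"],
}

for name, docs in forums.items():
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-3:][::-1]]
        print(f"{name} topic {k}: {', '.join(top)}")
```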

Addressing Cha(lle)nges in Long-Term Archiving of Large Corpora
Denis Arnold | Bernhard Fisseni | Pawel Kamocki | Oliver Schonefeld | Marc Kupietz | Thomas Schmidt
Proceedings of the 8th Workshop on Challenges in the Management of Large Corpora

This paper addresses long-term archiving of large corpora. We focus on three aspects specific to language resources, namely (1) the removal of resources for legal reasons, (2) the versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases as well as of different collections, and (3) the conversion of data to new formats for digital preservation. We motivate why language resources may have to be changed and why formats may need to be converted. As a solution, we suggest the use of an intermediate proxy object called a signpost. The approach is exemplified with the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
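The signpost is only characterized abstractly here, so the following is a speculative sketch of one way such a proxy object could behave: resolving a persistent identifier to a specific version of a resource, or to nothing once the resource has been removed for legal reasons. All names and fields are assumptions, not the paper's design.

```python
# Speculative sketch of a "signpost" proxy object (names/fields assumed).

from dataclasses import dataclass, field

@dataclass
class Signpost:
    pid: str                                       # persistent identifier
    versions: dict = field(default_factory=dict)   # release -> storage path
    removed: bool = False                          # removed for legal reasons?

    def resolve(self, version=None):
        if self.removed:
            return None                   # tombstone: object no longer served
        if version is None:
            version = max(self.versions)  # default to the latest release
        return self.versions[version]

sp = Signpost("dereko:doc42",
              {"2019-I": "/v1/doc42.xml", "2020-I": "/v2/doc42.tei"})
print(sp.resolve())          # -> /v2/doc42.tei
```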

Using full text indices for querying spoken language data
Elena Frick | Thomas Schmidt
Proceedings of the 8th Workshop on Challenges in the Management of Large Corpora

As part of the ZuMult project, we are currently modelling a backend architecture that should provide query access to corpora from the Archive for Spoken German (AGD) at the Leibniz Institute for the German Language (IDS). We are exploring how to reuse existing search engine frameworks that provide full text indices and allow corpora to be queried with one of the corpus query languages (QLs) established and actively used in the corpus research community. For this purpose, we tested MTAS, an open source Lucene-based search engine for querying text with multilevel annotations. We applied MTAS to three oral corpora stored in the TEI-based ISO standard for transcriptions of spoken language (ISO 24624:2016). These corpora differ from the corpus data that MTAS was developed for because they include interactions with two or more speakers and are enriched, inter alia, with timeline-based annotations. In this contribution, we report our test results and address issues that arise when search frameworks originally developed for querying written corpora are transferred to the field of spoken language.

Using Automatic Speech Recognition in Spoken Corpus Curation
Jan Gorisch | Michael Gref | Thomas Schmidt
Proceedings of the Twelfth Language Resources and Evaluation Conference

The newest generation of speech technology has led to a huge increase in audio-visual data that is now enhanced with orthographic transcripts, for example through automatic subtitling on online platforms. Research data centers and archives contain a range of new and historical data which are currently only partially transcribed and therefore only partially accessible for systematic querying. Automatic Speech Recognition (ASR) is one option for making such data accessible. This paper tests the usability of a state-of-the-art ASR system on a historical (from the 1960s) but regionally balanced corpus of spoken German, and on a relatively new corpus (from 2012) recorded in a narrow area. We observed a regional bias of the ASR system, with higher recognition scores for the north of Germany and lower scores for the south. A detailed analysis of the narrow-region data revealed, despite relatively high ASR confidence, some specific word errors due to a lack of regional adaptation. These findings need to be considered in decisions on further data processing and the curation of corpora, e.g. correcting transcripts or transcribing from scratch. Such geography-dependent analyses also have the potential to support ASR development, by enabling targeted data selection for training/adaptation and by increasing the sensitivity towards varieties of pluricentric languages.
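To make the regional evaluation idea concrete, here is a minimal sketch of computing word error rate (WER) per recording region. The WER function is the standard word-level edit distance; the sample data are invented.

```python
# Sketch: compare ASR word error rate (WER) across regions (toy data).

def wer(ref, hyp):
    """Word-level Levenshtein distance between reference and hypothesis,
    normalized by reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[-1][-1] / len(r)

samples = [  # (region, manual reference, ASR hypothesis) -- invented
    ("north", "ich habe das gesehen", "ich habe das gesehen"),
    ("south", "i hab des gsehen", "ich habe das gesehen"),
]
for region, ref, hyp in samples:
    print(region, round(wer(ref, hyp), 2))
```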

Improving Sentence Boundary Detection for Spoken Language Transcripts
Ines Rehbein | Josef Ruppenhofer | Thomas Schmidt
Proceedings of the Twelfth Language Resources and Evaluation Conference

This paper presents experiments on sentence boundary detection in transcripts of spoken dialogues. Segmenting spoken language into sentence-like units is a challenging task, due to disfluencies, ungrammatical or fragmented structures and the lack of punctuation. In addition, one of the main bottlenecks for many NLP applications for spoken language is the small size of the training data, as the transcription and annotation of spoken language is far more time-consuming and labour-intensive than processing written language. We therefore investigate the benefits of data expansion and transfer learning and test different ML architectures for this task. Our results show that data expansion is not straightforward and that even data from the same domain does not always improve results. They also highlight the importance of modelling, i.e. of finding the best architecture and data representation for the task at hand. For the detection of boundaries in spoken language transcripts, we achieve a substantial improvement when framing the boundary detection problem as a sentence pair classification task, as compared to a sequence tagging approach.
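A minimal sketch, assuming a Hugging Face setup, of what framing boundary detection as sentence pair classification looks like: the left and right contexts around a candidate boundary are encoded as a sentence pair, and a binary classifier decides boundary versus no boundary. The model name and example are illustrative, and the classification head here is untrained.

```python
# Sketch: candidate boundary between two token windows, decided by a
# sentence pair classifier (untrained head: prediction is illustrative only).

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-german-cased", num_labels=2)  # 0 = no boundary, 1 = boundary

left = "also ich weiß nicht"
right = "ob das so stimmt"
inputs = tok(left, right, return_tensors="pt")   # encoded as a sentence pair
with torch.no_grad():
    logits = model(**inputs).logits
print("boundary" if logits.argmax(-1).item() == 1 else "no boundary")
```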

2018

An Evaluation of Lexicon-based Sentiment Analysis Techniques for the Plays of Gotthold Ephraim Lessing
Thomas Schmidt | Manuel Burghardt
Proceedings of the Second Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

We present results from a project in the research area of sentiment analysis of drama texts, more specifically the plays of Gotthold Ephraim Lessing. We conducted an annotation study to create a gold standard for a systematic evaluation. The gold standard consists of 200 speeches from Lessing’s plays, manually annotated with sentiment information. We explore the performance of different German sentiment lexicons and processing configurations, such as lemmatization, the extension of lexicons with historical linguistic variants, and stop word elimination, to determine the influence of these parameters and find best practices for our domain of application. The best performing configuration achieves an accuracy of 70%. We discuss the problems and challenges of sentiment analysis in this area and describe our next steps toward further research.
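A minimal sketch of lexicon-based sentiment scoring with two of the preprocessing switches evaluated in the paper (lemmatization, stop word elimination). The four-entry lexicon and the spaCy model name are illustrative assumptions, not the resources actually evaluated.

```python
# Sketch: lexicon-based sentiment scoring with optional lemmatization and
# stop word elimination. Lexicon and model are toy assumptions.
# Requires: python -m spacy download de_core_news_sm

import spacy

nlp = spacy.load("de_core_news_sm")
lexicon = {"liebe": 1.0, "glück": 1.0, "hass": -1.0, "schmerz": -1.0}

def score(text, lemmatize=True, drop_stop_words=True):
    total = 0.0
    for token in nlp(text):
        if drop_stop_words and token.is_stop:
            continue
        form = (token.lemma_ if lemmatize else token.text).lower()
        total += lexicon.get(form, 0.0)
    return total

print(score("Die Liebe besiegt den Schmerz."))  # one positive, one negative
```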

2016

User, who art thou? User Profiling for Oral Corpus Platforms
Christian Fandrych | Elena Frick | Hanna Hedeland | Anna Iliash | Daniel Jettka | Cordula Meißner | Thomas Schmidt | Franziska Wallner | Kathrin Weigert | Swantje Westpfahl
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This contribution presents the background, design and results of a study of users of three oral corpus platforms in Germany. Roughly 5,000 registered users of the Database for Spoken German (DGD), the GeWiss corpus and the corpora of the Hamburg Centre for Language Corpora (HZSK) were asked to participate in a user survey. This quantitative approach was complemented by qualitative interviews with selected users. We briefly introduce the corpus resources involved in the study in section 2. Section 3 describes the methods employed in the user studies. Section 4 summarizes results of the studies, focusing on selected key topics. Section 5 attempts a generalization of these results to larger contexts.

FOLK-Gold ― A Gold Standard for Part-of-Speech-Tagging of Spoken German
Swantje Westpfahl | Thomas Schmidt
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In this paper, we present a gold standard of part-of-speech-tagged transcripts of spoken German. The gold standard data consists of four annotation layers ― transcription (modified orthography), normalization (standard orthography), lemmatization and POS tags ― all of which have undergone careful manual quality control. It comes with guidelines for the manual POS annotation of transcripts of German spoken data and an extended version of the STTS (Stuttgart-Tübingen Tagset) which accounts for phenomena typically found in spontaneous spoken German. The gold standard was developed on the basis of the Research and Teaching Corpus of Spoken German, FOLK, and is, to our knowledge, the first such dataset based on a wide variety of spontaneous and authentic interaction types. It can be used as a basis for further development of language technology and corpus linguistic applications for German spoken language.

2014

The Database for Spoken German — DGD2
Thomas Schmidt
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

The Database for Spoken German (Datenbank für Gesprochenes Deutsch, DGD2, http://dgd.ids-mannheim.de) is the central platform for publishing and disseminating spoken language corpora from the Archive of Spoken German (Archiv für Gesprochenes Deutsch, AGD, http://agd.ids-mannheim.de) at the Institute for the German Language in Mannheim. The corpora contained in the DGD2 come from a variety of sources, some of them in-house projects, some of them external projects. Most of the corpora were originally intended either for research into the (dialectal) variation of German or for studies in conversation analysis and related fields. The AGD has taken over the task of permanently archiving these resources and making them available for reuse to the research community. To date, the DGD2 offers access to 19 different corpora, totalling around 9000 speech events, 2500 hours of audio recordings or 8 million transcribed words. This paper gives an overview of the data made available via the DGD2, of the technical basis for its implementation, and of the most important functionalities it offers. The paper concludes with information about the users of the database and future plans for its development.

The Research and Teaching Corpus of Spoken German — FOLK
Thomas Schmidt
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

FOLK is the “Forschungs- und Lehrkorpus Gesprochenes Deutsch” (English: Research and Teaching Corpus of Spoken German). The project has set itself the aim of building a corpus of German conversations which a) covers a broad range of interaction types in private, institutional and public settings, b) is sufficiently large and diverse and of sufficient quality to support different qualitative and quantitative research approaches, c) is transcribed, annotated and made accessible according to current technological standards, and d) is available to the scientific community on a sound legal basis and without unnecessary restrictions of usage. This paper gives an overview of the corpus design, the strategies for acquiring a diverse range of interaction data, and the corpus construction workflow from recording via transcription and annotation to dissemination.

2012

EXMARaLDA and the FOLK tools — two toolsets for transcribing and annotating spoken language
Thomas Schmidt
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper presents two toolsets for transcribing and annotating spoken language: the EXMARaLDA system, developed at the University of Hamburg, and the FOLK tools, developed at the Institute for the German Language in Mannheim. Both systems are targeted at users interested in the analysis of spontaneous, multi-party discourse. Their main user community is situated in conversation analysis, pragmatics, sociolinguistics and related fields. The paper gives an overview of the individual tools of the two systems ― the Partitur-Editor, a tool for multi-level annotation of audio or video recordings, the Corpus Manager, a tool for creating and administering corpus metadata, EXAKT, a query and analysis tool for spoken language corpora, FOLKER, a transcription editor optimized for speed and efficiency of transcription, and OrthoNormal, a tool for orthographical normalization of transcription data. It concludes with some thoughts about the integration of these tools into the larger tool landscape.

2010

FOLKER: An Annotation Tool for Efficient Transcription of Natural, Multi-party Interaction
Thomas Schmidt | Wilfried Schütte
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents FOLKER, an annotation tool developed for the efficient transcription of natural, multi-party interaction in a conversation analysis framework. FOLKER is being developed at the Institute for the German Language in and for the FOLK project, whose aim is the construction of a large corpus of spoken present-day German, to be used for research and teaching purposes. FOLKER builds on the experience gained with multi-purpose annotation tools like ELAN and EXMARaLDA, but attempts to improve transcription efficiency by restricting and optimizing both the data model and the tool functionality to a single, well-defined purpose. The tool’s most important features in this respect are the possibility to switch freely between several editable views according to the requirements of different steps in the annotation process, and an automatic syntax check of annotations during input for their conformance to the GAT transcription conventions. The paper starts with a description of the GAT transcription conventions and the data model underlying the tool. It then gives an overview of the tool’s functionality and compares it to that of other widely used tools.
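As a toy illustration of an input-time syntax check, the snippet below validates two simplified rules loosely inspired by minimal GAT-style transcription (numeric pause annotations and balanced overlap brackets). These rules are assumptions for illustration; FOLKER's actual rule set is more comprehensive.

```python
# Toy syntax checker in the spirit of FOLKER's input-time GAT conformance
# check. The two rules below are simplified assumptions, not the tool's rules.

import re

PAUSE = re.compile(r"\((?:\.|\d+\.\d+)\)")   # e.g. "(.)" or "(0.5)"

def check_segment(text):
    errors = []
    # Rule 1: parenthesised material must be a valid pause annotation.
    for chunk in re.findall(r"\([^)]*\)", text):
        if not PAUSE.fullmatch(chunk):
            errors.append(f"invalid pause annotation: {chunk}")
    # Rule 2: overlap brackets must be balanced.
    if text.count("[") != text.count("]"):
        errors.append("unbalanced overlap brackets")
    return errors

print(check_segment("ja (0.5) genau [und dann]"))  # -> []
print(check_segment("ja (0,5) genau [und dann"))   # -> two errors
```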

2008

An Exchange Format for Multimodal Annotations
Thomas Schmidt | Susan Duncan | Oliver Ehmer | Jeffrey Hoyt | Michael Kipp | Dan Loehr | Magnus Magnusson | Travis Rose | Han Sloetjes
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools.
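The annotation graph formalism the exchange format builds on can be summarized in a few lines: nodes optionally anchored to a timeline, with labeled arcs between them carrying the annotations. A minimal illustrative encoding follows; the actual exchange format is XML-based and considerably richer.

```python
# Minimal illustration of the annotation graph formalism: timeline-anchored
# nodes connected by labeled arcs, one tier per annotation type (toy data).

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    id: str
    time: Optional[float]    # anchored nodes carry a timeline offset

@dataclass(frozen=True)
class Arc:
    source: Node
    target: Node
    tier: str                # e.g. "words", "gesture"
    label: str

n0, n1, n2 = Node("n0", 0.0), Node("n1", 0.42), Node("n2", 0.9)
graph = [
    Arc(n0, n1, "words", "hello"),
    Arc(n1, n2, "words", "there"),
    Arc(n0, n2, "gesture", "wave"),   # the gesture overlaps both words
]
for arc in graph:
    print(f"[{arc.source.time}-{arc.target.time}] {arc.tier}: {arc.label}")
```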