Joachim Wagner


2024

Beyond Binary: Towards Embracing Complexities in Cyberbullying Detection and Intervention - a Position Paper
Kanishk Verma | Kolawole John Adebayo | Joachim Wagner | Megan Reynolds | Rebecca Umbach | Tijana Milosevic | Brian Davis
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

In the digital age, cyberbullying (CB) poses a significant concern, impacting individuals as early as primary school and leading to severe or lasting consequences, including an increased risk of self-harm. CB incidents are not limited to bullies and victims but involve bystanders in various roles, and usually span numerous sub-categories and variations of online harms. This position paper emphasises the complexity of CB incidents by drawing on insights from psychology, social sciences, and computational linguistics. While awareness of CB complexities is growing, existing computational techniques tend to oversimplify CB as a binary classification task, often relying on training datasets that capture only the peripheries of CB behaviours. Inconsistent definitions and categories of CB-related online harms across various platforms further complicate the issue. Ethical concerns arise when CB research asks children to role-play CB incidents in order to curate datasets. Through multi-disciplinary collaboration, we propose strategies for consideration when developing CB detection systems. We present our position on leveraging large language models (LLMs) such as Claude-2 and Llama2-Chat as an alternative approach to generating CB-related role-playing datasets. Our goal is to assist researchers, policymakers, and online platforms in making informed decisions regarding the automation of CB incident detection and intervention. By addressing these complexities, our research contributes to a more nuanced and effective approach to combating CB, especially in young people.
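The abstract names Claude-2 and Llama2-Chat as candidate generators for role-play data. As a purely illustrative sketch of what such prompting could look like (the prompt wording, model choice, and generation settings below are assumptions, not the authors' protocol), one might ask an instruction-tuned LLM for a labelled, fictional role-play exchange:

```python
# Illustrative only: prompting an instruction-tuned LLM for a labelled,
# clearly fictional role-play exchange. Prompt and settings are assumptions,
# not the protocol proposed in the paper.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # Llama2-Chat, as named above
)

prompt = (
    "[INST] Write a short, clearly fictional group-chat exchange among four "
    "teenagers in which one participant is targeted, one instigates, one "
    "joins in, and one defends the target. Label each message with the "
    "speaker's role (victim, bully, assistant, defender). [/INST]"
)

output = generator(prompt, max_new_tokens=300, do_sample=True, temperature=0.8)
print(output[0]["generated_text"])
```

Any dialogue generated this way would still need review by domain experts before use as training data, in line with the multi-disciplinary oversight the paper argues for.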

2023

DCU at SemEval-2023 Task 10: A Comparative Analysis of Encoder-only and Decoder-only Language Models with Insights into Interpretability
Kanishk Verma | Kolawole Adebayo | Joachim Wagner | Brian Davis
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)

We conduct a comparison of pre-trained encoder-only and decoder-only language models, with and without continued pre-training, to detect online sexism. Our fine-tuning-based classifier system ranked 16th in SemEval-2023 Task 10 Subtask A, which asks systems to distinguish sexist from non-sexist texts. Additionally, we conduct experiments aimed at enhancing the interpretability of systems designed to detect online sexism. Our findings provide insights into the features and decision-making processes underlying our classifier system, thereby contributing to the broader effort to develop explainable AI models for detecting online sexism.
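As a rough sketch of the encoder-only fine-tuning setup described above (the base model, file names, column names, and hyperparameters here are illustrative assumptions, not the submitted configuration):

```python
# Rough sketch of encoder-only fine-tuning for binary sexism detection.
# Model choice, file names, and hyperparameters are illustrative; the CSVs
# are assumed to have "text" and "label" columns.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # sexist vs. not sexist

data = load_dataset("csv", data_files={"train": "train.csv", "dev": "dev.csv"})
data = data.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sexism-clf", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["dev"],
    tokenizer=tokenizer,  # enables dynamic padding of each batch
)
trainer.train()
```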

Investigating the Saliency of Sentiment Expressions in Aspect-Based Sentiment Analysis
Joachim Wagner | Jennifer Foster
Findings of the Association for Computational Linguistics: ACL 2023

We examine the behaviour of an aspect-based sentiment classifier built by fine-tuning the BERT-base model on the SemEval 2016 English dataset. In a set of masking experiments, we examine the extent to which the tokens identified as salient by LIME and a gradient-based method are being used by the classifier. We find that both methods are able to produce faithful rationales, with LIME outperforming the gradient-based method. We also identify a set of manually annotated sentiment expressions for this dataset, and carry out more masking experiments with these as human rationales. The enhanced performance of a classifier that only sees the relevant sentiment expressions suggests that they are not being used to their full potential. A comparison of the LIME and gradient rationales with the sentiment expressions reveals only a moderate level of agreement. Some disagreements are related to the fixed length of the rationales and the tendency of the rationales to contain content words related to the aspect itself.
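A minimal sketch of one such masking experiment, under the assumption of a fine-tuned BERT classifier and per-token saliency scores from a method such as LIME (the helper below is hypothetical, not the paper's code):

```python
# Hypothetical helper for one masking experiment: mask the k most salient
# tokens and see whether the classifier's prediction changes. `model` and
# `tokenizer` are assumed to be a fine-tuned BERT classifier; `saliency`
# holds per-token scores from e.g. LIME or a gradient-based method.
import torch

def prediction_after_masking(model, tokenizer, tokens, saliency, k):
    # Indices of the k most salient tokens (the candidate rationale).
    top_k = sorted(range(len(tokens)), key=lambda i: -saliency[i])[:k]
    masked = [tokenizer.mask_token if i in top_k else t
              for i, t in enumerate(tokens)]
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()
```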

2022

gaBERT — an Irish Language Model
James Barry | Joachim Wagner | Lauren Cassidy | Alan Cowap | Teresa Lynn | Abigail Walsh | Mícheál J. Ó Meachair | Jennifer Foster
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The BERT family of neural language models has become highly popular due to its ability to provide sequences of text with rich context-sensitive token encodings that generalise well to many NLP tasks. We introduce gaBERT, a monolingual BERT model for the Irish language. We compare gaBERT to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task. We also show how different filtering criteria, vocabulary size, and the choice of subword tokenisation model affect downstream performance. We compare the results of fine-tuning a gaBERT model with an mBERT model for the task of identifying verbal multiword expressions, and show that the fine-tuned gaBERT model also performs better at this task. We release gaBERT and related code to the community.
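Since gaBERT is released as an ordinary BERT checkpoint, it can be loaded with the standard tooling; a minimal usage sketch follows. The Hugging Face hub identifier below is given to the best of our knowledge and should be verified against the gaBERT release.

```python
# Masked-token prediction with the released gaBERT checkpoint. The hub
# identifier is believed to be the released one; verify against the release.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="DCU-NLP/bert-base-irish-cased-v1")
for p in fill_mask("Tá an [MASK] go maith."):  # "The [MASK] is good."
    print(p["token_str"], round(p["score"], 3))
```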

2021

The DCU-EPFL Enhanced Dependency Parser at the IWPT 2021 Shared Task
James Barry | Alireza Mohammadshahi | Joachim Wagner | Jennifer Foster | James Henderson
Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)

We describe the DCU-EPFL submission to the IWPT 2021 Parsing Shared Task: From Raw Text to Enhanced Universal Dependencies. The task involves parsing Enhanced UD graphs, which extend the basic dependency trees to better represent semantic structure. Evaluation is carried out on 29 treebanks in 17 languages, and participants are required to parse the data from each language starting from raw strings. Our approach uses the Stanza pipeline to preprocess the text files, XLM-RoBERTa to obtain contextualized token representations, and an edge-scoring and labeling model to predict the enhanced graph. Finally, we run a postprocessing script to ensure all of our outputs are valid Enhanced UD graphs. Our system places 6th out of 9 participants with a coarse Enhanced Labeled Attachment Score (ELAS) of 83.57. We carry out additional post-deadline experiments which include using Trankit for preprocessing, XLM-RoBERTa LARGE, treebank concatenation, and multitask learning between a basic and an enhanced dependency parser. All of these modifications improve our initial score, and our final system has a coarse ELAS of 88.04.
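A sketch of the first two pipeline stages named above, Stanza preprocessing and XLM-RoBERTa encoding (the edge-scoring and labelling model and the validity postprocessing are omitted; this is not the submitted code):

```python
# First two stages of the described pipeline: Stanza tokenisation and
# XLM-RoBERTa contextualised encodings. The graph predictor is omitted.
import stanza
import torch
from transformers import AutoModel, AutoTokenizer

stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize")

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
encoder = AutoModel.from_pretrained("xlm-roberta-base")

doc = nlp("Parsing enhanced dependencies starts from raw text.")
words = [w.text for sent in doc.sentences for w in sent.words]
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    states = encoder(**inputs).last_hidden_state  # one vector per subword
print(states.shape)
```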

Naive Bayes versus BERT: Jupyter notebook assignments for an introductory NLP course
Jennifer Foster | Joachim Wagner
Proceedings of the Fifth Workshop on Teaching NLP

We describe two Jupyter notebooks that form the basis of two assignments in an introductory Natural Language Processing (NLP) module taught to final year undergraduate students at Dublin City University. The notebooks show the students how to train a bag-of-words polarity classifier using multinomial Naive Bayes, and how to fine-tune a polarity classifier using BERT. The students take the code as a starting point for their own experiments.
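A minimal sketch of the first notebook's approach, a bag-of-words polarity classifier with multinomial Naive Bayes (toy data stands in for the corpus used in the module):

```python
# Toy version of the first notebook: bag-of-words + multinomial Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["a wonderful film", "a dull , lifeless movie"]  # toy stand-ins
train_labels = ["pos", "neg"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["a truly wonderful film"]))
```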

Revisiting Tri-training of Dependency Parsers
Joachim Wagner | Jennifer Foster
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

We compare two orthogonal semi-supervised learning techniques, namely tri-training and pretrained word embeddings, in the task of dependency parsing. We explore language-specific FastText and ELMo embeddings and multilingual BERT embeddings. We focus on a low resource scenario as semi-supervised learning can be expected to have the most impact here. Based on treebank size and available ELMo models, we select Hungarian, Uyghur (a zero-shot language for mBERT) and Vietnamese. Furthermore, we include English in a simulated low-resource setting. We find that pretrained word embeddings make more effective use of unlabelled data than tri-training but that the two approaches can be successfully combined.
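Tri-training, schematically: three parsers label the unlabelled data, and each parser is retrained on the sentences where the other two agree. The loop below assumes hypothetical train()/parse() interfaces and comparable parse objects, and glosses over the selection details in the paper:

```python
# Schematic tri-training loop; parsers are assumed to expose hypothetical
# train()/parse() methods. Agreement and selection details differ in the
# actual experiments.
def tri_train(parsers, labelled, unlabelled, rounds):
    for _ in range(rounds):
        preds = [[p.parse(s) for s in unlabelled] for p in parsers]
        for i, parser in enumerate(parsers):
            j, k = [x for x in range(3) if x != i]
            # Keep sentences on which the other two parsers agree.
            extra = [preds[j][n] for n in range(len(unlabelled))
                     if preds[j][n] == preds[k][n]]
            parser.train(labelled + extra)
    return parsers
```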

2020

Treebank Embedding Vectors for Out-of-Domain Dependency Parsing
Joachim Wagner | James Barry | Jennifer Foster
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

A recent advance in monolingual dependency parsing is the idea of a treebank embedding vector, which allows all treebanks for a particular language to be used as training data while at the same time allowing the model to prefer training data from one treebank over others and to select the preferred treebank at test time. We build on this idea by 1) introducing a method to predict a treebank vector for sentences that do not come from a treebank used in training, and 2) exploring what happens when we move away from predefined treebank embedding vectors during test time and instead devise tailored interpolations. We show that 1) there are interpolated vectors that are superior to the predefined ones, and 2) treebank vectors can be predicted with sufficient accuracy, for nine out of ten test languages, to match the performance of an oracle approach that knows the most suitable predefined treebank embedding for the test set.
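The interpolation in point 2 can be pictured as a weighted average of the predefined treebank embedding vectors; below is a self-contained sketch with toy vectors (the real vectors are learned parameters of the parser):

```python
# Self-contained sketch of interpolating predefined treebank embedding
# vectors; the toy vectors stand in for learned parser parameters.
import numpy as np

def interpolate(treebank_vectors, weights):
    """Weighted average of predefined treebank embedding vectors."""
    return np.average(np.asarray(treebank_vectors), axis=0, weights=weights)

rng = np.random.default_rng(0)
vec_a, vec_b = rng.normal(size=(2, 12))  # two 12-dim toy treebank vectors
print(interpolate([vec_a, vec_b], [0.75, 0.25]))
```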

The ADAPT Enhanced Dependency Parser at the IWPT 2020 Shared Task
James Barry | Joachim Wagner | Jennifer Foster
Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies

We describe the ADAPT system for the 2020 IWPT Shared Task on parsing enhanced Universal Dependencies in 17 languages. We implement a pipeline approach using UDPipe and UDPipe-future to provide initial levels of annotation. The enhanced dependency graph is either produced by a graph-based semantic dependency parser or is built from the basic tree using a small set of heuristics. Our results show that, for the majority of languages, a semantic dependency parser can be successfully applied to the task of parsing enhanced dependencies. Unfortunately, we did not ensure a connected graph as part of our pipeline approach, and our competition submission relied on a last-minute fix to pass the validation script, which significantly harmed our official evaluation scores. Our submission ranked eighth in the official evaluation with a macro-averaged coarse ELAS F1 of 67.23 and a treebank average of 67.49. We later implemented our own graph-connecting fix which resulted in a score of 79.53 (language average) or 79.76 (treebank average), which would have placed fourth in the competition evaluation.
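A graph-connecting fix of the kind mentioned can be as simple as attaching every node that is unreachable from the root directly to the root; the sketch below is deliberately crude and may differ from the authors' actual fix:

```python
# Deliberately crude graph-connecting heuristic: attach any node that is
# unreachable from the root (node 0) directly to the root so the enhanced
# graph validates. The authors' own fix may differ in detail.
def connect_graph(n_nodes, edges):
    """edges: set of (head, dependent) pairs over nodes 0..n_nodes."""
    reachable, frontier = {0}, [0]
    while frontier:
        head = frontier.pop()
        for h, d in edges:
            if h == head and d not in reachable:
                reachable.add(d)
                frontier.append(d)
    for node in range(1, n_nodes + 1):
        if node not in reachable:
            edges.add((0, node))  # may add more edges than strictly needed
    return edges
```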

2019

Cross-lingual Parsing with Polyglot Training and Multi-treebank Learning: A Faroese Case Study
James Barry | Joachim Wagner | Jennifer Foster
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)

Cross-lingual dependency parsing involves transferring syntactic knowledge from one language to another. It is a crucial component for inducing dependency parsers in low-resource scenarios where no training data for a language exists. Using Faroese as the target language, we compare two approaches using annotation projection: first, projecting from multiple monolingual source models; second, projecting from a single polyglot model that is trained on the combination of all source languages. Furthermore, we reproduce multi-source projection (Tyers et al., 2018), in which dependency trees of multiple sources are combined. Finally, we apply multi-treebank modelling to the projected treebanks, in addition to, or as an alternative to, polyglot modelling on the source side. We find that polyglot training on the source languages produces an overall trend of better results on the target language, but the single best result for the target language is obtained by projecting from monolingual source parsing models and then training multi-treebank POS tagging and parsing models on the target side.
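Annotation projection, schematically: source-side dependency edges are copied to target tokens through word alignments. The function below is a bare-bones illustration; real projection (and the multi-source combination of Tyers et al., 2018) applies additional alignment filtering:

```python
# Bare-bones annotation projection through word alignments; real projection
# involves additional filtering of unaligned or conflicting edges.
def project_edges(source_edges, alignment):
    """source_edges: (head, dependent, label) triples over source indices;
    alignment: dict from source token index to target token index."""
    return [(alignment[h], alignment[d], label)
            for h, d, label in source_edges
            if h in alignment and d in alignment]
```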

APE through Neural and Statistical MT with Augmented Data. ADAPT/DCU Submission to the WMT 2019 APE Shared Task
Dimitar Shterionov | Joachim Wagner | Félix do Carmo
Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)

Automatic post-editing (APE) can be reduced to a machine translation (MT) task, where the source is the output of a specific MT system and the target is its post-edited variant. However, this approach does not consider context information that can be found in the original source of the MT system. A better approach is therefore to employ multi-source MT, where two input sequences are considered: the original source and the MT output. Extra context information can be introduced in the form of extra tokens that identify a certain global property of a group of segments, added as a prefix or a suffix to each segment. Having been successfully applied to domain adaptation of MT as well as to APE, this technique deserves further attention. In this work we investigate multi-source neural APE (NPE) systems with training data that has been augmented with two types of extra context tokens. We experiment with the authentic and synthetic data provided by WMT 2019 and submit our results to the APE shared task. We also experiment with using statistical machine translation (SMT) methods for APE. While our systems score below the baseline, we consider this work a step towards understanding the added value of extra context in the case of APE.
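The extra-context-token idea is mechanically simple: prepend (or append) a tag encoding a global property of a group of segments, for example whether the data is authentic or synthetic. An illustrative sketch (the tag inventory below is invented):

```python
# Illustrative context-token augmentation for multi-source APE data;
# the tag inventory here is invented for the example.
def add_context_token(pairs, tag):
    """Prefix each (source, mt_output) segment pair with a group-level tag."""
    return [(f"<{tag}> {src}", f"<{tag}> {mt}") for src, mt in pairs]

pairs = [("Das Haus ist groß.", "The house is big.")]
print(add_context_token(pairs, "synthetic"))  # e.g. authentic vs. synthetic
```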

2016

Part-of-speech Tagging of Code-mixed Social Media Content: Pipeline, Stacking and Joint Modelling
Utsab Barman | Joachim Wagner | Jennifer Foster
Proceedings of the Second Workshop on Computational Approaches to Code Switching

2015

DCU-ADAPT: Learning Edit Operations for Microblog Normalisation with the Generalised Perceptron
Joachim Wagner | Jennifer Foster
Proceedings of the Workshop on Noisy User-generated Text

2014

DCU: Aspect-based Polarity Classification for SemEval Task 4
Joachim Wagner | Piyush Arora | Santiago Cortes | Utsab Barman | Dasha Bogdanova | Jennifer Foster | Lamia Tounsi
Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014)

Target-Centric Features for Translation Quality Estimation
Chris Hokamp | Iacer Calixto | Joachim Wagner | Jian Zhang
Proceedings of the Ninth Workshop on Statistical Machine Translation

Code Mixing: A Challenge for Language Identification in the Language of Social Media
Utsab Barman | Amitava Das | Joachim Wagner | Jennifer Foster
Proceedings of the First Workshop on Computational Approaches to Code Switching

DCU-UVT: Word-Level Language Classification with Code-Mixed Data
Utsab Barman | Joachim Wagner | Grzegorz Chrupała | Jennifer Foster
Proceedings of the First Workshop on Computational Approaches to Code Switching

2013

DCU-Symantec at the WMT 2013 Quality Estimation Shared Task
Raphael Rubino | Joachim Wagner | Jennifer Foster | Johann Roturier | Rasoul Samad Zadeh Kaljahi | Fred Hollowood
Proceedings of the Eighth Workshop on Statistical Machine Translation

2012

DCU-Symantec Submission for the WMT 2012 Quality Estimation Task
Raphael Rubino | Jennifer Foster | Joachim Wagner | Johann Roturier | Rasul Samad Zadeh Kaljahi | Fred Hollowood
Proceedings of the Seventh Workshop on Statistical Machine Translation

2011

Comparing the Use of Edited and Unedited Text in Parser Self-Training
Jennifer Foster | Özlem Çetinoğlu | Joachim Wagner | Josef van Genabith
Proceedings of the 12th International Conference on Parsing Technologies

From News to Comment: Resources and Benchmarks for Parsing the Language of Web 2.0
Jennifer Foster | Özlem Çetinoğlu | Joachim Wagner | Joseph Le Roux | Joakim Nivre | Deirdre Hogan | Josef van Genabith
Proceedings of 5th International Joint Conference on Natural Language Processing

2009

The effect of correcting grammatical errors on parse probabilities
Joachim Wagner | Jennifer Foster
Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09)

2008

Parser-Based Retraining for Domain Adaptation of Probabilistic Generators
Deirdre Hogan | Jennifer Foster | Joachim Wagner | Josef van Genabith
Proceedings of the Fifth International Natural Language Generation Conference

Adapting a WSJ-Trained Parser to Grammatically Noisy Text
Jennifer Foster | Joachim Wagner | Josef van Genabith
Proceedings of ACL-08: HLT, Short Papers

2007

Adapting WSJ-Trained Parsers to the British National Corpus using In-Domain Self-Training
Jennifer Foster | Joachim Wagner | Djamé Seddah | Josef van Genabith
Proceedings of the Tenth International Conference on Parsing Technologies

A Comparative Evaluation of Deep and Shallow Approaches to the Automatic Detection of Common Grammatical Errors
Joachim Wagner | Jennifer Foster | Josef van Genabith
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)