2019
Predicting Suicide Risk from Online Postings in Reddit: The UGent-IDLab Submission to the CLPsych 2019 Shared Task A
Semere Kiros Bitew | Giannis Bekoulis | Johannes Deleu | Lucas Sterckx | Klim Zaporojets | Thomas Demeester | Chris Develder
Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology
This paper describes IDLab’s text classification systems submitted to Task A of the CLPsych 2019 shared task. The aim of this shared task was to develop automated systems that predict the degree of suicide risk of people based on their posts on Reddit. Bag-of-words features, emotion features, and post-level predictions are used to derive user-level predictions. Linear models and ensembles of these models are used to predict final scores. We find that predicting fine-grained risk levels is much more difficult than flagging potentially at-risk users. Furthermore, we do not find clear added value from building richer ensembles compared to simple baselines, given the available training data and the nature of the prediction task.
A Self-Training Approach for Short Text Clustering
Amir Hadifar | Lucas Sterckx | Thomas Demeester | Chris Develder
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
Short text clustering is a challenging problem when adopting traditional bag-of-words or TF-IDF representations, since these lead to sparse vector representations of the short texts. Low-dimensional continuous representations or embeddings can counter that sparseness problem: their high representational power is exploited in deep clustering algorithms. While deep clustering has been studied extensively in computer vision, relatively little work has focused on NLP. The method we propose learns discriminative features from both an autoencoder and a sentence embedding, then uses assignments from a clustering algorithm as supervision to update weights of the encoder network. Experiments on three short text datasets empirically validate the effectiveness of our method.
2018
Predicting Psychological Health from Childhood Essays. The UGent-IDLab CLPsych 2018 Shared Task System.
Klim Zaporojets | Lucas Sterckx | Johannes Deleu | Thomas Demeester | Chris Develder
Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic
This paper describes the IDLab system submitted to Task A of the CLPsych 2018 shared task. The goal of this task is to predict the psychological health of children based on the language used in hand-written essays and socio-demographic control variables. Our entry uses word- and character-based features as well as lexicon-based features and features derived from the essays, such as the quality of the language. We apply linear models, gradient boosting, and neural-network-based regressors (feed-forward, CNNs and RNNs) to predict scores. We then make ensembles of our best performing models using a weighted average.
2017
Break it Down for Me: A Study in Automated Lyric Annotation
Lucas Sterckx | Jason Naradowsky | Bill Byrne | Thomas Demeester | Chris Develder
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Comprehending lyrics, as found in songs and poems, can pose a challenge to human and machine readers alike. This motivates the need for systems that can understand the ambiguity and jargon found in such creative texts, and provide commentary to aid readers in reaching the correct interpretation. We introduce the task of automated lyric annotation (ALA). Like text simplification, a goal of ALA is to rephrase the original text in a more easily understandable manner. However, in ALA the system must often include additional information to clarify niche terminology and abstract concepts. To stimulate research on this task, we release a large collection of crowdsourced annotations for song lyrics. We analyze the performance of translation and retrieval models on this task, measuring performance with both automated and human evaluation. We find that each model captures a unique type of information important to the task.
2016
Supervised Keyphrase Extraction as Positive Unlabeled Learning
Lucas Sterckx | Cornelia Caragea | Thomas Demeester | Chris Develder
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing