Philipp Schaer
2024
BATS: BenchmArking Text Simplicity 🦇
Christin Kreutz | Fabian Haak | Björn Engelmann | Philipp Schaer
Findings of the Association for Computational Linguistics: ACL 2024
Evaluation of text simplification currently focuses on the difference between a source text and its simplified variant. Datasets for this evaluation are based on a specific topic and a specific group of readers for whom texts are simplified. The broad applicability of text simplification and the specifics that come with intended target audiences (e.g., children compared to adult non-experts) are disregarded. An explainable assessment of the overall simplicity of a text is missing. This work introduces BenchmArking Text Simplicity (BATS): an explainable method that assesses practical and concrete rules from the literature describing features of simplicity and complexity of text. Our experiments on 15 text simplification datasets highlight differences in the features that are important in different text domains and for different intended target audiences.
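The abstract does not spell out the concrete rules BATS assesses, so the following is only a minimal sketch of what rule-based, explainable simplicity features can look like. It uses two generic surface features from the readability literature (average sentence length and average word length) purely as illustrative stand-ins; the function name and feature choice are assumptions, not the paper's actual feature set.

```python
import re

def simplicity_features(text: str) -> dict[str, float]:
    """Compute two illustrative, explainable surface features of simplicity.

    Shorter sentences and shorter words are classically associated with
    simpler text. (Illustrative only; not BATS's actual rule set.)
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
    }

print(simplicity_features("The cat sat. It was warm."))
# {'avg_sentence_length': 3.0, 'avg_word_length': 3.0}
```

Because each feature is a concrete, human-readable rule, an assessment built from such features can explain *why* one text is scored as simpler than another, which is the explainability property the abstract emphasizes.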
ARTS: Assessing Readability & Text Simplicity
Björn Engelmann | Christin Katharina Kreutz | Fabian Haak | Philipp Schaer
Findings of the Association for Computational Linguistics: EMNLP 2024
Automatic text simplification aims to reduce a text’s complexity. Its evaluation should quantify how easy a text is to understand. Datasets with simplicity labels on the text level are a prerequisite for developing such evaluation approaches. However, current publicly available datasets do not align with this, as they mainly treat text simplification as a relational concept (“How much simpler has this text gotten compared to the original version?”) or assign discrete readability levels. This work alleviates the problem of Assessing Readability & Text Simplicity. We present ARTS, a method for the language-independent construction of datasets for simplicity assessment. We propose using pairwise comparisons of texts in conjunction with an Elo algorithm to produce a simplicity ranking and simplicity scores. Additionally, we provide one high-quality human-labeled simplicity dataset and three GPT-labeled ones. Our results show a high correlation between human and LLM-based labels, allowing for an effective and cost-efficient way to construct large synthetic datasets.
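As a rough illustration of the pairwise-comparison idea, here is a minimal Elo update loop. The K-factor, initial rating, and the shape of the `comparisons` input are assumptions made for this sketch; the paper's actual algorithm and parameters may differ.

```python
def elo_simplicity_scores(comparisons, k=32, initial=1000.0):
    """Turn pairwise simplicity judgments into per-text Elo scores.

    `comparisons` is an iterable of (winner, loser) pairs, where the
    winner is the text judged simpler. Higher final scores mean simpler
    texts; sorting by score yields a simplicity ranking.
    (Illustrative sketch; K-factor and initial rating are assumptions.)
    """
    ratings = {}
    for winner, loser in comparisons:
        r_w = ratings.setdefault(winner, initial)
        r_l = ratings.setdefault(loser, initial)
        # Expected probability that `winner` is judged simpler,
        # given the current rating gap (standard Elo formula).
        expected_w = 1.0 / (1.0 + 10 ** ((r_l - r_w) / 400.0))
        ratings[winner] = r_w + k * (1.0 - expected_w)
        ratings[loser] = r_l + k * (0.0 - (1.0 - expected_w))
    return ratings

# Usage: A judged simpler than B twice, B judged simpler than C once.
scores = elo_simplicity_scores([("A", "B"), ("A", "B"), ("B", "C")])
ranking = sorted(scores, key=scores.get, reverse=True)  # simplest first
print(ranking, scores)
```

One appeal of this setup is that annotators (human or LLM) only answer the easier relative question "which of these two texts is simpler?", while the Elo aggregation converts those local judgments into global, continuous simplicity scores.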