Hongyu Chen
2024
What Can Go Wrong in Authorship Profiling: Cross-Domain Analysis of Gender and Age Prediction
Hongyu Chen | Michael Roth | Agnieszka Falenska
Proceedings of the 5th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
Authorship Profiling (AP) aims to predict demographic attributes (such as gender and age) of authors based on their writing styles. As models improve, the task is gaining interest and application possibilities. However, with greater use also comes the risk that authors are misclassified more frequently, and it remains unclear to what extent better models capture bias and who is affected by the models’ mistakes. In this paper, we investigate three established datasets for AP as well as classical and neural classifiers for this task. Our analyses show that it is often possible to predict the demographic information of the authors based on textual features. However, some features learned by the models are dataset-specific. Moreover, models are prone to errors based on stereotypes associated with topical bias.
How Does Quantization Affect Multilingual LLMs?
Kelly Marchisio | Saurabh Dash | Hongyu Chen | Dennis Aumiller | Ahmet Üstün | Sara Hooker | Sebastian Ruder
Findings of the Association for Computational Linguistics: EMNLP 2024
Quantization techniques are widely used to improve inference speed and deployment of large language models. While a wide body of work examines the impact of quantization on LLMs in English, none have evaluated across languages. We conduct a thorough analysis of quantized multilingual LLMs, focusing on performance across languages and at varying scales. We use automatic benchmarks, LLM-as-a-Judge, and human evaluation, finding that (1) harmful effects of quantization are apparent in human evaluation, which automatic metrics severely underestimate: a 1.7% average drop in Japanese across automatic tasks corresponds to a 16.0% drop reported by human evaluators on realistic prompts; (2) languages are disparately affected by quantization, with non-Latin script languages impacted worst; and (3) challenging tasks like mathematical reasoning degrade fastest. As the ability to serve low-compute models is critical for wide global adoption of NLP technologies, our results urge consideration of multilingual performance as a key evaluation criterion for efficient models.