Fosca Giannotti
2024
An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
Benedetta Muscato | Chandana Sree Mala | Marta Marchiori Manerba | Gizem Gezici | Fosca Giannotti
Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024
The varied backgrounds and experiences of human annotators inject different opinions and potential biases into the data, inevitably leading to disagreements. Yet traditional aggregation methods fail to capture individual judgments, since they rely on the notion of a single ground truth. Our aim is to review prior contributions to pinpoint the shortcomings that might lead to stereotypical content generation. As a preliminary study, we investigate state-of-the-art approaches along two research directions. First, we examine how adding subjectivity aspects to LLMs might foster diversity. Second, we look into the alignment between humans and LLMs and discuss how to measure it. Considering existing gaps, our review explores possible methods to mitigate the perpetuation of biases targeting specific communities. However, we recognize that using socio-demographic data in the training process carries the risk of disseminating sensitive information. These considerations underscore the value of including diverse perspectives, together with the critical importance of implementing robust safeguards to protect individuals' privacy and prevent the inadvertent propagation of sensitive information.
2023
HANSEN: Human and AI Spoken Text Benchmark for Authorship Analysis
Nafis Tripto | Adaku Uchendu | Thai Le | Mattia Setzu | Fosca Giannotti | Dongwon Lee
Findings of the Association for Computational Linguistics: EMNLP 2023
Authorship analysis, also known as stylometry, has long been an essential aspect of Natural Language Processing (NLP). Likewise, the recent advancement of Large Language Models (LLMs) has made authorship analysis increasingly crucial for distinguishing between human-written and AI-generated texts. However, authorship analysis tasks have primarily focused on written texts, leaving spoken texts largely unexplored. Thus, we introduce the largest benchmark for spoken texts: HANSEN (Human ANd ai Spoken tExt beNchmark). HANSEN encompasses the meticulous curation of existing speech datasets accompanied by transcripts, alongside the creation of novel AI-generated spoken text datasets. In total, it comprises 17 human datasets and AI-generated spoken texts created using 3 prominent LLMs: ChatGPT, PaLM2, and Vicuna13B. To evaluate and demonstrate the utility of HANSEN, we perform Authorship Attribution (AA) and Author Verification (AV) on the human-spoken datasets and conduct human vs. AI text detection using state-of-the-art (SOTA) models. While SOTA methods, such as character n-gram or Transformer-based models, exhibit AA and AV performance on human-spoken datasets similar to that on written ones, there is much room for improvement in AI-generated spoken text detection. The HANSEN benchmark is available at: https://huggingface.co/datasets/HANSEN-REPO/HANSEN