Or Sharir
2024
ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs
Pengrui Han | Rafal Kocielnik | Adhithya Saravanan | Roy Jiang | Or Sharir | Anima Anandkumar
Proceedings of the Fourth Workshop on Language Technology for Equality, Diversity, Inclusion
Large language models (LLMs), while powerful, exhibit harmful social biases. Debiasing is often challenging due to computational costs, data constraints, and potential degradation of multi-task language capabilities. This work introduces a novel approach utilizing ChatGPT to generate synthetic training data, aiming to enhance the debiasing of LLMs. We propose two strategies: Targeted Prompting, which provides effective debiasing for known biases but necessitates prior specification of the bias in question; and General Prompting, which, while slightly less effective, offers debiasing across various categories. We leverage resource-efficient LLM debiasing using adapter tuning and compare the effectiveness of our synthetic data to existing debiasing datasets. Our results reveal that: (1) ChatGPT can efficiently produce high-quality training data for debiasing other LLMs; (2) data produced via our approach surpasses existing datasets in debiasing performance while also preserving the internal knowledge of a pre-trained LLM; and (3) synthetic data exhibits generalizability across categories, effectively mitigating various biases, including intersectional ones. These findings underscore the potential of synthetic data in advancing the fairness of LLMs with minimal retraining cost.
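A minimal sketch of the resource-efficient debiasing setup the abstract describes, assuming a LoRA adapter (via the `peft` library) as the parameter-efficient tuning method and a handful of hand-written sentences standing in for ChatGPT-generated data; the model name, example sentences, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Illustrative base model; the paper's target LLMs may differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

# Wrap the frozen base model with a small trainable adapter (LoRA as one adapter flavor).
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.1)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are updated

# Hypothetical synthetic sentences, standing in for ChatGPT-generated debiasing data.
synthetic_sentences = [
    "The nurse reviewed his patient notes before the night shift.",
    "The engineer presented her design to the board.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for epoch in range(3):
    for text in synthetic_sentences:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])  # standard LM loss on synthetic data
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Because only the adapter weights receive gradients, the retraining cost stays small and the pre-trained model's internal knowledge is largely preserved, which is the trade-off the abstract emphasizes.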
2020
SenseBERT: Driving Some Sense into BERT
Yoav Levine | Barak Lenz | Or Dagan | Ori Ram | Dan Padnos | Or Sharir | Shai Shalev-Shwartz | Amnon Shashua | Yoav Shoham
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The ability to learn from large unlabeled corpora has allowed neural language models to advance the frontier in natural language understanding. However, existing self-supervision techniques operate at the word-form level, which serves as a surrogate for the underlying semantic content. This paper proposes a method to employ weak supervision directly at the word-sense level. Our model, named SenseBERT, is pre-trained to predict not only the masked words but also their WordNet supersenses. Accordingly, we attain a lexical-semantic level language model, without the use of human annotation. SenseBERT achieves significantly improved lexical understanding, as we demonstrate by experimenting on SemEval Word Sense Disambiguation, and by attaining a state-of-the-art result on the 'Word in Context' task.
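A toy sketch of the joint training signal the abstract describes: alongside standard masked-word prediction, a second head predicts the masked token's WordNet supersense. The hidden size, vocabulary size, supersense count, and equal loss weighting are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class MaskedWordAndSupersenseHeads(nn.Module):
    def __init__(self, hidden_size=768, vocab_size=30522, num_supersenses=45):
        super().__init__()
        self.word_head = nn.Linear(hidden_size, vocab_size)        # standard MLM prediction head
        self.sense_head = nn.Linear(hidden_size, num_supersenses)  # weakly supervised supersense head

    def forward(self, hidden_states, word_labels, sense_labels):
        # hidden_states: (batch, seq_len, hidden_size) from any BERT-style encoder.
        word_logits = self.word_head(hidden_states)
        sense_logits = self.sense_head(hidden_states)
        ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 marks positions without a label
        word_loss = ce(word_logits.view(-1, word_logits.size(-1)), word_labels.view(-1))
        sense_loss = ce(sense_logits.view(-1, sense_logits.size(-1)), sense_labels.view(-1))
        return word_loss + sense_loss  # equal weighting is an assumption

# Usage with random tensors in place of real encoder outputs and labels.
heads = MaskedWordAndSupersenseHeads()
hidden = torch.randn(2, 16, 768)
word_labels = torch.full((2, 16), -100); word_labels[:, 3] = 42   # one masked position per sequence
sense_labels = torch.full((2, 16), -100); sense_labels[:, 3] = 7  # its (hypothetical) supersense id
loss = heads(hidden, word_labels, sense_labels)
loss.backward()
```

The supersense labels can be derived automatically from WordNet rather than human annotation, which is what lets the model learn a lexical-semantic signal at pre-training time.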