Ethos: Rectifying Language Models in Orthogonal Parameter Space

Lei Gao, Yue Niu, Tingting Tang, Salman Avestimehr, Murali Annavaram


Abstract
Language models (LMs) have greatly propelled research in natural language processing. However, LMs also raise concerns about generating biased or toxic content and about disclosing private information from the training dataset. In this work, we present Ethos, a new, efficient approach that rectifies LMs to mitigate toxicity and bias in their outputs and to avoid privacy leakage. Ethos is built on task arithmetic. However, unlike current task arithmetic algorithms, Ethos distinguishes generally beneficial knowledge from undesired knowledge when reconstructing task vectors. Specifically, Ethos first obtains a set of principal components from the pre-trained model using singular value decomposition. Then, by projecting the task vector onto these principal components, Ethos separates the components that encode general knowledge from those associated with undesired knowledge. Ethos performs forgetting or unlearning by negating only the part of the task vector that carries undesired knowledge, thereby minimizing collateral damage to general model utility. We demonstrate the efficacy of our approach on three tasks: bias, toxicity, and memorization unlearning. Evaluations show that Ethos removes undesired knowledge more effectively than current task arithmetic methods while maintaining overall model performance.
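
The abstract outlines a three-step procedure: compute a task vector, express it in the SVD basis of the pre-trained weights, and negate only the components judged undesired. Below is a minimal NumPy sketch of that idea for a single weight matrix. The function name ethos_rectify, the magnitude-threshold split (tau), and the scaling factor (lam) are illustrative assumptions for this sketch; the paper's actual separation rule and per-layer handling may differ.

import numpy as np

def ethos_rectify(W_pre, W_ft, tau=0.05, lam=1.0):
    """Illustrative sketch of the Ethos idea (not the authors' code).

    W_pre: pre-trained weight matrix
    W_ft:  the same matrix after fine-tuning on data exhibiting the
           undesired behavior (toxic, biased, or memorized content)
    tau:   relative threshold separating general from undesired components
    lam:   scaling factor applied when negating the undesired part
    """
    # Step 1: task vector -- what fine-tuning on the undesired data added.
    delta = W_ft - W_pre

    # Step 2: principal components of the pre-trained weights via SVD.
    U, S, Vt = np.linalg.svd(W_pre, full_matrices=False)

    # Step 3: express the task vector in the orthogonal basis (U, V).
    coeffs = U.T @ delta @ Vt.T

    # Step 4: split components. As a stand-in heuristic, coefficients that
    # align strongly with the pre-trained basis are treated as general
    # knowledge and kept; the remainder is treated as undesired.
    general = np.abs(coeffs) >= tau * np.abs(coeffs).max()
    delta_undesired = U @ np.where(general, 0.0, coeffs) @ Vt

    # Step 5: negate only the undesired part of the task vector.
    return W_pre - lam * delta_undesired

# Toy usage with random matrices (a full model would apply this per layer).
rng = np.random.default_rng(0)
W_pre = rng.normal(size=(8, 8))
W_ft = W_pre + 0.1 * rng.normal(size=(8, 8))
W_rectified = ethos_rectify(W_pre, W_ft)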
Anthology ID: 2024.findings-naacl.132
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2054–2068
URL: https://aclanthology.org/2024.findings-naacl.132
DOI: 10.18653/v1/2024.findings-naacl.132
Cite (ACL): Lei Gao, Yue Niu, Tingting Tang, Salman Avestimehr, and Murali Annavaram. 2024. Ethos: Rectifying Language Models in Orthogonal Parameter Space. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2054–2068, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Ethos: Rectifying Language Models in Orthogonal Parameter Space (Gao et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.132.pdf