Yaxin Li
2024
Exploring Memorization in Fine-tuned Language Models
Shenglai Zeng | Yaxin Li | Jie Ren | Yiding Liu | Han Xu | Pengfei He | Yue Xing | Shuaiqiang Wang | Jiliang Tang | Dawei Yin
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) have shown great capabilities in various tasks but have also exhibited memorization of training data, raising serious privacy and copyright concerns. While prior works have studied memorization during pre-training, the exploration of memorization during fine-tuning is rather limited. Compared to pre-training, fine-tuning typically involves more sensitive data and diverse objectives, and thus may bring distinct privacy risks and unique memorization behaviors. In this work, we conduct the first comprehensive analysis to explore language models’ (LMs) memorization during fine-tuning across tasks. Our studies with open-source and our own fine-tuned LMs across various tasks indicate that memorization presents a strong disparity among different fine-tuning tasks. We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
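A common way to operationalize the kind of memorization the abstract studies is prefix-prompted exact-match extraction: prompt the model with the first k tokens of a training example and check whether its greedy continuation reproduces the rest verbatim. The sketch below is illustrative only (the function names and the toy "model" are hypothetical, not the paper's implementation).

```python
# Hedged sketch of verbatim-memorization testing: a sequence counts as
# memorized if the model's greedy continuation of its prefix exactly
# matches the held-out suffix. All names here are illustrative.

def is_memorized(prefix_tokens, suffix_tokens, generate):
    """generate(prefix) -> the model's greedy continuation (list of tokens)."""
    continuation = generate(prefix_tokens)
    return continuation[: len(suffix_tokens)] == suffix_tokens

# Toy stand-in "model" that has memorized exactly one training sequence.
TRAINING_EXAMPLE = ["the", "patient", "was", "diagnosed", "with", "X"]

def toy_generate(prefix):
    if prefix == TRAINING_EXAMPLE[:3]:
        return TRAINING_EXAMPLE[3:]   # regurgitates the training suffix
    return ["unrelated", "tokens", "here"]

# The memorized example is flagged; a fresh prefix is not.
print(is_memorized(TRAINING_EXAMPLE[:3], TRAINING_EXAMPLE[3:], toy_generate))
print(is_memorized(["a", "novel", "prompt"], ["some", "suffix"], toy_generate))
```

Sweeping the prefix length k then gives a memorization-versus-context curve, which is how task-level disparities like those described above can be compared.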
2023
Document-level Relationship Extraction by Bidirectional Constraints of Beta Rules
Yichun Liu | Zizhong Zhu | Xiaowang Zhang | Zhiyong Feng | Daoqi Chen | Yaxin Li
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Document-level Relation Extraction (DocRE) aims to extract relations among entity pairs in documents. Some works introduce logic constraints into DocRE, addressing the issues of opacity and weak logic in original DocRE models. However, they only focus on forward logic constraints, and the rules mined in these works often suffer from pseudo rules with high standard-confidence but low support. In this paper, we propose Bidirectional Constraints of Beta Rules (BCBR), a novel logic constraint framework. BCBR first introduces a new rule miner which models rules by beta distribution. Then forward and reverse logic constraints are constructed based on beta rules. Finally, BCBR reconstructs the rule consistency loss by bidirectional constraints to regulate the output of the DocRE model. Experiments show that BCBR outperforms original DocRE models in terms of relation extraction performance (~2.7 F1 score) and logical consistency (~3.1 logic score). Furthermore, BCBR consistently outperforms two other logic constraint frameworks.
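The "pseudo rule" problem the abstract mentions — rules with perfect confidence but almost no support — is exactly what a beta-distribution treatment of rule confidence exposes: the posterior mean captures confidence while the variance shrinks only as evidence accumulates. The sketch below is a minimal illustration of that idea under a uniform prior, not BCBR's actual rule miner.

```python
# Hedged sketch: score a mined rule's confidence as a Beta posterior over its
# (support, counterexample) counts, so confident-but-thinly-supported pseudo
# rules can be told apart from well-supported ones. Illustrative only.

def beta_rule(support, counterexamples):
    # Beta(alpha, beta) posterior with a uniform Beta(1, 1) prior.
    a = support + 1
    b = counterexamples + 1
    mean = a / (a + b)                               # expected confidence
    var = a * b / ((a + b) ** 2 * (a + b + 1))       # shrinks with evidence
    return mean, var

# A pseudo rule: 100% observed confidence but only 2 supporting instances.
print(beta_rule(2, 0))     # high mean, large variance -> unreliable
# A well-supported rule: slightly lower confidence, far lower variance.
print(beta_rule(200, 10))  # high mean, small variance -> keep
```

Thresholding on both the mean and the variance (rather than raw confidence alone) is one simple way to filter the high-confidence, low-support rules the abstract describes.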