Bryan Kian Hsiang Low
2024
Waterfall: Scalable Framework for Robust Text Watermarking and Provenance for LLMs
Gregory Kang Ruey Lau | Xinyuan Niu | Hieu Dao | Jiangwei Chen | Chuan-Sheng Foo | Bryan Kian Hsiang Low
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Protecting intellectual property (IP) of text such as articles and code is increasingly important, especially as sophisticated attacks become possible, such as paraphrasing by large language models (LLMs) or even unauthorized training of LLMs on copyrighted text to infringe such IP. However, existing text watermarking methods are neither robust enough against such attacks nor scalable to millions of users for practical implementation. In this paper, we propose Waterfall, the first training-free framework for robust and scalable text watermarking applicable across multiple text types (e.g., articles, code) and languages supportable by LLMs, for general text and LLM data provenance. Waterfall comprises several key innovations, such as being the first to use LLMs as paraphrasers for watermarking, along with a novel combination of techniques that are surprisingly effective in achieving robust verifiability and scalability. We empirically demonstrate that Waterfall achieves significantly better scalability, robust verifiability, and computational efficiency compared to SOTA article-text watermarking methods, and also show how it can be directly applied to watermarking code.
Position Paper: Data-Centric AI in the Age of Large Language Models
Xinyi Xu | Zhaoxuan Wu | Rui Qiao | Arun Verma | Yao Shu | Jingtan Wang | Xinyuan Niu | Zhenfeng He | Jiangwei Chen | Zijian Zhou | Gregory Kang Ruey Lau | Hieu Dao | Lucas Agussurja | Rachael Hwee Ling Sim | Xiaoqiang Lin | Wenyang Hu | Zhongxiang Dai | Pang Wei Koh | Bryan Kian Hsiang Low
Findings of the Association for Computational Linguistics: EMNLP 2024
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making a key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs, and advocate that data-centric research should receive more attention from the community. We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization. In each scenario, we underscore the importance of data, highlight promising research directions, and articulate the potential impacts on the research community and, where applicable, society as a whole. For instance, we advocate for a suite of data-centric benchmarks tailored to the scale and complexity of data for LLMs. These benchmarks can be used to develop new data curation methods and document research efforts and results, which can help promote openness and transparency in AI and LLM research.