Frederic Sala
2024
Look Who’s Talking Now: Covert Channels From Biased LLMs
Daniel Silva | Frederic Sala | Ryan Gabrys
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language model-based steganography encodes hidden messages into model-generated tokens. The key tradeoff is between how much hidden information can be introduced and how much the model can be perturbed. To address this tradeoff, we show how to adapt strategies previously used for LLM watermarking to encode large amounts of information. We tackle the practical (but difficult) setting where we do not have access to the full model when trying to recover the hidden information. Theoretically, we study the fundamental limits on how much steganographic information can be inserted into LLM-created outputs. We provide practical encoding schemes and present experimental results showing that our proposed strategies are nearly optimal.
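The abstract gives no code, but a minimal sketch of the watermarking-style idea it builds on, biasing next-token logits toward a keyed half of the vocabulary to embed one hidden bit per token, might look like the following. The function names, the half/half vocabulary partition, and the fixed bias strength are illustrative assumptions, not the authors' actual encoding scheme; decoding only needs the shared key and the generated tokens, not the model.

```python
import hashlib
import numpy as np

def keyed_partition(vocab_size: int, key: str, position: int) -> np.ndarray:
    """Pseudorandomly split the vocabulary into two halves using a shared key.

    Both encoder and decoder can reproduce this split without model access.
    """
    seed = int.from_bytes(hashlib.sha256(f"{key}:{position}".encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    perm = rng.permutation(vocab_size)
    mask = np.zeros(vocab_size, dtype=bool)
    mask[perm[: vocab_size // 2]] = True  # True marks the "bit = 1" half
    return mask

def encode_bit(logits: np.ndarray, bit: int, mask: np.ndarray, bias: float = 2.0) -> int:
    """Bias the logits toward the half of the vocabulary that encodes `bit`, then sample."""
    biased = logits + bias * np.where(mask == bool(bit), 1.0, 0.0)
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(len(probs), p=probs))

def decode_bit(token_id: int, mask: np.ndarray) -> int:
    """Recover the hidden bit from which half of the vocabulary the token fell into."""
    return int(mask[token_id])

# Toy usage with random "logits" standing in for a real LLM's next-token scores.
vocab_size, key = 1000, "shared-secret"
mask = keyed_partition(vocab_size, key, position=0)
logits = np.random.default_rng(0).normal(size=vocab_size)
token = encode_bit(logits, bit=1, mask=mask)
print(decode_bit(token, mask))  # usually 1; a larger bias decodes more reliably but perturbs the model more
```

The `bias` parameter is one concrete face of the tradeoff described above: increasing it raises the decoding rate while pushing the sampled distribution further from the unperturbed model.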
2023
The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models
Satya Sai Srinath Namburi | Makesh Sreedhar | Srinath Srinivasan | Frederic Sala
Findings of the Association for Computational Linguistics: EMNLP 2023
Compressing large language models (LLMs), often consisting of billions of parameters, provides faster inference, smaller memory footprints, and enables local deployment. The standard compression techniques are pruning and quantization, with the former eliminating redundant connections in model layers and the latter representing model parameters with as little as 4 bits. The key tradeoff is between the degree of compression and the impact on the quality of the compressed model. Existing research on LLM compression primarily focuses on performance in terms of general metrics like perplexity or downstream task accuracy. More fine-grained metrics, such as those measuring parametric knowledge, remain significantly underexplored. To help bridge this gap, we present a comprehensive analysis across multiple model families using the LAMA and LM-Harness benchmarks in order to systematically quantify the effect of commonly employed compression techniques on model performance. A particular focus is on tradeoffs involving parametric knowledge, with the goal of providing practitioners with practical insights to make informed decisions on compression.
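For a rough illustration of the quantization side of the tradeoff discussed above, here is a minimal sketch of symmetric 4-bit weight quantization in NumPy. The single per-tensor scale is a simplifying assumption; practical quantizers typically work per-channel or per-group, use calibration data, and pack two 4-bit values per byte.

```python
import numpy as np

def quantize_4bit(weights: np.ndarray):
    """Symmetric 4-bit quantization: map float weights onto 16 integer levels in [-8, 7]."""
    scale = np.abs(weights).max() / 7.0  # one scale for the whole tensor (simplification)
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)  # stored in int8 here for clarity
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from the quantized integers."""
    return q.astype(np.float32) * scale

# Toy example: the reconstruction error introduced here is what can erode
# fine-grained behavior (such as parametric knowledge) even when aggregate
# metrics like perplexity barely move.
w = np.random.default_rng(0).normal(scale=0.02, size=(4, 8)).astype(np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize(q, s)
print("mean abs error:", np.abs(w - w_hat).mean())
```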
2020
Low-Dimensional Hyperbolic Knowledge Graph Embeddings
Ines Chami | Adva Wolf | Da-Cheng Juan | Frederic Sala | Sujith Ravi | Christopher Ré
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Knowledge graph (KG) embeddings learn low-dimensional representations of entities and relations to predict missing facts. KGs often exhibit hierarchical and logical patterns which must be preserved in the embedding space. For hierarchical data, hyperbolic embedding methods have shown promise for high-fidelity and parsimonious representations. However, existing hyperbolic embedding methods do not account for the rich logical patterns in KGs. In this work, we introduce a class of hyperbolic KG embedding models that simultaneously capture hierarchical and logical patterns. Our approach combines hyperbolic reflections and rotations with attention to model complex relational patterns. Experimental results on standard KG benchmarks show that our method improves over previous Euclidean- and hyperbolic-based efforts by up to 6.1% in mean reciprocal rank (MRR) in low dimensions. Furthermore, we observe that different geometric transformations capture different types of relations while attention-based transformations generalize to multiple relations. In high dimensions, our approach yields new state-of-the-art MRRs of 49.6% on WN18RR and 57.7% on YAGO3-10.
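The abstract omits formulas, so as a rough sketch of the ingredients it names, the snippet below computes distances in the Poincaré ball and scores a (head, relation, tail) triple by rotating the head embedding with a relation-specific angle. The 2D dimensionality, the scoring form, and the function names are illustrative assumptions, not the paper's exact model, which also uses reflections, attention, and learned curvatures.

```python
import numpy as np

def poincare_distance(x: np.ndarray, y: np.ndarray, c: float = 1.0) -> float:
    """Geodesic distance in the Poincare ball of curvature -c (points must have norm < 1/sqrt(c))."""
    diff2 = np.sum((x - y) ** 2)
    denom = (1 - c * np.sum(x ** 2)) * (1 - c * np.sum(y ** 2))
    return float(np.arccosh(1 + 2 * c * diff2 / denom)) / np.sqrt(c)

def rotate_2d(x: np.ndarray, theta: float) -> np.ndarray:
    """A 2D rotation: one of the relation-specific isometries such models learn per relation."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x

def score(head: np.ndarray, tail: np.ndarray, theta: float) -> float:
    """Higher score means a more plausible triple: negative distance after transforming the head."""
    return -poincare_distance(rotate_2d(head, theta), tail)

# Toy usage: embeddings must stay inside the unit ball; rotations preserve the norm, so they do.
h = np.array([0.10, 0.20])
t = np.array([0.18, 0.15])
print(score(h, t, theta=0.3))
```

Because rotations and reflections are hyperbolic isometries about the origin, composing them with attention lets one shared low-dimensional space encode several relation types at once, which is the intuition behind the low-dimensional gains reported above.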