Ayoub Hammal
2026
KAD: A Framework for Proxy-based Test-time Alignment with Knapsack Approximation Deferral
Ayoub Hammal | Pierre Zweigenbaum | Caio Corro
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Several previous works have concluded that most of the generation capabilities of large language models (LLMs) are learned early during pre-training. However, LLMs still require further alignment to adhere to downstream task requirements and stylistic preferences, among other desired properties. As LLMs continue to scale in size, the computational cost of alignment procedures increases prohibitively. In this work, we propose a novel approach to circumvent these costs via proxy-based test-time alignment, i.e., using guidance from a small aligned model. Our approach can be described as a token-specific cascading method, in which the token-specific deferral rule is reduced to a 0-1 knapsack problem. In this setting, we derive primal and dual approximations of the optimal deferral decision. We experimentally show the benefits of our method in both task performance and speculative decoding speed.
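The abstract reduces the deferral rule to a 0-1 knapsack problem: pick which token positions to defer to the large model so as to maximize expected benefit under a compute budget. As a generic illustration (not the paper's KAD algorithm or its primal/dual approximations), a minimal sketch of that selection via the standard knapsack dynamic program, with hypothetical `values`, `costs`, and `budget` inputs:

```python
def knapsack_defer(values, costs, budget):
    """0-1 knapsack DP: choose which token positions to defer,
    maximizing total value (expected alignment gain) subject to
    sum of integer costs <= budget. Returns (chosen indices, value)."""
    n = len(values)
    dp = [0.0] * (budget + 1)           # dp[c] = best value at capacity c
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        # iterate capacities downward so each item is used at most once
        for c in range(budget, costs[i] - 1, -1):
            cand = dp[c - costs[i]] + values[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i][c] = True
    # backtrack to recover the selected positions
    chosen, c = [], budget
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= costs[i]
    return sorted(chosen), dp[budget]
```

For example, with `values=[6, 10, 12]`, `costs=[1, 2, 3]`, and `budget=5`, the optimum defers positions 1 and 2 for a total value of 22.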
2025
Few-shot domain adaptation for named-entity recognition via joint constrained k-means and subspace selection
Ayoub Hammal | Benno Uthayasooriyar | Caio Corro
Proceedings of the 31st International Conference on Computational Linguistics
Named-entity recognition (NER) is a task that typically requires large annotated datasets, which limits its applicability across domains with varying entity definitions. This paper addresses few-shot NER, aiming to transfer knowledge to new domains with minimal supervision. Unlike previous approaches that rely solely on limited annotated data, we propose a weakly-supervised algorithm that combines small labeled datasets with large amounts of unlabeled data. Our method extends the k-means algorithm with label supervision, cluster size constraints, and domain-specific discriminative subspace selection. This unified framework achieves state-of-the-art results in few-shot NER, demonstrating its effectiveness in leveraging unlabeled data and adapting to domain-specific challenges.
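The core idea of extending k-means with label supervision can be illustrated with a seeded variant: points whose label is known stay fixed to their label's cluster, while unlabeled points are reassigned to the nearest centroid each iteration. This is a toy sketch of the supervision component only (the paper's cluster-size constraints and subspace selection are omitted); the function name and inputs are illustrative assumptions:

```python
import random

def constrained_kmeans(points, labels, k, iters=20, seed=0):
    """Label-supervised k-means sketch: labels[i] is a cluster id
    for labeled points and None for unlabeled ones. Labeled points
    never change cluster; only unlabeled points are reassigned."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # start from the supervision; unlabeled points get random clusters
    assign = [l if l is not None else rng.randrange(k) for l in labels]
    for _ in range(iters):
        # centroid update: mean of the points currently in each cluster
        centroids = []
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids.append([sum(col) / len(members)
                                  for col in zip(*members)])
            else:
                centroids.append(list(rng.choice(points)))
        # assignment update: only unlabeled points may move
        for i, p in enumerate(points):
            if labels[i] is None:
                assign[i] = min(range(k),
                                key=lambda c: dist2(p, centroids[c]))
    return assign
```

With two well-separated groups and one labeled point per group, the labeled points anchor the clusters and the unlabeled points follow, which is the weak-supervision effect the abstract describes.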