Changshun Wu
2026
What Matters to an LLM? Behavioral and Computational Evidences from Summarization
Yongxin Zhou | Changshun Wu | Philippe Mulhem | Didier Schwab | Maxime Peyrard
Findings of the Association for Computational Linguistics: EACL 2026
Large Language Models (LLMs) are now state-of-the-art at summarization, yet the internal notion of importance that drives their information selection remains hidden. We propose to investigate this by combining behavioral and computational analyses. Behaviorally, we generate a series of length-controlled summaries for each document and derive empirical importance distributions based on how often each information unit is selected. These reveal that LLMs converge on consistent importance patterns, sharply different from pre-LLM baselines, and that LLMs cluster more by family than by size. Computationally, we identify that certain attention heads align well with empirical importance distributions, and that middle-to-late layers are strongly predictive of importance. Together, these results provide initial insights into *what* LLMs prioritize in summarization and *how* this priority is internally represented, opening a path toward interpreting and ultimately controlling information selection in these models.
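The behavioral step above can be sketched as a small counting routine: given several length-controlled summaries of one document, count how often each information unit is selected and normalize the counts into an empirical importance distribution. This is a minimal illustrative sketch, not the paper's implementation; in particular, `importance_distribution` is a hypothetical helper, and identifying/aligning information units across summaries is simplified away by representing each summary as a set of unit labels.

```python
from collections import Counter


def importance_distribution(summaries: list[set[str]]) -> dict[str, float]:
    """Empirical importance over information units, estimated from how
    often each unit is selected across length-controlled summaries.

    summaries : one set of selected unit labels per generated summary
    Returns a distribution (values sum to 1) over all observed units.
    """
    # Count selections of each unit across all summaries.
    counts = Counter(unit for summary in summaries for unit in summary)
    total = sum(counts.values())
    # Normalize selection counts into a probability distribution.
    return {unit: c / total for unit, c in counts.items()}
```

For example, if unit `"a"` appears in all three summaries and `"b"` and `"c"` in one each, `"a"` receives the largest share of the distribution; comparing such distributions across models is what lets one cluster LLMs by family.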
2025
Randomized Smoothing Meets Vision-Language Models
Emmanouil Seferis | Changshun Wu | Stefanos Kollias | Saddek Bensalem | Chih-Hong Cheng
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Randomized smoothing (RS) is one of the prominent techniques to ensure the correctness of machine learning models, where point-wise robustness certificates can be derived analytically. While RS is well understood for classification, its application to generative models is unclear, since their outputs are sequences rather than labels. We resolve this by connecting generative outputs to an oracle classification task and showing that RS can still be enabled: the final response can be classified as a discrete action (e.g., service-robot commands in VLAs), as harmful vs. harmless (content moderation or toxicity detection in VLMs), or clustered by an oracle into semantically equivalent answer groups. Provided that the error rate of the oracle classifier is bounded, we develop the theory that associates the number of samples with the corresponding robustness radius. We further derive improved scaling laws analytically relating the certified radius and accuracy to the number of samples, showing that the earlier result, that 2 to 3 orders of magnitude fewer samples suffice with minimal loss, remains valid even under weaker assumptions. Together, these advances make robustness certification both well-defined and computationally feasible for state-of-the-art VLMs, as validated against recent jailbreak-style adversarial attacks.