Xuefei Ning
2026
How Quantization Shapes Bias in Large Language Models
Federico Marcuzzi | Xuefei Ning | Roy Schwartz | Iryna Gurevych
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
This work presents a comprehensive evaluation of how quantization affects model bias, with particular attention to its impact on individual demographic subgroups. We focus on weight and activation quantization strategies and examine their effects across a broad range of bias types, including stereotypes, fairness, toxicity, and sentiment. We employ both probability- and generated text-based metrics across 13 benchmarks and evaluate models that differ in architecture family and reasoning ability. Our findings show that quantization has a nuanced impact on bias: while it can reduce model toxicity and does not significantly impact sentiment, it tends to slightly increase stereotypes and unfairness in generative tasks, especially under aggressive compression. These trends are generally consistent across demographic categories, subgroups, and model types, although their magnitude depends on the specific setting. Overall, our results highlight the importance of carefully balancing efficiency and ethical considerations when applying quantization in practice.
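To make the weight-quantization setting studied above concrete, here is a minimal illustrative sketch of symmetric per-tensor int8 weight quantization in NumPy. This is not the paper's method, just a generic example of the kind of compression being evaluated; the function names and the toy weight values are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    Maps the largest-magnitude weight to +/-127 and rounds the rest
    onto the int8 grid; real quantizers add calibration, per-channel
    scales, clipping strategies, etc.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values."""
    return q.astype(np.float32) * scale

# Toy weight vector (hypothetical values)
w = np.array([0.5, -1.27, 0.01], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding error is bounded by half the quantization step (scale / 2)
```

The reconstruction error per weight is at most `scale / 2`, which is why aggressive compression (fewer bits, larger steps) can perturb model behavior, including the bias metrics the paper measures.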
2025
Efficient Inference for Large Language Models – Algorithm, Model, and System
Xuefei Ning | Guohao Dai | Haoli Bai | Lu Hou | Yu Wang | Qun Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
The inference of LLMs incurs high computational costs, memory access overhead, and memory usage, leading to inefficiencies in terms of latency, throughput, power consumption, and storage. To this end, this tutorial focuses on the increasingly important topic of Efficient Inference for LLMs and aims to provide a systematic understanding of key facts and methodologies from a designer’s perspective. We start by introducing the basic concepts of modern LLMs, software and hardware. Following this, we define the efficiency optimization problem. To equip the audience with a designer’s mindset, we briefly explain how to diagnose efficiency bottlenecks for a given workload on specific hardware. After introducing the basics, we will introduce our full-stack taxonomy of efficient inference methods for LLMs. We will walk through each category of methodology, using one to three representative methods as examples for each leaf subcategory, elaborating on the design logic behind each method and which inefficiency factors they primarily address. Finally, we will wrap up with a takeaway summary, and future research directions. The tutorial website is https://haolibai.github.io/emnlp-2025-tutorial-efficiency/.
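The abstract mentions diagnosing efficiency bottlenecks for a given workload on specific hardware. A common first step is computing arithmetic intensity (FLOPs per byte moved) and comparing it to the hardware's compute-to-bandwidth ratio, as in a roofline analysis. The sketch below is a hypothetical illustration, not the tutorial's methodology; the matrix size and fp16 assumption are made up for the example.

```python
def arithmetic_intensity(flops, bytes_moved):
    """FLOPs per byte of memory traffic; low values indicate a
    memory-bound workload on most accelerators."""
    return flops / bytes_moved

# Hypothetical decode-phase GEMV: y = W @ x with W of shape (d, d) in fp16.
d = 4096
flops = 2 * d * d        # one multiply + one add per weight element
bytes_moved = 2 * d * d  # reading W once in fp16 (2 bytes/element) dominates
ai = arithmetic_intensity(flops, bytes_moved)  # 1.0 FLOP/byte
```

Since GPUs typically sustain hundreds of FLOPs per byte of bandwidth, an intensity near 1 signals that single-token decoding is memory-bound, which is why techniques like quantization and batching (covered in the tutorial's taxonomy) target memory traffic.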