Rui Miao
2025
Understanding the Information Propagation Effects of Communication Topologies in LLM-based Multi-Agent Systems
Xu Shen | Yixin Liu | Yiwei Dai | Yili Wang | Rui Miao | Yue Tan | Shirui Pan | Xin Wang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The communication topology in large language model-based multi-agent systems fundamentally governs inter-agent collaboration patterns, critically shaping both the efficiency and effectiveness of collective decision-making. While recent studies on automated communication topology design tend to construct sparse structures for efficiency, they often overlook why and when sparse and dense topologies help or hinder collaboration. In this paper, we present a causal framework to analyze how agent outputs, whether correct or erroneous, propagate under topologies with varying sparsity. Our empirical studies reveal that moderately sparse topologies, which effectively suppress error propagation while preserving beneficial information diffusion, typically achieve optimal task performance. Guided by this insight, we propose a novel topology design approach, EIB-Learner, that balances error suppression and beneficial information propagation by fusing connectivity patterns from both dense and sparse graphs. Extensive experiments demonstrate the superiority of EIB-Learner in effectiveness, communication cost, and robustness.
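The abstract's core idea, fusing connectivity patterns from a dense and a sparse graph into a moderately sparse topology, can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the actual EIB-Learner procedure: the mixing weight `alpha`, the quantile thresholding, and the toy graphs are all assumptions introduced here.

```python
import numpy as np

def fuse_topologies(dense_adj: np.ndarray, sparse_adj: np.ndarray,
                    alpha: float = 0.5, keep_ratio: float = 0.5) -> np.ndarray:
    """Illustrative fusion: convex combination of two adjacency matrices,
    then keep only the strongest edges, yielding a moderately sparse
    topology that trades off information diffusion against error spread."""
    fused = alpha * dense_adj + (1.0 - alpha) * sparse_adj
    np.fill_diagonal(fused, 0.0)  # no self-communication edges
    # Retain the top `keep_ratio` fraction of positive edge weights.
    threshold = np.quantile(fused[fused > 0], 1.0 - keep_ratio)
    return (fused >= threshold).astype(float)

# Toy example with 4 agents: a fully connected graph vs. a chain.
dense = np.ones((4, 4)) - np.eye(4)
chain = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
topology = fuse_topologies(dense, chain)
```

Edges present in both input graphs receive the highest fused weight, so the thresholding step naturally prefers connections that the dense and sparse views agree on.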
2020
Revisiting Representation Degeneration Problem in Language Modeling
Zhong Zhang | Chongming Gao | Cong Xu | Rui Miao | Qinli Yang | Junming Shao
Findings of the Association for Computational Linguistics: EMNLP 2020
Weight tying is now a common setting in many language generation tasks such as language modeling and machine translation. However, a recent study reveals a potential flaw in weight tying: the learned word embeddings are likely to degenerate and lie in a narrow cone when training a language model. The authors call this the representation degeneration problem and propose a cosine regularization to solve it. Nevertheless, we prove that the cosine regularization is insufficient, as the degeneration can still occur under certain conditions. In this paper, we revisit the representation degeneration problem and theoretically analyze the limitations of the previously proposed solution. We then propose an alternative regularization method, Laplacian regularization, to tackle the problem. Experiments on language modeling demonstrate the effectiveness of the proposed Laplacian regularization.
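The two regularizers contrasted in the abstract can be sketched concretely. Below, `cosine_regularizer` penalizes the mean pairwise cosine similarity of embedding rows, while `laplacian_regularizer` computes tr(EᵀLE) with L = D − W for a similarity graph W, which equals a weighted sum of squared distances between connected embeddings. The exact graph construction and loss weighting used in the paper are not reproduced here; this is a minimal sketch of the two penalty forms.

```python
import numpy as np

def cosine_regularizer(E: np.ndarray) -> float:
    """Mean pairwise cosine similarity of the rows of embedding matrix E
    (excluding self-similarity); minimizing it pushes embeddings apart."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    S = En @ En.T
    n = E.shape[0]
    return float((S.sum() - n) / (n * (n - 1)))

def laplacian_regularizer(E: np.ndarray, W: np.ndarray) -> float:
    """Graph-Laplacian penalty tr(E^T L E), L = D - W. For symmetric W
    this equals 0.5 * sum_ij W_ij * ||E_i - E_j||^2, so it controls how
    embeddings of related words are distributed relative to each other."""
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(E.T @ L @ E))
```

As a sanity check, for two embeddings at distance 1 connected by a unit-weight edge, the Laplacian penalty is 0.5 · (1 + 1) · 1² = 1, matching the trace formula.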