Zhuoyan Xu
2026
Efficient Table Retrieval and Understanding with Multimodal Large Language Models
Zhuoyan Xu | Haoyang Fang | Boran Han | Bonan Min | Bernie Wang | Cuixiong Hu | Shuai Zhang
Findings of the Association for Computational Linguistics: EACL 2026
Tabular data is frequently captured in image form across a wide range of real-world scenarios such as financial reports, handwritten records, and document scans. These visual representations pose unique challenges for machine understanding, as they combine both structural and visual complexities. While recent advances in Multimodal Large Language Models (MLLMs) show promising results in table understanding, they typically assume the relevant table is readily available. However, a more practical scenario involves identifying and reasoning over relevant tables from large-scale collections to answer user queries. To address this gap, we propose a framework that enables MLLMs to answer queries over large collections of table images. Our approach first retrieves candidate tables using jointly trained visual-text foundation models, then leverages MLLMs to perform fine-grained reranking of these candidates, and finally employs MLLMs to reason over the selected tables for answer generation. Through extensive experiments on a newly constructed dataset comprising 88,161 training and 9,819 testing samples across 8 benchmarks with 48,504 unique tables, we demonstrate that our framework significantly outperforms existing methods by 7.0% in retrieval recall and 6.1% in answer accuracy, offering a practical solution for real-world table understanding tasks.
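The three-stage pipeline the abstract describes (coarse retrieval, MLLM reranking, MLLM answer generation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `embed`, `rerank`, and `generate` are hypothetical callables standing in for the jointly trained visual-text retriever and the MLLM stages.

```python
from dataclasses import dataclass

@dataclass
class TableImage:
    table_id: str
    image_path: str

def answer_query(query, tables, embed, rerank, generate, k=20, m=3):
    """Sketch of the retrieve -> rerank -> answer pipeline.

    1) Coarse retrieval: score every table with a cheap joint
       visual-text similarity and keep the top-k candidates.
    2) Fine-grained reranking: rescore the candidates with a
       (more expensive) MLLM-based reranker, keep the top-m.
    3) Answer generation: let the MLLM reason over the selected
       tables and produce the final answer.
    """
    candidates = sorted(tables, key=lambda t: -embed(query, t))[:k]
    selected = sorted(candidates, key=lambda t: -rerank(query, t))[:m]
    return generate(query, selected)
```

The point of the two-stage scoring is cost: the cheap embedding similarity prunes the large collection so the expensive MLLM reranker only sees k candidates.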
2025
Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers
Yingyu Liang | Heshan Liu | Zhenmei Shi | Zhao Song | Zhuoyan Xu | Jiale Zhao | Zhen Zhuang
Findings of the Association for Computational Linguistics: EMNLP 2025
The self-attention mechanism is key to the success of transformers in recent large language models (LLMs). However, the quadratic computational cost, O(n^2), with respect to the input sequence length n poses a significant obstacle to further improvement and scalability in longer contexts. In this work, we leverage the convolution-like structure of attention matrices to develop an efficient approximation method for attention computation using convolution matrices. We propose a conv basis system, analogous to the rank basis, and show that any lower triangular matrix can be decomposed as a sum of structured convolution matrices in this basis. We then design a fast algorithm to approximate the attention matrix using a sum of k convolution matrices. This enables us to compute attention during inference via Fast Fourier Transforms (FFT) in O(knd log n) time, where d is the hidden dimension, achieving nearly linear time complexity, n^{1+o(1)}, in practical scenarios where kd = n^{o(1)}. Furthermore, both training forward and backward gradient computations can be performed in n^{1+o(1)} time as well. We provide theoretical guarantees on runtime and approximation error and conduct preliminary experiments to evaluate the effectiveness of our approach. We hope this new paradigm for accelerating attention computation in transformer models facilitates their application to longer contexts.
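The key primitive behind the claimed speedup is that multiplying a lower-triangular convolution (Toeplitz) matrix by a vector is a causal convolution, computable via FFT in O(n log n) instead of O(n^2). A minimal sketch of that single building block (not the paper's full k-term approximation algorithm), assuming the convolution matrix has entries T[i, j] = a[i-j] for i >= j and 0 above the diagonal:

```python
import numpy as np

def conv_matvec_fft(a, x):
    """Compute T @ x in O(n log n), where T[i, j] = a[i - j] for
    i >= j and 0 otherwise (a lower-triangular Toeplitz matrix).
    Zero-padding to length 2n turns circular convolution into the
    causal (linear) convolution the matrix product implements."""
    n = len(x)
    m = 2 * n
    y = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(x, m), m)
    return y[:n]

# Sanity check against the dense O(n^2) product.
rng = np.random.default_rng(0)
n = 8
a, x = rng.normal(size=n), rng.normal(size=n)
T = np.array([[a[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
assert np.allclose(T @ x, conv_matvec_fft(a, x))
```

A sum of k such matrices, as in the abstract's conv basis decomposition, would then cost O(kn log n) per column, consistent with the stated O(knd log n) attention cost.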
Neural at ArchEHR-QA 2025: Agentic Prompt Optimization for Evidence-Grounded Clinical Question Answering
Sai Prasanna Teja Reddy Bogireddy | Abrar Majeedi | Viswanath Gajjala | Zhuoyan Xu | Siddhant Rai | Vaishnav Potlapalli
Proceedings of the 24th Workshop on Biomedical Language Processing (Shared Tasks)