Bryan Kian Hsiang Low
2026
Respecting Temporal-Causal Consistency: Entity-Event Knowledge Graph for Retrieval-Augmented Generation
Ze Yu Zhang | Zitao Li | Yaliang Li | Bolin Ding | Bryan Kian Hsiang Low
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Retrieval-augmented generation (RAG) based on large language models often falters on narrative documents with inherent temporal structures. Standard unstructured RAG methods rely solely on embedding-similarity matching and lack any general mechanism to encode or exploit chronological information, while knowledge graph RAG (KG-RAG) frameworks collapse every mention of an entity into a single node, erasing the evolving context that drives many queries. To formalize this challenge and draw the community’s attention, we construct ChronoQA, a robust and discriminative QA benchmark that measures temporal, causal, and character consistency understanding in narrative documents (e.g., novels) under the RAG setting. We then introduce Entity-Event RAG (E²RAG), a dual-graph framework that keeps separate entity and event subgraphs linked by a bipartite mapping, thereby preserving the temporal and causal facets needed for fine-grained reasoning. Across ChronoQA, our approach outperforms state-of-the-art unstructured and KG-based RAG baselines, with notable gains on causal and character consistency queries. E²RAG therefore offers a practical path to more context-aware retrieval for tasks that require precise answers grounded in chronological information.
2025
Dipper: Diversity in Prompts for Producing Large Language Model Ensembles in Reasoning Tasks
Wenyang Hu | Gregory Kang Ruey Lau | Liu Diwen | Chen Jizhuo | See-Kiong Ng | Bryan Kian Hsiang Low
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs), particularly smaller variants, still struggle with complex reasoning tasks. While inference-time prompting can guide reasoning, existing methods often rely on sequential queries. Ensemble approaches offer a promising path to performance gains, especially given recent batch inference speed-ups. This work introduces DIPPER, a novel, training-free framework that transforms a single LLM into an effective inference-time ensemble. By feeding the model an optimized and diverse set of prompts in parallel, DIPPER elicits varied reasoning paths, leading to performance gains. We empirically demonstrate significant improvements on mathematical reasoning benchmarks, such as MATH, where a DIPPER ensemble of three Qwen2-MATH-1.5B instances (via parallel prompting of a single model) outperforms a larger Qwen2-MATH-7B model.
TETRIS: Optimal Draft Token Selection for Batch Speculative Decoding
Zhaoxuan Wu | Zijian Zhou | Arun Verma | Alok Prakash | Daniela Rus | Bryan Kian Hsiang Low
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We propose TETRIS, a novel method that optimizes the total throughput of batch speculative decoding in multi-request settings. Unlike existing methods that optimize for a single request or a group of requests as a whole, TETRIS actively selects the most promising draft tokens (for every request in a batch) to be accepted when verified in parallel, resulting in fewer rejected tokens and hence less wasted computing resources. Such effective resource utilization to achieve fast inference in large language models (LLMs) is especially important to service providers with limited inference capacity. Compared to baseline speculative decoding, TETRIS yields a consistently higher acceptance rate and more effective utilization of the limited inference capacity. We show theoretically and empirically that TETRIS outperforms baseline speculative decoding and existing methods that dynamically select draft tokens, leading to more efficient batch inference in LLMs.
WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data
Xinyang Lu | Jingtan Wang | Zitong Zhao | Zhongxiang Dai | Chuan-Sheng Foo | See-Kiong Ng | Bryan Kian Hsiang Low
Findings of the Association for Computational Linguistics: ACL 2025
The impressive performance of Large Language Models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the Intellectual Property (IP) of their training data. In particular, the synthetic texts generated by LLMs may infringe the IP of the data used to train the LLMs. To this end, it is imperative to be able to perform source attribution by identifying the data provider who contributed to the generation of a synthetic text by an LLM. In this paper, we show that this problem can be tackled by watermarking, i.e., by enabling an LLM to generate synthetic texts with embedded watermarks that contain information about their source(s). We identify the key properties of such watermarking frameworks (e.g., source attribution accuracy, robustness against adversaries), and propose a source attribution framework that satisfies these key properties through our algorithmic designs. Our framework enables an LLM to learn an accurate mapping from the generated texts to data providers, which sets the foundation for effective source attribution. Extensive empirical evaluations show that our framework achieves effective source attribution.
Uncovering Scaling Laws for Large Language Models via Inverse Problems
Arun Verma | Zhaoxuan Wu | Zijian Zhou | Xiaoqiang Lin | Zhiliang Chen | Rachael Hwee Ling Sim | Rui Qiao | Jingtan Wang | Nhung Bui | Xinyuan Niu | Wenyang Hu | Gregory Kang Ruey Lau | Zi-Yu Khoo | Zitong Zhao | Xinyi Xu | Apivich Hemachandra | See-Kiong Ng | Bryan Kian Hsiang Low
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Models (LLMs) are large-scale pretrained models that have achieved remarkable success across diverse domains. These successes have been driven by unprecedented complexity and scale in both data and computation. However, due to the high costs of training such models, brute-force trial-and-error approaches to improving LLMs are not feasible. Inspired by the success of inverse problems in uncovering fundamental scientific laws, this position paper advocates that inverse problems can also efficiently uncover scaling laws that guide the building of LLMs to achieve desirable performance with significantly better cost-effectiveness.
2024
Waterfall: Scalable Framework for Robust Text Watermarking and Provenance for LLMs
Gregory Kang Ruey Lau | Xinyuan Niu | Hieu Dao | Jiangwei Chen | Chuan-Sheng Foo | Bryan Kian Hsiang Low
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Protecting the intellectual property (IP) of text such as articles and code is increasingly important, especially as sophisticated attacks become possible, such as paraphrasing by large language models (LLMs) or even unauthorized training of LLMs on copyrighted text to infringe such IP. However, existing text watermarking methods are neither robust enough against such attacks nor scalable to millions of users for practical implementation. In this paper, we propose Waterfall, the first training-free framework for robust and scalable text watermarking applicable across multiple text types (e.g., articles, code) and languages supportable by LLMs, for general text and LLM data provenance. Waterfall comprises several key innovations, such as being the first to use LLMs as paraphrasers for watermarking, along with a novel combination of techniques that are surprisingly effective in achieving robust verifiability and scalability. We empirically demonstrate that Waterfall achieves significantly better scalability, robust verifiability, and computational efficiency compared to SOTA article-text watermarking methods, and also show how it can be directly applied to the watermarking of code.
Position Paper: Data-Centric AI in the Age of Large Language Models
Xinyi Xu | Zhaoxuan Wu | Rui Qiao | Arun Verma | Yao Shu | Jingtan Wang | Xinyuan Niu | Zhenfeng He | Jiangwei Chen | Zijian Zhou | Gregory Kang Ruey Lau | Hieu Dao | Lucas Agussurja | Rachael Hwee Ling Sim | Xiaoqiang Lin | Wenyang Hu | Zhongxiang Dai | Pang Wei Koh | Bryan Kian Hsiang Low
Findings of the Association for Computational Linguistics: EMNLP 2024
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making a key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs, and advocate that data-centric research should receive more attention from the community. We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization. In each scenario, we underscore the importance of data, highlight promising research directions, and articulate the potential impacts on the research community and, where applicable, the society as a whole. For instance, we advocate for a suite of data-centric benchmarks tailored to the scale and complexity of data for LLMs. These benchmarks can be used to develop new data curation methods and document research efforts and results, which can help promote openness and transparency in AI and LLM research.
Co-authors
- Gregory Kang Ruey Lau 4
- Wenyang Hu 3
- See-Kiong Ng 3
- Xinyuan Niu 3
- Arun Verma 3
- Jingtan Wang 3
- Zhaoxuan Wu 3
- Zijian Zhou 3
- Jiangwei Chen 2
- Zhongxiang Dai 2
- Hieu Dao 2
- Chuan-Sheng Foo 2
- Xiaoqiang Lin 2
- Rui Qiao 2
- Rachael Hwee Ling Sim 2
- Xinyi Xu 2
- Zitong Zhao 2
- Lucas Agussurja 1
- Nhung Bui 1
- Zhiliang Chen 1
- Bolin Ding 1
- Liu Diwen 1
- Zhenfeng He 1
- Apivich Hemachandra 1
- Chen Jizhuo 1
- Zi-Yu Khoo 1
- Pang Wei Koh 1
- Zitao Li 1
- Yaliang Li 1
- Xinyang Lu 1
- Alok Prakash 1
- Daniela Rus 1
- Yao Shu 1
- Ze Yu Zhang 1