ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction

Yaorui Shi, An Zhang, Enzhi Zhang, Zhiyuan Liu, Xiang Wang


Abstract
Predicting chemical reactions, a fundamental challenge in chemistry, involves forecasting the resulting products from a given reaction process. Conventional techniques, notably those employing Graph Neural Networks (GNNs), are often limited by insufficient training data and their inability to utilize textual information, undermining their applicability in real-world settings. In this work, we propose ReLM, a novel framework that leverages the chemical knowledge encoded in language models (LMs) to assist GNNs, thereby enhancing the accuracy of real-world chemical reaction predictions. To further enhance the model’s robustness and interpretability, we incorporate the confidence score strategy, enabling the LMs to self-assess the reliability of their predictions. Our experimental results demonstrate that ReLM improves the performance of state-of-the-art GNN-based methods across various chemical reaction datasets, especially in out-of-distribution settings. Codes are available at https://github.com/syr-cn/ReLM.
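
The abstract describes the pipeline at a high level: a GNN proposes candidate products, and an LM selects among them while reporting a confidence score that can be used to gauge reliability. The sketch below illustrates that flow in Python under stated assumptions; the function names, the multiple-choice prompt format, and the fallback logic are hypothetical placeholders, not the authors' actual implementation (see the linked repository for the real code).

```python
# Minimal sketch of a ReLM-style pipeline, as summarized in the abstract:
# a GNN ranks candidate products, and a language model picks among them
# while self-reporting a confidence score. Every name below
# (propose_candidates, query_language_model, the prompt wording) is a
# hypothetical placeholder, not the paper's API.

from typing import List, Tuple


def propose_candidates(reactants_smiles: str, k: int = 4) -> List[str]:
    """Placeholder for a pretrained GNN that returns top-k product candidates."""
    raise NotImplementedError("plug in a GNN-based reaction predictor here")


def build_prompt(reactants_smiles: str, candidates: List[str]) -> str:
    """Format the reaction as a multiple-choice question for the LM."""
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(candidates))
    return (
        f"Reactants: {reactants_smiles}\n"
        f"Which of the following is the most likely product?\n{options}\n"
        "Answer with the option letter and a confidence score between 0 and 1."
    )


def query_language_model(prompt: str) -> Tuple[int, float]:
    """Placeholder LM call; should return (chosen option index, confidence)."""
    raise NotImplementedError("plug in an LM API call and parse its answer")


def predict_product(reactants_smiles: str) -> Tuple[str, float]:
    """Combine GNN candidate generation with LM selection and self-assessment."""
    candidates = propose_candidates(reactants_smiles)
    prompt = build_prompt(reactants_smiles, candidates)
    choice, confidence = query_language_model(prompt)
    # A low confidence score could trigger a fallback to the GNN's top-ranked
    # candidate; this is one plausible use of the reported score.
    return candidates[choice], confidence
```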
Anthology ID:
2023.findings-emnlp.366
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5506–5520
URL:
https://aclanthology.org/2023.findings-emnlp.366
DOI:
10.18653/v1/2023.findings-emnlp.366
Cite (ACL):
Yaorui Shi, An Zhang, Enzhi Zhang, Zhiyuan Liu, and Xiang Wang. 2023. ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5506–5520, Singapore. Association for Computational Linguistics.
Cite (Informal):
ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction (Shi et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.366.pdf