Training Text-to-Molecule Models with Context-Aware Tokenization

Seojin Kim, Hyeontae Song, Jaehyun Nam, Jinwoo Shin


Abstract
Recently, text-to-molecule models have shown great potential across various chemical applications, e.g., drug discovery. These models adapt language models to molecular data by representing molecules as sequences of atoms. However, they rely on atom-level tokenization, which primarily focuses on modeling local connectivity, thereby limiting the ability of models to capture the global structural context within molecules. To tackle this issue, we propose a novel text-to-molecule model, coined Context-Aware Molecular T5 (CAMT5). Inspired by the significance of substructure-level contexts, e.g., ring systems, in understanding molecular structures, we introduce substructure-level tokenization for text-to-molecule models. Building on our tokenization scheme, we develop an importance-based training strategy that prioritizes key substructures, enabling CAMT5 to better capture molecular semantics. Extensive experiments verify the superiority of CAMT5 in various text-to-molecule generation tasks. Intriguingly, we find that CAMT5 outperforms state-of-the-art methods while using only 2% of the training tokens. In addition, we propose a simple yet effective ensemble strategy that aggregates the outputs of text-to-molecule models to further boost generation performance.
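The core idea of the abstract, representing a molecule as a sequence of chemically meaningful substructures rather than individual atoms, can be illustrated with a minimal sketch. The snippet below is illustrative only: it contrasts a standard regex-based atom-level split of a SMILES string with a substructure-level split that uses RDKit's BRICS decomposition as a stand-in fragmenter. CAMT5's actual tokenization scheme, vocabulary construction, and importance-based weighting are described in the paper and may differ.

import re
from rdkit import Chem
from rdkit.Chem import BRICS

SMILES = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, used only as an example input

# Atom-level tokenization: split the SMILES string into atom/bond symbols,
# which captures local connectivity one symbol at a time.
ATOM_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|Se|se|@@|@|b|c|n|o|s|p|B|C|N|O|P|S|F|I|"
    r"\(|\)|\.|=|#|-|\+|\\|/|:|~|\*|%\d{2}|\d)"
)
atom_tokens = ATOM_PATTERN.findall(SMILES)
print("atom-level tokens:", atom_tokens)

# Substructure-level tokenization (illustrative): decompose the molecule into
# larger chemically meaningful fragments, e.g., ring systems, and treat each
# fragment SMILES as a single token.
mol = Chem.MolFromSmiles(SMILES)
substructure_tokens = sorted(BRICS.BRICSDecompose(mol))
print("substructure-level tokens:", substructure_tokens)

With a substructure-level vocabulary, sequences become much shorter and each token carries global structural context, which is consistent with the paper's observation that competitive performance is reachable with far fewer training tokens.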
Anthology ID:
2025.findings-emnlp.1221
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
22442–22460
URL:
https://aclanthology.org/2025.findings-emnlp.1221/
Cite (ACL):
Seojin Kim, Hyeontae Song, Jaehyun Nam, and Jinwoo Shin. 2025. Training Text-to-Molecule Models with Context-Aware Tokenization. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 22442–22460, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Training Text-to-Molecule Models with Context-Aware Tokenization (Kim et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1221.pdf
Checklist:
 2025.findings-emnlp.1221.checklist.pdf