Retrieved Sequence Augmentation for Protein Representation Learning
Chang Ma, Haiteng Zhao, Lin Zheng, Jiayi Xin, Qintong Li, Lijun Wu, Zhihong Deng, Yang Young Lu, Qi Liu, Sheng Wang, Lingpeng Kong
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Protein Language Models traditionally depend on Multiple Sequence Alignments (MSA) to incorporate evolutionary knowledge. However, MSA-based approaches suffer from substantial computational overhead and generally underperform in generalizing to de novo proteins. This study reevaluates the role of MSA, proposing it as a retrieval augmentation method and questioning the necessity of sequence alignment. We show that a simple alternative, Retrieved Sequence Augmentation (RSA), can enhance protein representation learning without the need for alignment and cumbersome preprocessing. RSA surpasses MSA Transformer by an average of 5% in both structural and property prediction tasks while being 373 times faster. Additionally, RSA demonstrates enhanced transferability for predicting de novo proteins. This methodology addresses a critical need for efficiency in protein prediction and can be rapidly employed to identify homologous sequences, improve representation learning, and enhance the capacity of Large Language Models to interpret protein structures.
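The core idea of RSA, as described above, is to retrieve homologous sequences for a query and use them to augment the query's representation directly, with no multiple sequence alignment step. A minimal sketch of that pipeline follows; the `embed` function and the k-mer Jaccard retriever are hypothetical stand-ins for the dense protein-language-model retriever and encoder used in the paper, chosen only to keep the example self-contained.

```python
# Sketch of Retrieved Sequence Augmentation (RSA): retrieve similar
# sequences, encode query and retrievals independently (no alignment),
# then pool. embed() and the k-mer retriever are illustrative
# placeholders, not the paper's actual models.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def kmers(seq, k=3):
    """Set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two k-mer sets."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def retrieve(query, database, top_k=2):
    """Return the top_k database sequences most similar to the query."""
    q = kmers(query)
    ranked = sorted(database, key=lambda s: jaccard(q, kmers(s)), reverse=True)
    return ranked[:top_k]

def embed(seq):
    """Hypothetical embedding: amino-acid frequency vector
    (placeholder for a protein language model's representation)."""
    return [seq.count(aa) / len(seq) for aa in AMINO_ACIDS]

def rsa_representation(query, database, top_k=2):
    """Encode the query and each retrieved homolog separately,
    then mean-pool into one augmented representation."""
    reps = [embed(query)] + [embed(s) for s in retrieve(query, database, top_k)]
    dim = len(reps[0])
    return [sum(r[i] for r in reps) / len(reps) for i in range(dim)]
```

Because each sequence is encoded on its own, the augmentation step avoids the quadratic alignment preprocessing that MSA-based pipelines require; the pooled vector can then feed any downstream structure or property predictor.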