ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages

Mohammad Akbari, Saeed Ranjbar Alvar, Behnam Kamranian, Amin Banitalebi-Dehkordi, Yong Zhang


Abstract
Building multi-modal language models has been a trend in recent years, where additional modalities such as image, video, speech, etc. are learned jointly with natural language (i.e., textual information). Despite the success of multi-modal language models across these modalities, no existing solution jointly models neural network architectures and natural languages. Treating neural architecture information as a new modality enables fast architecture-2-text and text-2-architecture retrieval/generation services on the cloud with a single inference. Such a solution is valuable for helping beginner and intermediate ML users arrive at better neural architectures or AutoML approaches with a simple text query. In this paper, we propose ArchBERT, a bi-modal model for joint learning and understanding of neural architectures and natural languages, which opens up new avenues for research in this area. We also introduce a pre-training strategy named Masked Architecture Modeling (MAM) for more generalized joint learning. Moreover, we introduce and publicly release two new bi-modal datasets for training and validating our methods. ArchBERT's performance is verified through a set of numerical experiments on different downstream tasks such as architecture-oriented reasoning, question answering, and captioning (summarization). Datasets, code, and demos are available as supplementary materials.
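(Illustrative sketch, not from the paper: the abstract describes aligning neural architectures and text in a shared bi-modal space; the toy code below shows one possible way such an alignment could look, using a bag-of-operations architecture encoder, a bag-of-tokens text encoder, and a contrastive objective. All names, dimensions, and the loss choice are assumptions for illustration only; ArchBERT's actual graph encoding and MAM pre-training are defined in the paper.)

# Toy bi-modal alignment of architectures and text (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_OP_TYPES = 32   # hypothetical vocabulary of layer/operation types
VOCAB_SIZE = 1000   # hypothetical text token vocabulary
DIM = 64            # shared embedding dimension

class ToyArchTextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.op_emb = nn.Embedding(NUM_OP_TYPES, DIM)   # architecture side
        self.tok_emb = nn.Embedding(VOCAB_SIZE, DIM)    # text side
        self.arch_proj = nn.Linear(DIM, DIM)
        self.text_proj = nn.Linear(DIM, DIM)

    def encode_arch(self, op_ids):
        # op_ids: (batch, num_nodes) operation-type ids of an architecture graph
        return F.normalize(self.arch_proj(self.op_emb(op_ids).mean(dim=1)), dim=-1)

    def encode_text(self, tok_ids):
        # tok_ids: (batch, num_tokens) token ids of a text description
        return F.normalize(self.text_proj(self.tok_emb(tok_ids).mean(dim=1)), dim=-1)

def contrastive_loss(arch_vecs, text_vecs, temperature=0.07):
    # Symmetric InfoNCE-style loss: matching (architecture, text) pairs
    # lie on the diagonal of the similarity matrix.
    logits = arch_vecs @ text_vecs.t() / temperature
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    model = ToyArchTextEncoder()
    ops = torch.randint(0, NUM_OP_TYPES, (4, 10))   # 4 toy architectures
    txt = torch.randint(0, VOCAB_SIZE, (4, 12))     # 4 toy descriptions
    loss = contrastive_loss(model.encode_arch(ops), model.encode_text(txt))
    loss.backward()
    print(float(loss))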
Anthology ID:
2023.conll-1.7
Volume:
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Jing Jiang, David Reitter, Shumin Deng
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
87–107
URL:
https://aclanthology.org/2023.conll-1.7
DOI:
10.18653/v1/2023.conll-1.7
Cite (ACL):
Mohammad Akbari, Saeed Ranjbar Alvar, Behnam Kamranian, Amin Banitalebi-Dehkordi, and Yong Zhang. 2023. ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 87–107, Singapore. Association for Computational Linguistics.
Cite (Informal):
ArchBERT: Bi-Modal Understanding of Neural Architectures and Natural Languages (Akbari et al., CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-1.7.pdf
Software:
2023.conll-1.7.Software.zip