SubmissionNumber#=%=#24
FinalPaperTitle#=%=#AAST-NLP at Multimodal Hate Speech Event Detection 2024: A Multimodal Approach for Classification of Text-Embedded Images Based on CLIP and BERT-Based Models
ShortPaperTitle#=%=#
NumberOfPages#=%=#
CopyrightSigned#=%=#
JobTitle#==#
Organization#==#
Abstract#==#With the rapid rise of social media platforms, communities can share their passions and interests with the world far more conveniently. This, in turn, has also made it easier for individuals to spread hateful messages through memes. Classifying such material requires considering the image and its embedded text in tandem; examining either modality on its own does not provide the full context. In this paper, we describe our approach to hateful meme classification for the Multimodal Hate Speech Shared Task at CASE 2024. We used the same approach for both subtasks: a classification model built on text and image features obtained with Contrastive Language-Image Pre-training (CLIP), together with BERT-based models, whose predictions are then combined in an ensemble. This approach ranked second in both subtasks.
Author{1}{Firstname}#=%=#Ahmed
Author{1}{Lastname}#=%=#El-Sayed
Author{1}{Username}#=%=#ahmedelsayed
Author{1}{Email}#=%=#a1752000@gmail.com
Author{1}{Affiliation}#=%=#Arab Academy For Science and Technology
Author{2}{Firstname}#=%=#Omar
Author{2}{Lastname}#=%=#Nasr
Author{2}{Email}#=%=#omarnasr5206@gmail.com
Author{2}{Affiliation}#=%=#Arab Academy For Science and Technology

========== èéáğö