%0 Conference Proceedings
%T IIITG-ADBU at SemEval-2020 Task 8: A Multimodal Approach to Detect Offensive, Sarcastic and Humorous Memes
%A Baruah, Arup
%A Das, Kaushik
%A Barbhuiya, Ferdous
%A Dey, Kuntal
%Y Herbelot, Aurelie
%Y Zhu, Xiaodan
%Y Palmer, Alexis
%Y Schneider, Nathan
%Y May, Jonathan
%Y Shutova, Ekaterina
%S Proceedings of the Fourteenth Workshop on Semantic Evaluation
%D 2020
%8 December
%I International Committee for Computational Linguistics
%C Barcelona (online)
%F baruah-etal-2020-iiitg
%X In this paper, we present a multimodal architecture to determine the emotion expressed in a meme. This architecture utilizes both the textual and the visual information present in a meme. To extract image features, we experimented with pre-trained VGG-16 and Inception-V3 classifiers, and to extract text features, we used LSTM and BERT classifiers. We experimented with both FastText and GloVe embeddings for the LSTM classifier. The best F1 scores our classifier obtained on the official analysis results are 0.3309, 0.4752, and 0.2897 for Tasks A, B, and C, respectively, in the Memotion Analysis task (Task 8) organized as part of the International Workshop on Semantic Evaluation 2020 (SemEval 2020). In our study, we found that combining the textual and visual information expressed in a meme improves the performance of the classifier, as opposed to using standalone classifiers that use only text or only visual data.
%R 10.18653/v1/2020.semeval-1.112
%U https://aclanthology.org/2020.semeval-1.112
%U https://doi.org/10.18653/v1/2020.semeval-1.112
%P 885-890
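
The abstract describes fusing pre-trained image features with text features encoded by a recurrent model. The following is a minimal, illustrative Keras sketch of that style of multimodal fusion (frozen VGG-16 image branch, LSTM text branch, concatenation, softmax head); the vocabulary size, sequence length, layer widths, and classifier head are assumptions for illustration, not the configuration reported in the paper.

```python
# Illustrative sketch only; hyperparameters are assumed, not taken from the paper.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 50, 3  # assumed values

# Image branch: frozen pre-trained VGG-16 backbone pooled to a 512-d vector.
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg",
            input_shape=(224, 224, 3))
vgg.trainable = False
image_input = vgg.input
image_features = vgg.output

# Text branch: embedding layer (could be initialised from GloVe/FastText) + LSTM.
text_input = layers.Input(shape=(MAX_LEN,), dtype="int32")
embedded = layers.Embedding(VOCAB_SIZE, 300)(text_input)
text_features = layers.LSTM(128)(embedded)

# Fusion: concatenate the two modalities and classify.
fused = layers.Concatenate()([image_features, text_features])
fused = layers.Dense(256, activation="relu")(fused)
output = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = Model(inputs=[image_input, text_input], outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```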