SubmissionNumber#=%=#3 FinalPaperTitle#=%=#The Future of Web Data Mining: Insights from Multimodal and Code-based Extraction Methods ShortPaperTitle#=%=# NumberOfPages#=%=# CopyrightSigned#=%=# JobTitle#==# Organization#==# Abstract#==#The extraction of structured data from websites is critical for numerous Artificial Intelligence applications, but modern web design increasingly stores information visually in images rather than in text. This shift calls into question the optimal extraction technique, as language-only models fail without textual cues while new multimodal models like GPT-4 promise image understanding abilities. We conduct the first rigorous comparison between text-based and vision-based models for extracting event metadata harvested from comic convention websites. Surprisingly, our comparison between GPT-4 Vision and GPT-4 Text uncovers a significant accuracy advantage for vision-based methods in an apples-to-apples setting, indicating that vision models may be outpacing language-only techniques for information extraction from websites. We release our dataset and provide a qualitative analysis to guide further research in multimodal models for web information extraction. Author{1}{Firstname}#=%=#Evan Author{1}{Lastname}#=%=#Fellman Author{1}{Username}#=%=#evanfellman Author{1}{Email}#=%=#efellman@cs.cmu.edu Author{1}{Affiliation}#=%=#Carnegie Mellon University Author{2}{Firstname}#=%=#Jacob Author{2}{Lastname}#=%=#Tyo Author{2}{Email}#=%=#jacob.tyo@gmail.com Author{2}{Affiliation}#=%=#Carnegie Mellon University Author{3}{Firstname}#=%=#Zachary Author{3}{Lastname}#=%=#Lipton Author{3}{Email}#=%=#zlipton@cmu.edu Author{3}{Affiliation}#=%=#Carnegie Mellon University ==========