Juho Kim
Papers on this page may belong to the following people: Juho Kim (MIT, KAIST)
2024
Observing the Southern US Culture of Honor Using Large-Scale Social Media Analysis
Juho Kim | Michael Guerzhoy
Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)
A culture of honor refers to a social system in which individuals’ status, reputation, and esteem play a central role in governing interpersonal relations. Past work has associated this concept with the United States (US) South and linked it to traits such as higher sensitivity to insult, a higher value placed on reputation, and a tendency to react violently to insults. In this paper, we hypothesize and confirm that internet users from the US South, where a culture of honor is more prevalent, are more likely to display a trait predicted by their belonging to a culture of honor. Specifically, we test the hypothesis that US Southerners are more likely to retaliate against personal attacks by personally attacking back. We leverage OpenAI’s GPT-3.5 API both to geolocate internet users and to automatically detect whether users are insulting each other. We validate the use of GPT-3.5 by measuring its performance on manually labeled subsets of the data. Our work demonstrates the potential of formulating a hypothesis based on a conceptual framework, operationalizing it in a way that is amenable to large-scale LLM-aided analysis, manually validating the use of the LLM, and drawing a conclusion.
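The validation step described above, comparing LLM-produced labels against a manually labeled subset, can be sketched as follows. This is an illustrative sketch, not the paper's actual code; the function name, label values, and example data are hypothetical assumptions.

```python
# Hypothetical sketch of validating LLM classifications against a
# manually labeled subset, as described in the abstract.
# The label lists below are illustrative, not the paper's data.

def validation_metrics(manual, predicted, positive="attack"):
    """Accuracy, precision, and recall of predicted vs. manual labels."""
    tp = sum(m == positive and p == positive for m, p in zip(manual, predicted))
    fp = sum(m != positive and p == positive for m, p in zip(manual, predicted))
    fn = sum(m == positive and p != positive for m, p in zip(manual, predicted))
    correct = sum(m == p for m, p in zip(manual, predicted))
    return {
        "accuracy": correct / len(manual),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }

# Manual annotations vs. (hypothetical) GPT-3.5 outputs on the same comments.
manual = ["attack", "none", "attack", "none", "attack"]
llm    = ["attack", "none", "none",  "none", "attack"]
print(validation_metrics(manual, llm))  # {'accuracy': 0.8, 'precision': 1.0, 'recall': 0.666...}
```

In practice the `llm` labels would come from prompting the GPT-3.5 API on each comment; only the comparison against human labels is shown here.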
2018
Teaching Syntax by Adversarial Distraction
Juho Kim | Christopher Malon | Asim Kadav
Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)
Existing entailment datasets mainly pose problems which can be answered without attention to grammar or word order. Learning syntax requires comparing examples where different grammar and word order change the desired classification. We introduce several datasets based on synthetic transformations of natural entailment examples in SNLI or FEVER, to teach aspects of grammar and word order. We show that without retraining, popular entailment models are unaware that these syntactic differences change meaning. With retraining, some but not all popular entailment models can learn to compare the syntax properly.