Samira Shaikh


2024

Persona-aware Multi-party Conversation Response Generation
Khyati Mahajan | Samira Shaikh
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Modeling interlocutor information is essential for multi-party conversation modeling, since multiple participants must be accounted for. We investigate the role of including the persona attributes of both the speaker and the addressee relevant to each utterance, collected via three distinct mock social media experiments. The participants were recruited via MTurk and were unaware of the persona attributes of the other users they interacted with on the platform. Our main contributions are 1) a multi-party conversation dataset with rich associated metadata (including persona), and 2) PersonaHeterMPC, a persona-aware heterogeneous graph transformer response generation model. We find that PersonaHeterMPC provides a good baseline for persona-aware generation in multi-party conversation modeling, generating responses that are relevant and consistent with the interlocutor personas relevant to the conversation.

2022

Proceedings of the 15th International Conference on Natural Language Generation
Samira Shaikh | Thiago Ferreira | Amanda Stent
Proceedings of the 15th International Conference on Natural Language Generation

Towards Evaluation of Multi-party Dialogue Systems
Khyati Mahajan | Sashank Santhanam | Samira Shaikh
Proceedings of the 15th International Conference on Natural Language Generation

Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations
Samira Shaikh | Thiago Ferreira | Amanda Stent
Proceedings of the 15th International Conference on Natural Language Generation: System Demonstrations

Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
Samira Shaikh | Thiago Ferreira | Amanda Stent
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges

Improving Dialogue Act Recognition with Augmented Data
Khyati Mahajan | Soham Parikh | Quaizar Vohra | Mitul Tiwari | Samira Shaikh
Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)

We present our work on augmenting dialogue act recognition capabilities using synthetically generated data. Our work is motivated by the limitations of current dialogue act datasets, the need to adapt to new domains, and the ambiguity of utterances written by humans. We list our observations and findings on how synthetically generated data can contribute meaningfully to more robust dialogue act recognition models that extend to new domains. Our major finding is that linguistically varied synthetic data can be very useful towards this goal, increasing performance from (0.39, 0.16) to (0.85, 0.88) for the AFFIRM and NEGATE dialogue acts respectively.

BeSt: The Belief and Sentiment Corpus
Jennifer Tracey | Owen Rambow | Claire Cardie | Adam Dalton | Hoa Trang Dang | Mona Diab | Bonnie Dorr | Louise Guthrie | Magdalena Markowska | Smaranda Muresan | Vinodkumar Prabhakaran | Samira Shaikh | Tomek Strzalkowski
Proceedings of the Thirteenth Language Resources and Evaluation Conference

We present the BeSt corpus, which records cognitive state: who believes what (i.e., factuality), and who has what sentiment towards what. This corpus is inspired by similar source-and-target corpora, specifically MPQA and FactBank. The corpus comprises two genres, newswire and discussion forums, in three languages, Chinese (Mandarin), English, and Spanish. The corpus is distributed through the LDC.

2021

A Case Study of Analysis of Construals in Language on Social Media Surrounding a Crisis Event
Lolo Aboufoul | Khyati Mahajan | Tiffany Gallicano | Sara Levens | Samira Shaikh
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop

The events that took place at the Unite the Right rally held in Charlottesville, Virginia on August 11-12, 2017 caused an intense reaction on social media from users across the political spectrum. We present a novel application of psycholinguistics, specifically construal level theory, to analyze the language on social media around this event of social import through topic models. We find that including psycholinguistic measures of concreteness as covariates in topic models can lead to informed analysis of the language surrounding an event of political import.

TeamUNCC@LT-EDI-EACL2021: Hope Speech Detection using Transfer Learning with Transformers
Khyati Mahajan | Erfan Al-Hossami | Samira Shaikh
Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion

In this paper, we describe our approach to utilizing pre-trained models for the task of hope speech detection. We participated in Task 2: Hope Speech Detection for Equality, Diversity and Inclusion at LT-EDI-2021 at EACL 2021. The goal of this task is to predict the presence of hope speech, and to identify samples not written in the target language of the dataset. We describe our approach to fine-tuning RoBERTa for hope speech detection in English, and to fine-tuning XLM-RoBERTa for hope speech detection in Tamil and Malayalam, two low-resource languages of India. We demonstrate the performance of our approach on classifying text into hope-speech, non-hope, and not-language. Our approach ranked 1st in English (F1 = 0.93), 1st in Tamil (F1 = 0.61), and 3rd in Malayalam (F1 = 0.83).

Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
Antoine Bosselut | Esin Durmus | Varun Prashant Gangal | Sebastian Gehrmann | Yacine Jernite | Laura Perez-Beltrachini | Samira Shaikh | Wei Xu
Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann | Tosin Adewumi | Karmanya Aggarwal | Pawan Sasanka Ammanamanchi | Anuoluwapo Aremu | Antoine Bosselut | Khyathi Raghavi Chandu | Miruna-Adriana Clinciu | Dipanjan Das | Kaustubh Dhole | Wanyu Du | Esin Durmus | Ondřej Dušek | Chris Chinenye Emezue | Varun Gangal | Cristina Garbacea | Tatsunori Hashimoto | Yufang Hou | Yacine Jernite | Harsh Jhamtani | Yangfeng Ji | Shailza Jolly | Mihir Kale | Dhruv Kumar | Faisal Ladhak | Aman Madaan | Mounica Maddela | Khyati Mahajan | Saad Mahamood | Bodhisattwa Prasad Majumder | Pedro Henrique Martins | Angelina McMillan-Major | Simon Mille | Emiel van Miltenburg | Moin Nadeem | Shashi Narayan | Vitaly Nikolaev | Andre Niyongabo Rubungo | Salomey Osei | Ankur Parikh | Laura Perez-Beltrachini | Niranjan Ramesh Rao | Vikas Raunak | Juan Diego Rodriguez | Sashank Santhanam | João Sedoc | Thibault Sellam | Samira Shaikh | Anastasia Shimorina | Marco Antonio Sobrevilla Cabezudo | Hendrik Strobelt | Nishant Subramani | Wei Xu | Diyi Yang | Akhila Yerukola | Jiawei Zhou
Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)

We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics. Measuring progress in NLG relies on a constantly evolving ecosystem of automated metrics, datasets, and human evaluation standards. Due to this moving target, new models often still evaluate on divergent anglo-centric corpora with well-established, but flawed, metrics. This disconnect makes it challenging to identify the limitations of current models and opportunities for progress. Addressing this limitation, GEM provides an environment in which models can easily be applied to a wide set of tasks and in which evaluation strategies can be tested. Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models. This paper serves as the description of the data for the 2021 shared task at the associated GEM Workshop.

On the Need for Thoughtful Data Collection for Multi-Party Dialogue: A Survey of Available Corpora and Collection Methods
Khyati Mahajan | Samira Shaikh
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue

We present a comprehensive survey of available corpora for multi-party dialogue. We survey over 300 publications related to multi-party dialogue and catalogue all available corpora in a novel taxonomy. We analyze methods of data collection for multi-party dialogue corpora and identify several lacunae in existing approaches used to collect such dialogue. We present this survey, the first to focus exclusively on multi-party dialogue corpora, to motivate research in this area. Through our discussion of existing data collection methods, we identify desiderata and guiding principles for multi-party data collection that contribute further towards advancing this area of dialogue research.

Shellcode_IA32: A Dataset for Automatic Shellcode Generation
Pietro Liguori | Erfan Al-Hossami | Domenico Cotroneo | Roberto Natella | Bojan Cukic | Samira Shaikh
Proceedings of the 1st Workshop on Natural Language Processing for Programming (NLP4Prog 2021)

We take the first step to address the task of automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software vulnerability, starting from natural language comments. We assemble and release a novel dataset (Shellcode_IA32), consisting of challenging but common assembly instructions with their natural language descriptions. We experiment with standard methods in neural machine translation (NMT) to establish baseline performance levels on this task.

2020

Proceedings of the Fourth Widening Natural Language Processing Workshop
Rossana Cunha | Samira Shaikh | Erika Varis | Ryan Georgi | Alicia Tsai | Antonios Anastasopoulos | Khyathi Raghavi Chandu
Proceedings of the Fourth Widening Natural Language Processing Workshop

Understanding the Impact of Experiment Design for Evaluating Dialogue System Output
Sashank Santhanam | Samira Shaikh
Proceedings of the Fourth Widening Natural Language Processing Workshop

Evaluation of output from natural language generation (NLG) systems is typically conducted via crowdsourced human judgments. To understand how experiment design might affect the quality and consistency of such human judgments, we designed a between-subjects study with four experimental conditions. Through our systematic study with 40 crowdsourced workers in each task, we find that using continuous scales achieves more consistent ratings than Likert scale or ranking-based experiment design. Additionally, we find that factors such as no prior experience of participating in similar studies of rating dialogue system output positively impact consistency and agreement amongst raters.

Studying The Effect of Emotional and Moral Language on Information Contagion during the Charlottesville Event
Khyati Mahajan | Samira Shaikh
Proceedings of the Fourth Widening Natural Language Processing Workshop

We highlight the contribution of emotional and moral language towards information contagion online. We find that retweet count on Twitter is significantly predicted by the use of negative emotions with negative moral language. We find that a tweet is less likely to be retweeted (hence less engagement and less potential for contagion) when it has emotional language expressed as anger along with a specific type of moral language, known as authority-vice. Conversely, when sadness is expressed with authority-vice, the tweet is more likely to be retweeted. Our findings indicate how emotional and moral language can interact in predicting information contagion.

Learning to Plan and Realize Separately for Open-Ended Dialogue Systems
Sashank Santhanam | Zhuo Cheng | Brodie Mather | Bonnie Dorr | Archna Bhatia | Bryanna Hebenstreit | Alan Zemel | Adam Dalton | Tomek Strzalkowski | Samira Shaikh
Findings of the Association for Computational Linguistics: EMNLP 2020

Achieving true human-like ability to conduct a conversation remains an elusive goal for open-ended dialogue systems. We posit this is because extant approaches towards natural language generation (NLG) are typically construed as end-to-end architectures that do not adequately model human generation processes. To investigate, we decouple generation into two separate phases: planning and realization. In the planning phase, we train two planners to generate plans for response utterances. The realization phase uses response plans to produce an appropriate response. Through rigorous evaluations, both automated and human, we demonstrate that decoupling the process into planning and realization performs better than an end-to-end approach.

Proceedings for the First International Workshop on Social Threats in Online Conversations: Understanding and Management
Archna Bhatia | Samira Shaikh
Proceedings for the First International Workshop on Social Threats in Online Conversations: Understanding and Management

Active Defense Against Social Engineering: The Case for Human Language Technology
Adam Dalton | Ehsan Aghaei | Ehab Al-Shaer | Archna Bhatia | Esteban Castillo | Zhuo Cheng | Sreekar Dhaduvai | Qi Duan | Bryanna Hebenstreit | Md Mazharul Islam | Younes Karimi | Amir Masoumzadeh | Brodie Mather | Sashank Santhanam | Samira Shaikh | Alan Zemel | Tomek Strzalkowski | Bonnie J. Dorr
Proceedings for the First International Workshop on Social Threats in Online Conversations: Understanding and Management

We describe a system that supports natural language processing (NLP) components for active defenses against social engineering attacks. We deploy a pipeline of human language technology, including Ask and Framing Detection, Named Entity Recognition, Dialogue Engineering, and Stylometry. The system processes modern message formats through a plug-in architecture to accommodate innovative approaches for message analysis, knowledge representation and dialogue generation. The novelty of the system is that it uses NLP for cyber defense and engages the attacker using bots to elicit evidence to attribute to the attacker and to waste the attacker’s time and resources.

Adaptation of a Lexical Organization for Social Engineering Detection and Response Generation
Archna Bhatia | Adam Dalton | Brodie Mather | Sashank Santhanam | Samira Shaikh | Alan Zemel | Tomek Strzalkowski | Bonnie J. Dorr
Proceedings for the First International Workshop on Social Threats in Online Conversations: Understanding and Management

We present a paradigm for extensible lexicon development based on Lexical Conceptual Structure to support social engineering detection and response generation. We leverage the central notions of ask (elicitation of behaviors such as providing access to money) and framing (risk/reward implied by the ask). We demonstrate improvements in ask/framing detection through refinements to our lexical organization and show that response generation qualitatively improves as ask/framing detection performance improves. The paradigm presents a systematic and efficient approach to resource adaptation for improved task-specific performance.

2019

Emotional Neural Language Generation Grounded in Situational Contexts
Sashank Santhanam | Samira Shaikh
Proceedings of the 4th Workshop on Computational Creativity in Language Generation

JUSTDeep at NLP4IF 2019 Task 1: Propaganda Detection using Ensemble Deep Learning Models
Hani Al-Omari | Malak Abdullah | Ola AlTiti | Samira Shaikh
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda

The internet and the heavy use of social media have enabled modern-day journalism to publish, share, and spread news whose truth or falsity is difficult to determine. A definition of "fake news" is not yet well established; however, such content can be categorized under several labels: false, biased, or framed to mislead readers, the last of which characterizes propaganda. Digital content production technologies employing logical fallacies and emotional language can be used as propaganda techniques to gain more readers or mislead the audience. Recently, several researchers have proposed deep learning (DL) models to address this issue. This paper provides an ensemble deep learning model using BiLSTM, XGBoost, and BERT to detect propaganda. The proposed model has been applied to the dataset provided by the NLP4IF 2019 challenge, Task 1 Sentence Level Classification (SLC), and shows a significant improvement over the baseline model.

Proceedings of the 2019 Workshop on Widening NLP
Amittai Axelrod | Diyi Yang | Rossana Cunha | Samira Shaikh | Zeerak Waseem
Proceedings of the 2019 Workshop on Widening NLP

Emoji Usage Across Platforms: A Case Study for the Charlottesville Event
Khyati Mahajan | Samira Shaikh
Proceedings of the 2019 Workshop on Widening NLP

We study emoji usage patterns across two social media platforms: Gab, considered a fringe community, and Twitter. We find that Gab tends to use comparatively more emotionally charged emoji, but also seems more apathetic towards the violence during the event, while Twitter takes a more empathetic approach to the event.

Towards Best Experiment Design for Evaluating Dialogue System Output
Sashank Santhanam | Samira Shaikh
Proceedings of the 12th International Conference on Natural Language Generation

To overcome the limitations of automated metrics (e.g. BLEU, METEOR) for evaluating dialogue systems, researchers typically use human judgments to provide convergent evidence. While it has been demonstrated that human judgments can suffer from the inconsistency of ratings, extant research has also found that the design of the evaluation task affects the consistency and quality of human judgments. We conduct a between-subjects study to understand the impact of four experiment conditions on human ratings of dialogue system output. In addition to discrete and continuous scale ratings, we also experiment with a novel application of Best-Worst scaling to dialogue evaluation. Through our systematic study with 40 crowdsourced workers in each task, we find that using continuous scales achieves more consistent ratings than Likert scale or ranking-based experiment design. Additionally, we find that factors such as time taken to complete the task and no prior experience of participating in similar studies of rating dialogue system output positively impact consistency and agreement amongst raters.
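Best-Worst scaling of the kind applied here is conventionally scored by counting how often each item is picked as best versus worst across trials. A minimal counting sketch under that convention (not the authors' implementation; `bws_scores` and the sample trials are illustrative):

```python
from collections import defaultdict

def bws_scores(trials):
    """Score items from Best-Worst Scaling trials.

    trials: iterable of (items_shown, best, worst) tuples, one per
    annotation trial. Returns score(item) = (#best - #worst) / #shown,
    a value in [-1, 1].
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, b, w in trials:
        for item in items:
            shown[item] += 1   # every displayed item counts as an appearance
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Two trials over four hypothetical system responses A-D.
trials = [
    (("A", "B", "C", "D"), "A", "D"),
    (("A", "B", "C", "D"), "A", "C"),
]
print(bws_scores(trials))  # A scores 1.0; B 0.0; C and D -0.5
```

Because each item's score is normalized by its number of appearances, scores remain comparable even when items are shown in different numbers of trials.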

2018

TeamUNCC at SemEval-2018 Task 1: Emotion Detection in English and Arabic Tweets using Deep Learning
Malak Abdullah | Samira Shaikh
Proceedings of the 12th International Workshop on Semantic Evaluation

Task 1 in the International Workshop SemEval 2018, Affect in Tweets, introduces five subtasks (El-reg, El-oc, V-reg, V-oc, and E-c) to detect the intensity of emotions in English, Arabic, and Spanish tweets. This paper describes TeamUNCC’s system to detect emotions in English and Arabic tweets. Our approach is novel in that we present the same architecture for all the five subtasks in both English and Arabic. The main input to the system is a combination of word2vec and doc2vec embeddings and a set of psycholinguistic features (e.g. from AffectTweets Weka-package). We apply a fully connected neural network architecture and obtain performance results that show substantial improvements in Spearman correlation scores over the baseline models provided by Task 1 organizers, (ranging from 0.03 to 0.23). TeamUNCC’s system ranks third in subtask El-oc and fourth in other subtasks for Arabic tweets.

2016

ANEW+: Automatic Expansion and Validation of Affective Norms of Words Lexicons in Multiple Languages
Samira Shaikh | Kit Cho | Tomek Strzalkowski | Laurie Feldman | John Lien | Ting Liu | George Aaron Broadwell
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In this article we describe our method of automatically expanding an existing lexicon of words with affective valence scores. The automatic expansion process was done in English. In addition, we describe our procedure for automatically creating lexicons in languages where such resources may not previously exist. The foreign languages we discuss in this paper are Spanish, Russian and Farsi. We also describe the procedures to systematically validate our newly created resources. The main contributions of this work are: 1) A general method for expansion and creation of lexicons with scores of words on psychological constructs such as valence, arousal or dominance; and 2) a procedure for ensuring validity of the newly constructed resources.

The Validation of MRCPD Cross-language Expansions on Imageability Ratings
Ting Liu | Kit Cho | Tomek Strzalkowski | Samira Shaikh | Mehrdad Mirzaei
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In this article, we present a method to validate a multi-lingual (English, Spanish, Russian, and Farsi) corpus of imageability ratings automatically expanded from MRCPD (Liu et al., 2014). For our English MRCPD+ validation, we employed the concreteness-ratings corpus of Brysbaert et al. (2014), because human-assessed imageability ratings are lacking and concreteness ratings correlate highly with imageability ratings (e.g., r = .83). For the same reason, we built a small corpus with human imageability assessments to validate the corpora in the other languages. The results show that the automatically expanded imageability ratings are highly correlated with human assessment in all four languages, which demonstrates that our automatic expansion method is valid and robust. We believe these new resources can be of significant interest to the research community, particularly in natural language processing and computational sociolinguistics.

2015

A New Dataset and Evaluation for Belief/Factuality
Vinodkumar Prabhakaran | Tomas By | Julia Hirschberg | Owen Rambow | Samira Shaikh | Tomek Strzalkowski | Jennifer Tracey | Michael Arrigo | Rupayan Basu | Micah Clark | Adam Dalton | Mona Diab | Louise Guthrie | Anna Prokofieva | Stephanie Strassel | Gregory Werner | Yorick Wilks | Janyce Wiebe
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

Understanding Cultural Conflicts using Metaphors and Sociolinguistic Measures of Influence
Samira Shaikh | Tomek Strzalkowski | Sarah Taylor | John Lien | Ting Liu | George Aaron Broadwell | Laurie Feldman | Boris Yamrom | Kit Cho | Yuliya Peshkova
Proceedings of the Third Workshop on Metaphor in NLP

2014

Automatic Expansion of the MRC Psycholinguistic Database Imageability Ratings
Ting Liu | Kit Cho | G. Aaron Broadwell | Samira Shaikh | Tomek Strzalkowski | John Lien | Sarah Taylor | Laurie Feldman | Boris Yamrom | Nick Webb | Umit Boz | Ignacio Cases | Ching-sheng Lin
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Recent studies in metaphor extraction across several languages (Broadwell et al., 2013; Strzalkowski et al., 2013) have shown that word imageability ratings are highly correlated with the presence of metaphors in text. Information about the imageability of words can be obtained from the MRC Psycholinguistic Database (MRCPD) for English words and the Léxico Informatizado del Español Programa (LEXESP) for Spanish words, which are collections of human ratings obtained in a series of controlled surveys. Unfortunately, word imageability ratings were collected for only a limited number of words: 9,240 in English and 6,233 in Spanish; they are unavailable altogether in the other two languages studied, Russian and Farsi. The present study describes an automated method for expanding the MRCPD by conferring imageability ratings over the synonyms and hyponyms of existing MRCPD words, as identified in WordNet. The result is an expanded MRCPD+ database with imageability scores for more than 100,000 words. The appropriateness of this expansion process is assessed by examining the structural coherence of the expanded set and by validating the expanded lexicon against human judgment. Finally, the performance of the metaphor extraction system is shown to improve significantly with the expanded database. This paper describes the process for the English MRCPD+ and the resulting lexical resource; the process is analogous for the other languages.
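The core conferral step, assigning an unrated word a score aggregated from its rated synonyms and hyponyms, can be sketched as below. A toy neighbor map stands in for WordNet lookups, and `expand_ratings` is an illustrative name, not the authors' code; the published procedure additionally checks structural coherence and validates against human judgment.

```python
def expand_ratings(ratings, neighbors):
    """Confer ratings on unrated words from rated synonyms/hyponyms.

    ratings:   word -> human imageability score (e.g. MRCPD-style entries)
    neighbors: word -> related words (synonyms/hyponyms, as a WordNet
               lookup would provide them)
    An unrated word receives the mean score of its rated neighbors;
    words with no rated neighbors stay unrated.
    """
    expanded = dict(ratings)
    for word, related in neighbors.items():
        if word in expanded:
            continue  # keep original human ratings untouched
        known = [ratings[w] for w in related if w in ratings]
        if known:
            expanded[word] = sum(known) / len(known)
    return expanded

seed = {"house": 5.9, "home": 5.5}           # toy seed ratings
related = {"dwelling": ["house", "home"],    # rated neighbors -> conferred
           "qualia": ["essence"]}            # no rated neighbors -> skipped
print(expand_ratings(seed, related))  # "dwelling" gets (5.9 + 5.5) / 2 = 5.7
```

Keeping the original human ratings frozen and only filling gaps is what lets the expanded lexicon be validated against the untouched seed set.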

A Multi-Cultural Repository of Automatically Discovered Linguistic and Conceptual Metaphors
Samira Shaikh | Tomek Strzalkowski | Ting Liu | George Aaron Broadwell | Boris Yamrom | Sarah Taylor | Laurie Feldman | Kit Cho | Umit Boz | Ignacio Cases | Yuliya Peshkova | Ching-Sheng Lin
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this article, we present details about our ongoing work towards building a repository of Linguistic and Conceptual Metaphors. This resource is being developed as part of our research effort into the large-scale detection of metaphors from unrestricted text. We have stored a large amount of automatically extracted metaphors in American English, Mexican Spanish, Russian and Iranian Farsi in a relational database, along with pertinent metadata associated with these metaphors. A substantial subset of the contents of our repository has been systematically validated via rigorous social science experiments. Using information stored in the repository, we are able to posit certain claims in a cross-cultural context about how peoples in these cultures (America, Mexico, Russia and Iran) view particular concepts related to Governance and Economic Inequality through the use of metaphor. Researchers in the field can use this resource as a reference of typical metaphors used across these cultures. In addition, it can be used to recognize metaphors of the same form or pattern, in other domains of research.

Computing Affect in Metaphors
Tomek Strzalkowski | Samira Shaikh | Kit Cho | George Aaron Broadwell | Laurie Feldman | Sarah Taylor | Boris Yamrom | Ting Liu | Ignacio Cases | Yuliya Peshkova | Kyle Elliot
Proceedings of the Second Workshop on Metaphor in NLP

Discovering Conceptual Metaphors using Source Domain Spaces
Samira Shaikh | Tomek Strzalkowski | Kit Cho | Ting Liu | George Aaron Broadwell | Laurie Feldman | Sarah Taylor | Boris Yamrom | Ching-Sheng Lin | Ning Sa | Ignacio Cases | Yuliya Peshkova | Kyle Elliot
Proceedings of the 4th Workshop on Cognitive Aspects of the Lexicon (CogALex)

2013

Robust Extraction of Metaphor from Novel Data
Tomek Strzalkowski | George Aaron Broadwell | Sarah Taylor | Laurie Feldman | Samira Shaikh | Ting Liu | Boris Yamrom | Kit Cho | Umit Boz | Ignacio Cases | Kyle Elliot
Proceedings of the First Workshop on Metaphor in NLP

Topical Positioning: A New Method for Predicting Opinion Changes in Conversation
Ching-Sheng Lin | Samira Shaikh | Jennifer Stromer-Galley | Jennifer Crowley | Tomek Strzalkowski | Veena Ravishankar
Proceedings of the Workshop on Language Analysis in Social Media

2012

Modeling Leadership and Influence in Multi-party Online Discourse
Tomek Strzalkowski | Samira Shaikh | Ting Liu | George Aaron Broadwell | Jenny Stromer-Galley | Sarah Taylor | Umit Boz | Veena Ravishankar | Xiaoai Ren
Proceedings of COLING 2012

Revealing Contentious Concepts Across Social Groups
Ching-Sheng Lin | Zumrut Akcam | Samira Shaikh | Sharon Small | Ken Stahl | Tomek Strzalkowski | Nick Webb
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

In this paper, a computational model based on concept polarity is proposed to investigate the influence of communications across diacultural groups. The hypothesis of this work is that there are communities or groups that can be characterized by a network of concepts and the corresponding valuations of those concepts agreed upon by the members of the community. We apply an existing research tool, ECO, to generate text representative of each community and create community-specific Valuation Concept Networks (VCNs). We then compare VCNs across the communities to find contentious concepts, which could subsequently be the focus of further exploration as points of contention between the two communities. A prototype, CPAM (Changing Positions, Altering Minds), was implemented as a proof of concept for this approach. The experiment was conducted using blog data from pro-Palestinian and pro-Israeli communities. A potential application of this method and future work are discussed as well.

Extending the MPC corpus to Chinese and Urdu - A Multiparty Multi-Lingual Chat Corpus for Modeling Social Phenomena in Language
Ting Liu | Samira Shaikh | Tomek Strzalkowski | Aaron Broadwell | Jennifer Stromer-Galley | Sarah Taylor | Umit Boz | Xiaoai Ren | Jingsi Wu
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

In this paper, we report our efforts in building a multi-lingual, multi-party online chat corpus in order to develop a firm understanding of a set of social constructs such as agenda control, influence, and leadership, as well as to computationally model such constructs in online interactions. These automated models will help capture the dialogue dynamics that are essential for developing, among others, realistic human-machine dialogue systems, including autonomous virtual chat agents. We first introduce our experiment design and data collection method for Chinese and Urdu, and then report on the current stage of our data collection. We annotated the collected corpus on four levels: communication links, dialogue acts, local topics, and meso-topics. Results from the analyses of annotated data in the different languages indicate some interesting phenomena, which are reported in this paper.

2010

VCA: An Experiment with a Multiparty Virtual Chat Agent
Samira Shaikh | Tomek Strzalkowski | Sarah Taylor | Nick Webb
Proceedings of the 2010 Workshop on Companionable Dialogue Systems

Modeling Socio-Cultural Phenomena in Discourse
Tomek Strzalkowski | George Aaron Broadwell | Jennifer Stromer-Galley | Samira Shaikh | Sarah Taylor | Nick Webb
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

MPC: A Multi-Party Chat Corpus for Modeling Social Phenomena in Discourse
Samira Shaikh | Tomek Strzalkowski | Aaron Broadwell | Jennifer Stromer-Galley | Sarah Taylor | Nick Webb
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this paper, we describe our experience with collecting and creating an annotated corpus of multi-party online conversations in a chat-room environment. This effort is part of a larger project to develop computational models of social phenomena such as agenda control, influence, and leadership in online interactions. Such models will help capture the dialogue dynamics that are essential for developing, among others, realistic human-machine dialogue systems, including autonomous virtual chat agents. We describe the data collection method used and the characteristics of the initial dataset of English chat. We have devised a multi-tiered collection process in which the subjects start from simple, free-flowing conversations and progress towards more complex and structured interactions. We report on the first two stages of this process, which were recently completed; the third, large-scale collection effort is currently being conducted. All English dialogue has been annotated at four levels: communication links, dialogue acts, local topics, and meso-topics. Some details of these annotations are discussed later in this paper, although a full description is impossible within the scope of this article.