Proceedings of the First Workshop on Narrative Understanding
David Bamman | Snigdha Chaturvedi | Elizabeth Clark | Madalina Fiterau | Mohit Iyyer
Towards Coherent and Cohesive Long-form Text Generation
Woon Sang Cho | Pengchuan Zhang | Yizhe Zhang | Xiujun Li | Michel Galley | Chris Brockett | Mengdi Wang | Jianfeng Gao
Generating coherent and cohesive long-form texts is a challenging task. Previous works relied on large amounts of human-generated texts to train neural language models, but few have attempted to explicitly improve neural language models from the perspectives of coherence and cohesion. In this work, we propose a new neural language model that is equipped with two neural discriminators which provide feedback signals at the sentence level (cohesion) and the paragraph level (coherence). Our model is trained using a simple yet efficient variant of policy gradient, called ‘negative-critical sequence training’, which eliminates the need to train a separate critic for estimating the ‘baseline’. Results demonstrate the effectiveness of our approach, showing improvements over a strong baseline: a recurrent attention-based bidirectional MLE-trained neural language model.
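The core of negative-critical sequence training is a policy-gradient update whose baseline is estimated from the sampled batch itself rather than from a learned critic. A minimal PyTorch sketch of that update follows; the reward tensors and batch size are illustrative stand-ins, not the authors' implementation.

```python
# Policy-gradient update in the spirit of 'negative-critical sequence
# training': the baseline is the batch mean of the discriminator rewards,
# so no separate critic network is trained. Rewards here are dummy values
# standing in for the sentence- and paragraph-level discriminator scores.
import torch

def negative_critical_loss(log_probs, rewards):
    """log_probs: (batch,) summed token log-probs of each sampled sequence.
    rewards: (batch,) coherence/cohesion scores from the discriminators."""
    baseline = rewards.mean()            # batch mean replaces a learned critic
    advantage = rewards - baseline       # 'negative-critical' advantage
    return -(advantage.detach() * log_probs).mean()

# usage with dummy tensors
log_probs = torch.randn(8, requires_grad=True)
rewards = torch.rand(8)
loss = negative_critical_loss(log_probs, rewards)
loss.backward()
```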
Character Identification Refined: A Proposal
Labiba Jahan | Mark Finlayson
Characters are a key element of narrative, and so character identification plays an important role in automatic narrative understanding. Unfortunately, most prior work that incorporates character identification is not built upon a clear, theoretically grounded concept of character. Such approaches either take character identification for granted (e.g., using simple heuristics on referring expressions) or rely on simplified definitions that fail to capture important distinctions between characters and other referents in the story. Prior approaches have also been rather complicated, relying, for example, on predefined case bases or ontologies. In this paper we propose a narratologically grounded definition of character for discussion at the workshop, and demonstrate a preliminary yet straightforward supervised machine learning model with a small set of features that performs well on two corpora. The more important of the two corpora is a set of 46 Russian folktales, on which the model achieves an F1 of 0.81. Error analysis suggests that features relevant to the plot will be necessary for further improvements in performance.
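To make the "small set of features" idea concrete, here is a toy scikit-learn sketch of a supervised referent classifier. The specific features (animacy, named-entity status, dialogue participation) and the tiny training set are hypothetical illustrations, not the authors' actual feature set or data.

```python
# Illustrative small-feature supervised classifier for character
# identification: each referent is a feature vector, and the model predicts
# whether it is a character. Features and labels are invented examples.
from sklearn.linear_model import LogisticRegression

# one row per referent: [is_animate, is_named_entity, speaks_dialogue]
X_train = [[1, 1, 1],   # "Baba Yaga"   -> character
           [1, 0, 1],   # "the soldier" -> character
           [0, 0, 0],   # "the hut"     -> not a character
           [1, 0, 0]]   # "a crow"      -> not a character (no plot role)
y_train = [1, 1, 0, 0]

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[1, 1, 0]]))  # classify a new referent
```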
Deep Natural Language Understanding of News Text
Jaya Shree | Emily Liu | Andrew Gordon | Jerry Hobbs
Early proposals for the deep understanding of natural language text advocated an approach of “interpretation as abduction,” where the meaning of a text was derived as an explanation that logically entailed the input words, given a knowledge base of lexical and commonsense axioms. While most subsequent NLP research has instead pursued statistical and data-driven methods, the approach of interpretation as abduction has seen steady advancements in both theory and software implementations. In this paper, we summarize advances in deriving the logical form of the text, encoding commonsense knowledge, and technologies for scalable abductive reasoning. We then explore the application of these advancements to the deep understanding of a paragraph of news text, where the subtle meanings of words and phrases are resolved by backward chaining on a knowledge base of 80 hand-authored axioms.
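The mechanism of backward chaining with assumption costs can be shown in a few lines. The following toy Python abducer is a heavily simplified illustration under invented axioms and costs; the paper's system uses 80 hand-authored axioms and a scalable abductive-reasoning engine.

```python
# Toy backward-chaining abducer illustrating 'interpretation as abduction':
# a goal is explained by chaining through axioms, and any literal with no
# axiom is assumed at a cost. Axioms and costs here are invented examples.
AXIOMS = {
    # head <- body: an event is reported if it occurred and was witnessed
    "report(E)": ["event(E)", "witness(E)"],
}
ASSUMPTION_COST = 1.0

def abduce(goal, assumed=None):
    """Return (assumptions, cost) explaining `goal` by backward chaining."""
    assumed = assumed if assumed is not None else []
    if goal in AXIOMS:                      # chain on an axiom
        cost = 0.0
        for subgoal in AXIOMS[goal]:
            assumed, sub_cost = abduce(subgoal, assumed)
            cost += sub_cost
        return assumed, cost
    return assumed + [goal], ASSUMPTION_COST  # no axiom: assume the literal

assumptions, cost = abduce("report(E)")
print(assumptions, cost)  # ['event(E)', 'witness(E)'] 2.0
```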
Extraction of Message Sequence Charts from Narrative History Text
Girish Palshikar | Sachin Pawar | Sangameshwar Patil | Swapnil Hingmire | Nitin Ramrakhiyani | Harsimran Bedi | Pushpak Bhattacharyya | Vasudeva Varma
In this paper, we advocate the use of Message Sequence Charts (MSCs) as a knowledge representation to capture and visualize multi-actor interactions and their temporal ordering. We propose algorithms to automatically extract an MSC from a history narrative. For a given narrative, we first identify verbs that indicate interactions and then use dependency parsing and Semantic Role Labelling-based approaches to identify senders (initiating actors) and receivers (other actors involved) for these interaction verbs. As a final step in MSC extraction, we employ a state-of-the-art algorithm to temporally re-order these interactions. Our evaluation on multiple publicly available narratives shows improvements over four baselines.
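A simplified sketch of the sender/receiver step is shown below, using spaCy dependency parsing: the subject of an interaction verb is taken as the sender and its object as the receiver. This dependency-only approximation is illustrative; the described system also uses Semantic Role Labelling and a separate temporal re-ordering step.

```python
# Extract (sender, verb, receiver) triples from narrative text using only
# dependency relations: nsubj -> sender, dobj/dative -> receiver. This is a
# simplified approximation of the extraction pipeline in the abstract.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_interactions(text):
    messages = []
    for token in nlp(text):
        if token.pos_ == "VERB":
            senders = [c.text for c in token.children if c.dep_ == "nsubj"]
            receivers = [c.text for c in token.children
                         if c.dep_ in ("dobj", "dative")]
            if senders and receivers:
                messages.append((senders[0], token.lemma_, receivers[0]))
    return messages

print(extract_interactions("Napoleon attacked Russia. Alexander sent envoys."))
# [('Napoleon', 'attack', 'Russia'), ('Alexander', 'send', 'envoys')]
```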
Unsupervised Hierarchical Story Infilling
Daphne Ippolito | David Grangier | Chris Callison-Burch | Douglas Eck
Story infilling involves predicting the words that fill a missing span in a story. This challenging task has the potential to transform interactive tools for creative writing. However, state-of-the-art conditional language models have trouble balancing fluency and coherence with novelty and diversity. We address this limitation with a hierarchical model that first selects a set of rare words and then generates text conditioned on that set. By relegating the high-entropy task of picking rare words to a word-sampling model, the second-stage model conditioned on those words can achieve high fluency and coherence by searching for likely sentences, without sacrificing diversity.
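The two-stage decomposition can be sketched in a few lines of Python: stage one samples rare "anchor" words with probability inversely proportional to corpus frequency, and stage two would condition a language model on them. The toy corpus and the stage-two stub below are illustrative assumptions, not the authors' models.

```python
# Hierarchical infilling sketch: stage one does high-entropy rare-word
# sampling; stage two (stubbed) would be a conditional LM searching for a
# fluent sentence containing the sampled anchors.
import random
from collections import Counter

corpus = ("the knight rode the dragon the knight found treasure "
          "the dragon guarded the treasure").split()
freq = Counter(corpus)

def sample_rare_words(k=2):
    words = list(freq)
    weights = [1.0 / freq[w] for w in words]  # rarer word -> higher weight
    return random.choices(words, weights=weights, k=k)

def fill_span(anchors):
    # stand-in for the second-stage conditional language model
    return "... " + " ".join(anchors) + " ..."

anchors = sample_rare_words()
print(anchors, "->", fill_span(anchors))
```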
Identifying Sensible Lexical Relations in Generated Stories
Melissa Roemmele
As with many text generation tasks, the focus of recent progress on story generation has been in producing texts that are perceived to “make sense” as a whole. There are few automated metrics that address this dimension of story quality even on a shallow lexical level. To initiate investigation into such metrics, we apply a simple approach to identifying word relations that contribute to the ‘narrative sense’ of a story. We use this approach to comparatively analyze the output of a few notable story generation systems in terms of these relations. We characterize differences in the distributions of relations according to their strength within each story.