Yllias Chali


2024

Transfer-Learning based on Extract, Paraphrase and Compress Models for Neural Abstractive Multi-Document Summarization
Yllias Chali | Elozino Egonmwan
Proceedings of the 17th International Natural Language Generation Conference

Recently, transfer learning via unsupervised pre-training and fine-tuning has shown great success on a number of tasks. The paucity of data for multi-document summarization (MDS), especially in the news domain, makes this approach practical. However, while the existing literature mostly formulates unsupervised learning objectives tailored to the summarization problem, we find that MDS can benefit directly from models pre-trained on other downstream supervised tasks such as sentence extraction, paraphrase generation, and sentence compression. We carry out experiments to demonstrate the impact of zero-shot transfer learning from these downstream tasks on MDS, since it is challenging to train end-to-end encoder-decoder models on MDS due to (i) the sheer length of the input documents and (ii) the paucity of training data. We hope this paper encourages more work on these downstream tasks as a means of mitigating the challenges in neural abstractive MDS.
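
To make the pipeline concrete, here is a minimal sketch of the extract, paraphrase, and compress stages described above. The three stage functions are trivial stand-ins (lead extraction and identity rewrites), not the paper's pre-trained models; in the paper, each stage is a model pre-trained on the corresponding supervised task and applied zero-shot.

```python
# Sketch of the extract -> paraphrase -> compress pipeline. Each stage
# below is a placeholder; the paper uses a model pre-trained on the
# corresponding supervised task, applied zero-shot to MDS.

def extract(documents: list[str], k: int) -> list[str]:
    # Stand-in: take the lead sentence of each document, up to k.
    return [d.split(". ")[0] for d in documents][:k]

def paraphrase(sentence: str) -> str:
    return sentence  # stand-in for a pre-trained paraphrase generator

def compress(sentence: str) -> str:
    return sentence  # stand-in for a pre-trained sentence-compression model

def summarize(documents: list[str], k: int = 5) -> str:
    # Extraction first shortens the very long multi-document input;
    # the rewrite stages then add abstractiveness without MDS training.
    salient = extract(documents, k)
    return " ".join(compress(paraphrase(s)) for s in salient)
```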

2020

Towards Generating Query to Perform Query Focused Abstractive Summarization using Pre-trained Model
Deen Mohammad Abdullah | Yllias Chali
Proceedings of the 13th International Conference on Natural Language Generation

Query Focused Abstractive Summarization (QFAS) produces an abstractive summary of a source document based on a given query. Many datasets are widely used to measure the performance of abstractive summarization; for QFAS tasks, however, only a limited number of datasets have been used, and they are comparatively small and provide single-sentence summaries. This paper presents a query generation approach, where we consider the most similar words between documents and summaries for generating queries. By implementing our query generation approach, we prepared two relatively large datasets, namely CNN/DailyMail and Newsroom, which contain multiple-sentence summaries and can be used for future QFAS tasks. We also implemented a pre-processing approach to perform QFAS tasks using a pre-trained language model, BERTSUM. In our pre-processing approach, we sort the sentences of each document from the most query-related to the least query-related, and then fine-tune the BERTSUM model to generate the abstractive summaries. We also experimented on Debatepedia, one of the most widely used datasets, to compare our QFAS approach with other models. The experimental results show that our approach outperforms the state-of-the-art models on three ROUGE scores.
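
A minimal sketch of the two steps the abstract describes, assuming a simple word-overlap notion of similarity (the stopword list and scoring below are illustrative, not the paper's exact procedure): build a query from words a document shares with its reference summary, then reorder sentences from most to least query-related.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "was"}  # toy list

def build_query(document: str, summary: str, max_terms: int = 5) -> list[str]:
    # Most frequent content words of the document that also appear in
    # its reference summary serve as the generated query.
    doc_words = [w.lower() for w in document.split() if w.lower() not in STOPWORDS]
    summary_words = {w.lower() for w in summary.split()}
    shared = Counter(w for w in doc_words if w in summary_words)
    return [w for w, _ in shared.most_common(max_terms)]

def sort_by_query_relevance(sentences: list[str], query: list[str]) -> list[str]:
    # Pre-processing step: most query-related sentences come first.
    def relevance(sentence: str) -> int:
        words = {w.lower() for w in sentence.split()}
        return sum(q in words for q in query)
    return sorted(sentences, key=relevance, reverse=True)
```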

2019

Transformer-based Model for Single Documents Neural Summarization
Elozino Egonmwan | Yllias Chali
Proceedings of the 3rd Workshop on Neural Generation and Translation

We propose a system that improves performance on the single-document summarization task using the CNN/DailyMail and Newsroom datasets. It follows the popular encoder-decoder paradigm, but with an extra focus on the encoder. The intuition is that the probability of correctly decoding a piece of information depends significantly on the pattern and correctness of its encoding. Hence we introduce encode-encode-decode: a framework that encodes the source text first with a transformer and then with a sequence-to-sequence (seq2seq) model. We find that the transformer and seq2seq model complement each other well, making for a richer encoded vector representation. We also find that paying more attention to the vocabulary of target words during abstraction improves performance. We test our hypothesis and framework on the tasks of extractive and abstractive single-document summarization, evaluating on the standard CNN/DailyMail dataset and the recently released Newsroom dataset.

Transformer and seq2seq model for Paraphrase Generation
Elozino Egonmwan | Yllias Chali
Proceedings of the 3rd Workshop on Neural Generation and Translation

Paraphrase generation aims to improve the clarity of a sentence by using different wording that conveys similar meaning. To improve the quality of generated paraphrases, we propose a framework that combines the strengths of two models: the transformer and the sequence-to-sequence (seq2seq) model. We design a two-layer stack of encoders. The first layer is a transformer model containing 6 stacked identical layers with multi-head self-attention, while the second layer is a seq2seq model with gated recurrent units (GRU-RNN). The transformer encoder layer learns to capture long-term dependencies, together with syntactic and semantic properties of the input sentence. This rich vector representation learned by the transformer serves as input to the GRU-RNN encoder, which is responsible for producing the state vector for decoding. Experimental results on two datasets, QUORA and MSCOCO, show that our framework sets a new benchmark for paraphrase generation.
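
A minimal PyTorch sketch of this two-layer encoder stack (the same architecture underlies the companion summarization paper above). Hidden sizes and other hyperparameters are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class TransformerGRUEncoder(nn.Module):
    """Transformer encoder (6 layers, multi-head self-attention)
    followed by a GRU encoder that produces the decoder's state vector."""

    def __init__(self, vocab_size: int, d_model: int = 512, nhead: int = 8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=6)
        self.gru = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # First encoder: long-term dependencies plus syntactic and
        # semantic properties of the input sentence.
        rich = self.transformer(self.embed(tokens))
        # Second encoder: compresses the rich representation into the
        # state vector used to initialize decoding.
        _, state = self.gru(rich)
        return state  # (1, batch, d_model)
```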

2018

Abstractive Unsupervised Multi-Document Summarization using Paraphrastic Sentence Fusion
Mir Tafseer Nayeem | Tanvir Ahmed Fuad | Yllias Chali
Proceedings of the 27th International Conference on Computational Linguistics

In this work, we aim at developing an unsupervised abstractive summarization system in the multi-document setting. We design a paraphrastic sentence fusion model which jointly performs sentence fusion and paraphrasing using a skip-gram word embedding model at the sentence level. Our model improves information coverage and, at the same time, the abstractiveness of the generated sentences. We conduct our experiments on human-generated multi-sentence compression datasets and evaluate our system on several newly proposed Machine Translation (MT) evaluation metrics. Furthermore, we apply our sentence-level model to implement an abstractive multi-document summarization system, where documents usually contain a related set of sentences. We also propose an optimal solution for the classical summary length limit problem, which was not addressed in past research. For the document-level summary, we conduct experiments on datasets from two different domains (news articles and user reviews) which are well suited for multi-document abstractive summarization. Our experiments demonstrate that the methods bring significant improvements over the state-of-the-art methods.
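
As a rough illustration of using a skip-gram model at the sentence level, the sketch below averages skip-gram word vectors into sentence vectors and pairs up sentences similar enough to be fused; the embedding table and the 0.5 threshold are stand-ins, not the paper's actual model or settings.

```python
import numpy as np

def sentence_vector(sentence: str, embeddings: dict[str, np.ndarray],
                    dim: int = 100) -> np.ndarray:
    # Average of skip-gram (word2vec) vectors for the words we know.
    vecs = [embeddings[w] for w in sentence.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def fusion_candidates(sentences: list[str], embeddings: dict[str, np.ndarray],
                      threshold: float = 0.5) -> list[tuple[int, int]]:
    # Pairs of sentences whose vectors are cosine-similar enough to fuse.
    vecs = [sentence_vector(s, embeddings) for s in sentences]
    pairs = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            denom = np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])
            if denom and np.dot(vecs[i], vecs[j]) / denom >= threshold:
                pairs.append((i, j))
    return pairs
```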

Automatic Opinion Question Generation
Yllias Chali | Tina Baghaee
Proceedings of the 11th International Conference on Natural Language Generation

We study the problem of opinion question generation from sentences with the help of community-based question answering systems. For this purpose, we use a sequence-to-sequence attentional model, and we adopt a coverage mechanism to prevent the generated sentences from repeating themselves. Experimental results on the Amazon question/answer dataset show an improvement in automatic evaluation metrics as well as human evaluations over the state-of-the-art question generation systems.
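
A minimal sketch of one common coverage formulation (in the style of See et al., 2017; the abstract does not specify the exact variant): a coverage vector accumulates past attention, and the model is penalized for re-attending to the same source positions, which discourages repetition.

```python
import torch

def coverage_loss(attn_steps: list[torch.Tensor]) -> torch.Tensor:
    """attn_steps: attention over the source at each decoder step,
    each of shape (batch, src_len)."""
    coverage = torch.zeros_like(attn_steps[0])
    loss = torch.zeros(attn_steps[0].size(0))
    for attn in attn_steps:
        # Overlap between the current attention and accumulated coverage
        # is penalized, discouraging the decoder from repeating itself.
        loss = loss + torch.min(attn, coverage).sum(dim=1)
        coverage = coverage + attn
    return loss.mean()
```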

2017

Towards Abstractive Multi-Document Summarization Using Submodular Function-Based Framework, Sentence Compression and Merging
Yllias Chali | Moin Tanvee | Mir Tafseer Nayeem
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

We propose a submodular function-based summarization system which integrates three important measures, namely importance, coverage, and non-redundancy, to detect the important sentences for the summary. We design monotone and submodular functions which allow us to apply an efficient and scalable greedy algorithm to obtain informative and well-covered summaries. In addition, we integrate two abstraction-based methods, namely sentence compression and merging, for generating an abstractive sentence set. We design our summarization models for both generic and query-focused summarization. Experimental results on the DUC-2004 and DUC-2007 datasets show that our generic and query-focused summarizers outperform the state-of-the-art summarization systems in terms of ROUGE-1 and ROUGE-2 recall and F-measure.
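
The greedy selection the abstract relies on can be sketched in a few lines; the scoring function is left abstract here, standing in for the paper's importance, coverage, and non-redundancy measures. For a monotone submodular objective under a cardinality budget, this greedy procedure carries the classic (1 - 1/e) approximation guarantee.

```python
from typing import Callable

def greedy_summary(sentences: list[str],
                   score: Callable[[list[str]], float],
                   budget: int) -> list[str]:
    # Greedy maximization of a monotone submodular `score` under a
    # sentence-count budget.
    summary: list[str] = []
    remaining = list(sentences)
    while remaining and len(summary) < budget:
        # Pick the sentence with the largest marginal gain.
        best = max(remaining, key=lambda s: score(summary + [s]) - score(summary))
        if score(summary + [best]) - score(summary) <= 0:
            break  # no remaining sentence adds value
        summary.append(best)
        remaining.remove(best)
    return summary
```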

Extract with Order for Coherent Multi-Document Summarization
Mir Tafseer Nayeem | Yllias Chali
Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing

In this work, we aim at developing an extractive summarizer in the multi-document setting. We implement rank-based sentence selection using continuous vector representations along with key-phrases. Furthermore, we propose a model to tackle summary coherence for increased readability. We conduct experiments on the Document Understanding Conference (DUC) 2004 datasets using the ROUGE toolkit. Our experiments demonstrate that the methods bring significant improvements over the state-of-the-art methods in terms of informativeness and coherence.
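
One plausible reading of the coherence step, sketched below as an assumption rather than the authors' exact method: after rank-based selection, greedily chain sentences so that each next sentence is the one most similar (by cosine over sentence vectors) to the previous one.

```python
import numpy as np

def coherent_order(vectors: list[np.ndarray]) -> list[int]:
    # Greedy chaining: start from the top-ranked sentence and repeatedly
    # append the remaining sentence most similar to the last one.
    order = [0]
    remaining = set(range(1, len(vectors)))
    while remaining:
        last = vectors[order[-1]]
        nxt = max(remaining, key=lambda i: float(
            np.dot(last, vectors[i]) /
            (np.linalg.norm(last) * np.linalg.norm(vectors[i]) + 1e-9)))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```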

2016

Ranking Automatically Generated Questions Using Common Human Queries
Yllias Chali | Sina Golestanirad
Proceedings of the 9th International Natural Language Generation conference

2015

Towards Topic-to-Question Generation
Yllias Chali | Sadid A. Hasan
Computational Linguistics, Volume 41, Issue 1 - March 2015

2014

Fear the REAPER: A System for Automatic Multi-Document Summarization with Reinforcement Learning
Cody Rioux | Sadid A. Hasan | Yllias Chali
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

Using POMDPs for Topic-Focused Multi-Document Summarization (L’utilisation des POMDP pour les résumés multi-documents orientés par une thématique) [in French]
Yllias Chali | Sadid A. Hasan | Mustapha Mojahid
Proceedings of TALN 2013 (Volume 1: Long Papers)

On the Effectiveness of Using Syntactic and Shallow Semantic Tree Kernels for Automatic Assessment of Essays
Yllias Chali | Sadid A. Hasan
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Automatically Assessing Free Texts
Yllias Chali | Sadid A. Hasan
Proceedings of the Workshop on Speech and Language Processing Tools in Education

Simple or Complex? Classifying Questions by Answering Complexity
Yllias Chali | Sadid A. Hasan
Proceedings of the Workshop on Question Answering for Complex Domains

On the Effectiveness of using Sentence Compression Models for Query-Focused Multi-Document Summarization
Yllias Chali | Sadid A. Hasan
Proceedings of COLING 2012

Towards Automatic Topical Question Generation
Yllias Chali | Sadid A. Hasan
Proceedings of COLING 2012

2011

Using Syntactic and Shallow Semantic Kernels to Improve Multi-Modality Manifold-Ranking for Topic-Focused Multi-Document Summarization
Yllias Chali | Sadid A. Hasan | Kaisar Imam
Proceedings of 5th International Joint Conference on Natural Language Processing

2010

Automatic Question Generation from Sentences
Husam Ali | Yllias Chali | Sadid A. Hasan
Actes de la 17e conférence sur le Traitement Automatique des Langues Naturelles. Articles courts

Question Generation (QG) and Question Answering (QA) are among the many challenges for natural language understanding and interfaces. Since humans need to ask good questions, automated QG systems may assist them in meeting useful inquiry needs. In this paper, we consider an automatic sentence-to-question generation task, where, given a sentence, the QG system generates a set of questions for which the sentence contains, implies, or needs answers. To facilitate the question generation task, we build elementary sentences from the input complex sentences using a syntactic parser. A named entity recognizer and a part-of-speech tagger are applied to each of these sentences to encode the necessary information. We classify the sentences based on their subject, verb, object, and preposition to determine the possible types of questions to be generated. We use the TREC-2007 (Question Answering Track) dataset for our experiments and evaluation.
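
A minimal sketch of the classification-then-generation idea: given an elementary sentence already decomposed into subject, verb, and object (the syntactic parser, POS tagger, and NER system are assumed upstream), the named-entity tag of the target constituent selects the wh-word. The field names and tag set here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ElementarySentence:
    subject: str
    verb: str
    obj: str
    subject_ne: str  # e.g. "PERSON", "LOCATION", "DATE", or ""

WH_BY_NE = {"PERSON": "Who", "LOCATION": "Where", "DATE": "When"}

def generate_question(s: ElementarySentence) -> str:
    # The subject's named-entity tag determines the question type.
    wh = WH_BY_NE.get(s.subject_ne, "What")
    return f"{wh} {s.verb} {s.obj}?"

# ElementarySentence("Marie Curie", "discovered", "radium", "PERSON")
# -> "Who discovered radium?"
```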

2009

Do Automatic Annotation Techniques Have Any Impact on Supervised Complex Question Answering?
Yllias Chali | Sadid Hasan | Shafiq Joty
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

Improving the Performance of the Random Walk Model for Answering Complex Questions
Yllias Chali | Shafiq Joty
Proceedings of ACL-08: HLT, Short Papers

Selecting Sentences for Answering Complex Questions
Yllias Chali | Shafiq Joty
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

UofL: Word Sense Disambiguation Using Lexical Cohesion
Yllias Chali | Shafiq R. Joty
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2005

Document Clustering with Grouping and Chaining Algorithms
Yllias Chali | Soufiane Noureddine
Second International Joint Conference on Natural Language Processing: Full Papers

2002

Experiments in Topic Detection
Yllias Chali
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)