Alex Aussem


2023

Non-Parametric Memory Guidance for Multi-Document Summarization
Florian Baud | Alex Aussem
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing

Multi-document summarization (MDS) is a challenging task in Natural Language Processing that aims to summarize information spread across several documents. However, the source documents alone are often insufficient to produce a high-quality summary. We propose a retriever-guided model combined with a non-parametric memory for summary generation. The model retrieves relevant candidates from a database and then generates the summary conditioned on both the candidates, via a copy mechanism, and the source documents. The retriever is implemented with Approximate Nearest Neighbor (ANN) search so that it scales to large databases. Our method is evaluated on the Multi-XScience dataset, which consists of scientific articles. Finally, we discuss our results and possible directions for future work.
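A minimal sketch of the retrieval step described in the abstract, assuming FAISS as the ANN library; the embedding dimension, database size, and random vectors are illustrative stand-ins for the paper's learned encoder, and the generator itself is not reproduced here.

```python
import numpy as np
import faiss

d = 128          # embedding dimension (illustrative)
n_db = 10_000    # candidate summaries stored in the non-parametric memory

rng = np.random.default_rng(0)
db_embeddings = rng.standard_normal((n_db, d)).astype("float32")

# HNSW index: approximate nearest-neighbor search that scales to large databases.
index = faiss.IndexHNSWFlat(d, 32)
index.add(db_embeddings)

# A random query standing in for the encoded source documents; retrieve the
# k most similar candidates to condition the generator on.
query = rng.standard_normal((1, d)).astype("float32")
distances, candidate_ids = index.search(query, k=5)
print(candidate_ids)  # indices of the retrieved candidates passed to the generator
```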

2020

End-to-End Extraction of Structured Information from Business Documents with Pointer-Generator Networks
Clément Sage | Alex Aussem | Véronique Eglin | Haytham Elghazel | Jérémy Espinas
Proceedings of the Fourth Workshop on Structured Prediction for NLP

The predominant approaches for extracting key information from documents resort to classifiers that predict the information type of each word. However, the word-level ground truth used for learning is expensive to obtain, since it is not naturally produced by the extraction task. In this paper, we discuss a new method for training extraction models directly from the textual value of the information. The extracted information of a document is represented as a sequence of tokens in XML. We learn to output this representation with a pointer-generator network that alternately copies the document words carrying information and generates the XML tags delimiting the types of information. The ability of our end-to-end method to retrieve structured information is assessed on a large set of business documents. We show that it performs competitively with a standard word classifier without requiring costly word-level supervision.
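A minimal numpy sketch of the pointer-generator output step described in the abstract: the final distribution mixes a generation distribution over a tag vocabulary (the XML delimiters) with a copy distribution over the source-document words, weighted by a soft switch p_gen. The tag names, tokens, scores, and switch value are hypothetical, not the paper's trained model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

tag_vocab = ["<total>", "</total>", "<date>", "</date>"]    # hypothetical XML tags
source_words = ["Invoice", "2020-01-15", "total", "42.00"]  # document tokens

vocab_logits = np.array([0.2, -1.0, 0.1, -0.5])       # decoder scores over tags
attention = softmax(np.array([0.1, 0.3, 0.2, 2.0]))   # attention over source words
p_gen = 0.3  # probability of generating a tag rather than copying a word

# Mix the two distributions into one extended distribution over tags + words;
# a token appearing in both (or repeated in the source) accumulates its mass.
extended = {}
for tok, p in zip(tag_vocab, p_gen * softmax(vocab_logits)):
    extended[tok] = extended.get(tok, 0.0) + p
for tok, p in zip(source_words, (1.0 - p_gen) * attention):
    extended[tok] = extended.get(tok, 0.0) + p

print(max(extended, key=extended.get))  # most likely next output token
```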