2023
mReFinED: An Efficient End-to-End Multilingual Entity Linking System
Peerat Limkonchotiwat | Weiwei Cheng | Christos Christodoulopoulos | Amir Saffari | Jens Lehmann
Findings of the Association for Computational Linguistics: EMNLP 2023
End-to-end multilingual entity linking (MEL) is concerned with identifying multilingual entity mentions and their corresponding entity IDs in a knowledge base. Existing works assumed that entity mentions were given and skipped the entity mention detection step due to a lack of high-quality multilingual training corpora. To overcome this limitation, we propose mReFinED, the first end-to-end multilingual entity linking system. Additionally, we propose a bootstrapping mention detection framework that enhances the quality of training corpora. Our experimental results demonstrated that mReFinED outperformed the best existing work in the end-to-end MEL task while being 44 times faster.
2022
Product Answer Generation from Heterogeneous Sources: A New Benchmark and Best Practices
Xiaoyu Shen | Gianni Barlacchi | Marco Del Tredici | Weiwei Cheng | Bill Byrne | Adrià Gispert
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
It is of great value to answer product questions based on heterogeneous information sources available on web product pages, e.g., semi-structured attributes, text descriptions, user-provided contents, etc. However, these sources have different structures and writing styles, which poses challenges for (1) evidence ranking, (2) source selection, and (3) answer generation. In this paper, we build a benchmark with annotations for both evidence selection and answer generation covering 6 information sources. Based on this benchmark, we conduct a comprehensive study and present a set of best practices. We show that all sources are important and contribute to answering questions. Handling all sources within a single model can produce comparable confidence scores across sources, and combining multiple sources for training always helps, even for sources with totally different structures. We further propose a novel data augmentation method to iteratively create training samples for answer generation, which achieves close-to-human performance with only a few thousand annotations. Finally, we perform an in-depth error analysis of model predictions and highlight the challenges for future research.
semiPQA: A Study on Product Question Answering over Semi-structured Data
Xiaoyu Shen | Gianni Barlacchi | Marco Del Tredici | Weiwei Cheng | Adrià Gispert
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
Product question answering (PQA) aims to automatically address customer questions to improve their online shopping experience. Current research mainly focuses on finding answers from either unstructured text, like product descriptions and user reviews, or structured knowledge bases with pre-defined schemas. Apart from the above two sources, a lot of product information is represented in a semi-structured way, e.g., key-value pairs, lists, tables, JSON and XML files, etc. These semi-structured data can be a valuable answer source since they are better organized than free text, while being easier to construct than structured knowledge bases. However, little attention has been paid to them. To fill this gap, we study how to effectively incorporate semi-structured answer sources for PQA and focus on presenting answers in a natural, fluent sentence. To this end, we present semiPQA: a dataset to benchmark PQA over semi-structured data. It contains 11,243 written questions about JSON-formatted data covering 320 unique attribute types. Each data point is paired with manually-annotated text that describes its contents, so that we can train a neural answer presenter to present the data in a natural way. We provide baseline results and a deep analysis on the successes and challenges of leveraging semi-structured data for PQA. In general, state-of-the-art neural models can perform remarkably well when dealing with seen attribute types. For unseen attribute types, however, a noticeable drop is observed for both answer presentation and attribute ranking.
2018
Multiplicative Tree-Structured Long Short-Term Memory Networks for Semantic Representations
Nam Khanh Tran | Weiwei Cheng
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
Tree-structured LSTMs have shown advantages in learning semantic representations by exploiting syntactic information. Most existing methods model tree structures by bottom-up combinations of constituent nodes using the same shared compositional function and often making use of input word information only. The inability to capture the richness of compositionality makes these models lack expressive power. In this paper, we propose multiplicative tree-structured LSTMs to tackle this problem. Our model makes use of not only word information but also relation information between words. It is more expressive, as different combination functions can be used for each child node. In addition to syntactic trees, we also investigate the use of Abstract Meaning Representation in tree-structured models, in order to incorporate both syntactic and semantic information from the sentence. Experimental results on common NLP tasks show the proposed models lead to better sentence representation and AMR brings benefits in complex tasks.
2017
Salience Rank: Efficient Keyphrase Extraction with Topic Modeling
Nedelina Teneva | Weiwei Cheng
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Topical PageRank (TPR) uses the latent topic distribution inferred by Latent Dirichlet Allocation (LDA) to rank noun phrases extracted from documents. The ranking procedure consists of running PageRank K times, where K is the number of topics used in the LDA model. In this paper, we propose a modification of TPR, called Salience Rank. Salience Rank only needs to run PageRank once and extracts comparable or better keyphrases on benchmark datasets. In addition to quality and efficiency benefits, our method has the flexibility to extract keyphrases with varying tradeoffs between topic specificity and corpus specificity.
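The key idea in the abstract — folding topic information into a single teleport (personalization) vector so PageRank runs once instead of K times — can be illustrated with a minimal sketch. This is not the authors' implementation; the graph, the salience scores, and all names (`pagerank`, `graph`, `salience`) are hypothetical placeholders for a word co-occurrence graph and a per-word salience distribution.

```python
def pagerank(graph, salience, damping=0.85, iters=50):
    """Single-run personalized PageRank.

    graph: dict mapping each word to its list of out-neighbours
           (e.g. from co-occurrence windows); every node has out-links.
    salience: dict mapping each word to a non-negative salience score,
              used as the teleport distribution (normalised below).
    """
    nodes = list(graph)
    total = sum(salience[n] for n in nodes)
    teleport = {n: salience[n] / total for n in nodes}  # normalised teleport vector
    rank = {n: 1.0 / len(nodes) for n in nodes}         # uniform start
    for _ in range(iters):
        new = {}
        for n in nodes:
            # rank mass received from in-neighbours, split by their out-degree
            incoming = sum(rank[m] / len(graph[m]) for m in nodes if n in graph[m])
            new[n] = (1 - damping) * teleport[n] + damping * incoming
        rank = new
    return rank

# toy word graph and made-up salience scores, purely for illustration
g = {"neural": ["network", "model"], "network": ["neural"], "model": ["neural"]}
s = {"neural": 0.6, "network": 0.3, "model": 0.1}
scores = pagerank(g, s)
```

With a single salience vector, words with higher salience attract more rank in one PageRank run, whereas TPR would repeat the run with one topic-specific teleport vector per LDA topic.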
2012
Scaling up WSD with Automatically Generated Examples
Weiwei Cheng | Judita Preiss | Mark Stevenson
BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing
2010
Demonstration of a Prototype for a Conversational Companion for Reminiscing about Images
Yorick Wilks | Roberta Catizone | Alexiei Dingli | Weiwei Cheng
Proceedings of the ACL 2010 System Demonstrations