Gaole He
2021
TextBox: A Unified, Modularized, and Extensible Framework for Text Generation
Junyi Li | Tianyi Tang | Gaole He | Jinhao Jiang | Xiaoxuan Hu | Puzhao Xie | Zhipeng Chen | Zhuohao Yu | Wayne Xin Zhao | Ji-Rong Wen
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
In this paper, we release an open-source library, called TextBox, that provides a unified, modularized, and extensible text generation framework. TextBox aims to support a broad set of text generation tasks and models. In our library, we implement 21 text generation models on 9 benchmark datasets, covering the categories of VAE, GAN, and pretrained language models. Meanwhile, our library maintains sufficient modularity and extensibility by properly decomposing the model architecture, inference, and learning process into highly reusable modules, which allows users to easily incorporate new models into our framework. These features make TextBox especially suitable for researchers and practitioners who want to quickly reproduce baseline models and develop new ones. TextBox is implemented in PyTorch and released under the Apache License 2.0 at https://github.com/RUCAIBox/TextBox.
A Pretraining Numerical Reasoning Model for Ordinal Constrained Question Answering on Knowledge Base
Yu Feng | Jing Zhang | Gaole He | Wayne Xin Zhao | Lemao Liu | Quan Liu | Cuiping Li | Hong Chen
Findings of the Association for Computational Linguistics: EMNLP 2021
Knowledge Base Question Answering (KBQA) aims to answer natural language questions posed over knowledge bases (KBs). This paper aims to empower IR-based KBQA models with the ability to perform numerical reasoning for answering ordinal-constrained questions. A major challenge is the lack of explicit annotations of numerical properties. To address this challenge, we propose a pretraining numerical reasoning model consisting of NumGNN and NumTransformer, guided by explicit self-supervision signals. The two modules are pretrained to encode the magnitude and ordinal properties of numbers, respectively, and can serve as model-agnostic plugins for any IR-based KBQA model to enhance its numerical reasoning ability. Extensive experiments on two KBQA benchmarks verify that our method effectively enhances the numerical reasoning ability of IR-based KBQA models.
Co-authors
- Wayne Xin Zhao 2
- Junyi Li 1
- Tianyi Tang 1
- Jinhao Jiang 1
- Xiaoxuan Hu 1