Hang Li


2024

MLeVLM: Improve Multi-level Progressive Capabilities based on Multimodal Large Language Model for Medical Visual Question Answering
Dexuan Xu | Yanyuan Chen | Jieyi Wang | Yue Huang | Hanpin Wang | Zhi Jin | Hongxing Wang | Weihua Yue | Jing He | Hang Li | Yu Huang
Findings of the Association for Computational Linguistics: ACL 2024

Medical visual question answering (MVQA) requires in-depth understanding of medical images and questions to provide reliable answers. We summarize the multi-level progressive capabilities that models need in MVQA: recognition, details, diagnosis, knowledge, and reasoning. Existing MVQA models tend to neglect these capabilities because of unspecific data and plain architectures. To address these issues, this paper proposes the Multi-level Visual Language Model (MLeVLM) for MVQA. On the data side, we construct a high-quality multi-level instruction dataset, MLe-VQA, via GPT-4, which covers multi-level questions and answers as well as reasoning processes from visual clues to semantic cognition. On the architecture side, we propose a multi-level feature alignment module, including an attention-based token selector and a context merger, which can efficiently align features at different levels from visual to semantic. To better evaluate the model’s capabilities, we manually construct a multi-level MVQA evaluation benchmark named MLe-Bench. Extensive experiments demonstrate the effectiveness of the constructed multi-level instruction dataset and the multi-level feature alignment module, and show that MLeVLM outperforms existing medical multimodal large language models.

ReFT: Reasoning with Reinforced Fine-Tuning
Luong Trung | Xinbo Zhang | Zhanming Jie | Peng Sun | Xiaoran Jin | Hang Li
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

One way to enhance the reasoning capability of Large Language Models (LLMs) is to conduct Supervised Fine-Tuning (SFT) using Chain-of-Thought (CoT) annotations. This approach does not show sufficiently strong generalization ability, however, because the training relies only on the given CoT data. In math problem-solving, for example, there is usually only one annotated reasoning path for each question in the training data. Intuitively, it would be better for the algorithm to learn from multiple annotated reasoning paths given a question. To address this issue, we propose a simple yet effective approach called Reinforced Fine-Tuning (ReFT) to enhance the generalizability of learning LLMs for reasoning, with math problem-solving as an example. ReFT first warms up the model with SFT and then employs online reinforcement learning, specifically the PPO algorithm in this paper, to further fine-tune the model, where an abundance of reasoning paths are automatically sampled given the question and the rewards are naturally derived from the ground-truth answers. Extensive experiments on the GSM8K, MathQA, and SVAMP datasets show that ReFT significantly outperforms SFT, and the performance can potentially be further boosted by combining inference-time strategies such as majority voting and re-ranking. Note that ReFT obtains the improvement by learning from the same training questions as SFT, without relying on extra or augmented training questions. This indicates a superior generalization ability for ReFT.
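
As a rough illustration of where the reward signal in such a setup can come from, the hedged Python sketch below compares the final answer extracted from a sampled reasoning path against the ground-truth answer; the answer-extraction pattern and the 0/1 reward values are illustrative assumptions, not the paper's exact specification.

    import re

    def extract_final_answer(cot):
        # Assume the sampled chain-of-thought ends with a phrase like "The answer is 14".
        match = re.search(r"answer is\s*(-?\d+(?:\.\d+)?)", cot, flags=re.IGNORECASE)
        return match.group(1) if match else None

    def reward(sampled_cot, gold_answer):
        # Terminal reward for one sampled reasoning path, derived from the gold answer.
        pred = extract_final_answer(sampled_cot)
        if pred is None:
            return 0.0
        return 1.0 if pred == gold_answer else 0.0

    paths = [
        "3 + 4 = 7, and 7 * 2 = 14. The answer is 14",
        "3 + 4 = 8, so 8 * 2 = 16. The answer is 16",
    ]
    print([reward(p, "14") for p in paths])  # [1.0, 0.0] -- the signal the RL step optimizes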

2023

Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks
Xinsong Zhang | Yan Zeng | Jipeng Zhang | Hang Li
Findings of the Association for Computational Linguistics: EMNLP 2023

Foundation models or pre-trained models have substantially improved the performance of various language, vision, and vision-language understanding tasks. However, existing foundation models can only perform best on one type of task, namely language, vision, or vision-language. It remains an open question whether it is possible to construct a general foundation model that performs best on all understanding tasks. In this paper, we propose a new method for training such a general foundation model, X-FM (the X-Foundation Model). X-FM has one language encoder, one vision encoder, and one fusion encoder, as well as a new training method. The training method includes two new techniques for learning X-FM from text, image, and image-text pair data. One is to stop gradients from the vision-language training when learning the language encoder. The other is to leverage the vision-language training to guide the learning of the vision encoder. Extensive experiments on benchmark datasets show that X-FM can significantly outperform existing general foundation models and perform better than, or comparably to, existing foundation models specifically designed for language, vision, or vision-language understanding. Code and pre-trained models are released at https://github.com/zhangxinsong-nlp/XFM.
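
A minimal sketch of the stop-gradient technique mentioned above, assuming a PyTorch-style setup with toy stand-ins for the three encoders: the text features are detached before entering the fusion encoder, so the vision-language loss updates the vision and fusion encoders but not the language encoder.

    import torch
    import torch.nn as nn

    class ToyXFM(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.language_encoder = nn.Linear(dim, dim)    # stand-in for the text encoder
            self.vision_encoder = nn.Linear(dim, dim)      # stand-in for the image encoder
            self.fusion_encoder = nn.Linear(2 * dim, dim)  # stand-in for the fusion encoder

        def forward(self, text_feats, image_feats):
            t = self.language_encoder(text_feats)
            v = self.vision_encoder(image_feats)
            # Stop gradients on the text branch for the vision-language objective.
            fused = self.fusion_encoder(torch.cat([t.detach(), v], dim=-1))
            return fused

    model = ToyXFM()
    loss = model(torch.randn(4, 32), torch.randn(4, 32)).sum()
    loss.backward()
    print(model.language_encoder.weight.grad)        # None: untouched by this loss
    print(model.vision_encoder.weight.grad is None)   # False: still trained by it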

2022

Text-to-Table: A New Way of Information Extraction
Xueqing Wu | Jiacheng Zhang | Hang Li
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We study a new problem setting of information extraction (IE), referred to as text-to-table. In text-to-table, given a text, one creates a table or several tables expressing the main content of the text, while the model is learned from text-table pair data. The problem setting differs from those of the existing methods for IE. First, the extraction can be carried out from long texts to large tables with complex structures. Second, the extraction is entirely data-driven, and there is no need to explicitly define the schemas. As far as we know, there has been no previous work that studies the problem. In this work, we formalize text-to-table as a sequence-to-sequence (seq2seq) problem. We first employ a seq2seq model fine-tuned from a pre-trained language model to perform the task. We also develop a new method within the seq2seq approach, exploiting two additional techniques in table generation: table constraint and table relation embeddings. We consider text-to-table as an inverse problem of the well-studied table-to-text, and make use of four existing table-to-text datasets in our experiments on text-to-table. Experimental results show that the vanilla seq2seq model can outperform the baseline methods of using relation extraction and named entity extraction. The results also show that our method can further boost the performance of the vanilla seq2seq model. We further discuss the main challenges of the proposed task. The code and data are available at https://github.com/shirley-wu/text_to_table.
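
To make the seq2seq formulation concrete, here is a hedged sketch of how a target table can be linearized into a token sequence for generation and parsed back afterwards; the separator tokens are illustrative assumptions rather than the exact format used in the paper.

    def linearize_table(header, rows, cell_sep=" | ", row_sep=" <NEWLINE> "):
        # Turn a table into the flat target string a seq2seq model is trained to emit.
        lines = [cell_sep.join(header)]
        lines += [cell_sep.join(str(c) for c in row) for row in rows]
        return row_sep.join(lines)

    def delinearize_table(seq, cell_sep=" | ", row_sep=" <NEWLINE> "):
        # Recover the table structure from the generated sequence.
        return [line.split(cell_sep) for line in seq.split(row_sep)]

    target = linearize_table(["Team", "Wins"], [["Lakers", 52], ["Bulls", 47]])
    print(target)
    print(delinearize_table(target))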

A Neural-Symbolic Approach to Natural Language Understanding
Zhixuan Liu | Zihao Wang | Yuan Lin | Hang Li
Findings of the Association for Computational Linguistics: EMNLP 2022

Deep neural networks, empowered by pre-trained language models, have achieved remarkable results in natural language understanding (NLU) tasks. However, their performance can drastically deteriorate when logical reasoning is needed. This is because NLU in principle depends on not only analogical reasoning, which deep neural networks are good at, but also logical reasoning. According to the dual-process theory, analogical reasoning and logical reasoning are respectively carried out by System 1 and System 2 in the human brain. Inspired by the theory, we present a novel framework for NLU called Neural-Symbolic Processor (NSP), which performs analogical reasoning based on neural processing and logical reasoning based on both neural and symbolic processing. As a case study, we conduct experiments on two NLU tasks, question answering (QA) and natural language inference (NLI), when numerical reasoning (a type of logical reasoning) is necessary. The experimental results show that our method significantly outperforms state-of-the-art methods in both tasks.

2021

A Sequence-to-Sequence Approach to Dialogue State Tracking
Yue Feng | Yang Wang | Hang Li
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

This paper is concerned with dialogue state tracking (DST) in a task-oriented dialogue system. Building a highly effective DST module remains a challenging issue, although significant progress has been made recently. This paper proposes a new approach to dialogue state tracking, referred to as Seq2Seq-DU, which formalizes DST as a sequence-to-sequence problem. Seq2Seq-DU employs two BERT-based encoders to respectively encode the utterances in the dialogue and the descriptions of schemas, an attender to calculate attentions between the utterance embeddings and the schema embeddings, and a decoder to generate pointers to represent the current state of the dialogue. Seq2Seq-DU has the following advantages: it can jointly model intents, slots, and slot values; it can leverage the rich representations of utterances and schemas based on BERT; and it can effectively deal with categorical and non-categorical slots as well as unseen schemas. In addition, Seq2Seq-DU can also be used in the NLU (natural language understanding) module of a dialogue system. Experimental results on benchmark datasets in different settings (SGD, MultiWOZ2.2, MultiWOZ2.1, WOZ2.0, DSTC2, M2M, SNIPS, and ATIS) show that Seq2Seq-DU outperforms the existing methods.

Evaluating Document Coherence Modeling
Aili Shen | Meladel Mistica | Bahar Salehi | Hang Li | Timothy Baldwin | Jianzhong Qi
Transactions of the Association for Computational Linguistics, Volume 9

While pretrained language models (LMs) have driven impressive gains over morpho-syntactic and semantic tasks, their ability to model discourse and pragmatic phenomena is less clear. As a step towards a better understanding of their discourse modeling capabilities, we propose a sentence intrusion detection task. We examine the performance of a broad range of pretrained LMs on this detection task for English. Lacking a dataset for the task, we introduce INSteD, a novel intruder sentence detection dataset, containing 170,000+ documents constructed from English Wikipedia and CNN news articles. Our experiments show that pretrained LMs perform impressively in in-domain evaluation, but experience a substantial drop in the cross-domain setting, indicating limited generalization capacity. Further results over a novel linguistic probe dataset show that there is substantial room for improvement, especially in the cross-domain setting.

AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization
Xinsong Zhang | Pengshuai Li | Hang Li
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Secoco: Self-Correcting Encoding for Neural Machine Translation
Tao Wang | Chengqi Zhao | Mingxuan Wang | Lei Li | Hang Li | Deyi Xiong
Findings of the Association for Computational Linguistics: EMNLP 2021

This paper presents Self-correcting Encoding (Secoco), a framework that effectively deals with noisy input for robust neural machine translation by introducing self-correcting predictors. Different from previous robust approaches, Secoco enables NMT to explicitly correct noisy inputs and delete specific errors simultaneously with the translation decoding process. Secoco is able to achieve significant improvements over strong baselines on two real-world test sets and a benchmark WMT dataset with good interpretability. We will make our code and dataset publicly available soon.

CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations
Hang Li | Wenbiao Ding | Yu Kang | Tianqiao Liu | Zhongqin Wu | Zitao Liu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Existing audio-language task-specific predictive approaches focus on building complicated late-fusion mechanisms. However, these models face the challenges of overfitting with limited labels and low model generalization ability. In this paper, we present a Cross-modal Transformer for Audio-and-Language, i.e., CTAL, which aims to learn the intra-modality and inter-modality connections between audio and language through two proxy tasks on a large amount of audio-and-language pairs: masked language modeling and masked cross-modal acoustic modeling. After fine-tuning our pre-trained model on multiple downstream audio-and-language tasks, we observe significant improvements across various tasks, such as emotion classification, sentiment analysis, and speaker verification. On this basis, we further propose a specially-designed fusion mechanism that can be used in the fine-tuning phase, which allows our pre-trained model to achieve better performance. Lastly, we present detailed ablation studies to show that both our novel cross-modality fusion component and our audio-language pre-training methods significantly contribute to the promising results. The code and pre-trained models are available at https://github.com/tal-ai/CTAL_EMNLP2021.

Mathematical Word Problem Generation from Commonsense Knowledge Graph and Equations
Tianqiao Liu | Qiang Fang | Wenbiao Ding | Hang Li | Zhongqin Wu | Zitao Liu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

There is an increasing interest in the use of mathematical word problem (MWP) generation in educational assessment. Different from standard natural question generation, MWP generation needs to maintain the underlying mathematical operations between quantities and variables, while at the same time ensuring the relevance between the output and the given topic. To address the above problem, we develop an end-to-end neural model to generate diverse MWPs in real-world scenarios from a commonsense knowledge graph and equations. The proposed model (1) learns both representations from edge-enhanced Levi graphs of symbolic equations and commonsense knowledge; and (2) automatically fuses equation and commonsense knowledge information via a self-planning module when generating the MWPs. Experiments on an educational gold-standard set and a large-scale generated MWP set show that our approach is superior on the MWP generation task, and it outperforms the SOTA models in terms of both automatic evaluation metrics, i.e., BLEU-4, ROUGE-L, and Self-BLEU, and human evaluation metrics, i.e., equation relevance, topic relevance, and language coherence. To encourage reproducible results, we make our code and MWP dataset publicly available at https://github.com/tal-ai/MaKE_EMNLP2021.

2020

Fact-based Text Editing
Hayate Iso | Chao Qiao | Hang Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We propose a novel text editing task, referred to as fact-based text editing, in which the goal is to revise a given document to better describe the facts in a knowledge base (e.g., several triples). The task is important in practice because reflecting the truth is a common requirement in text editing. First, we propose a method for automatically generating a dataset for research on fact-based text editing, where each instance consists of a draft text, a revised text, and several facts represented in triples. We apply the method to two public table-to-text datasets, obtaining two new datasets consisting of 233k and 37k instances, respectively. Next, we propose a new neural network architecture for fact-based text editing, called FactEditor, which edits a draft text by referring to given facts using a buffer, a stream, and a memory. A straightforward approach to the problem would be to employ an encoder-decoder model. Our experimental results on the two datasets show that FactEditor outperforms the encoder-decoder approach in terms of fidelity and fluency. The results also show that FactEditor conducts inference faster than the encoder-decoder approach.

Spelling Error Correction with Soft-Masked BERT
Shaohua Zhang | Haoran Huang | Jicong Liu | Hang Li
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Spelling error correction is an important yet challenging task because a satisfactory solution essentially requires human-level language understanding ability. Without loss of generality, we consider Chinese spelling error correction (CSC) in this paper. A state-of-the-art method for the task selects a character from a list of candidates for correction (including non-correction) at each position of the sentence on the basis of BERT, the language representation model. The accuracy of the method can be sub-optimal, however, because BERT does not have sufficient capability to detect whether there is an error at each position, apparently due to the way it is pre-trained with masked language modeling. In this work, we propose a novel neural architecture to address the aforementioned issue, which consists of a network for error detection and a network for error correction based on BERT, with the former connected to the latter by what we call the soft-masking technique. Our method of using ‘Soft-Masked BERT’ is general, and it may be employed in other language detection-correction problems. Experimental results on two datasets, including one large dataset which we create and plan to release, demonstrate that the performance of our proposed method is significantly better than the baselines, including the one solely based on BERT.
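
The soft-masking step described above can be summarized with a small, hedged PyTorch sketch: the detection network's per-position error probability blends each input embedding with the [MASK] embedding before the BERT-based correction network runs (tensor names and shapes here are illustrative).

    import torch

    def soft_mask(embeddings, mask_embedding, error_prob):
        # embeddings:     (batch, seq_len, dim) input embeddings of the sentence
        # mask_embedding: (dim,)                embedding of the [MASK] token
        # error_prob:     (batch, seq_len)      detection-network output in [0, 1]
        p = error_prob.unsqueeze(-1)
        return p * mask_embedding + (1.0 - p) * embeddings

    emb = torch.randn(2, 5, 8)
    out = soft_mask(emb, torch.randn(8), torch.rand(2, 5))
    print(out.shape)  # torch.Size([2, 5, 8]); positions with high error_prob look like [MASK]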

2018

Paraphrase Generation with Deep Reinforcement Learning
Zichao Li | Xin Jiang | Lifeng Shang | Hang Li
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP). In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a generator and an evaluator, both of which are learned from data. The generator, built as a sequence-to-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning, respectively, depending on the type of available training data. Experimental results on two datasets demonstrate that the proposed models (the generators) can produce more accurate paraphrases and outperform the state-of-the-art methods in paraphrase generation in both automatic evaluation and human evaluation.
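
A hedged sketch of the generator-evaluator interaction, with toy stand-ins for both components: the evaluator's matching score of (source, paraphrase) representations serves as the reward in a simple policy-gradient update; the exact reinforcement learning procedure in the paper may differ.

    import torch
    import torch.nn as nn

    evaluator = nn.CosineSimilarity(dim=-1)  # toy stand-in for a deep matching model

    def policy_gradient_loss(log_prob_sum, src_repr, para_repr):
        # log_prob_sum: summed log-probabilities of the sampled paraphrase tokens
        reward = evaluator(src_repr, para_repr).detach()  # reward comes from the evaluator
        return -(reward * log_prob_sum).mean()

    loss = policy_gradient_loss(torch.randn(4, requires_grad=True),
                                torch.randn(4, 16), torch.randn(4, 16))
    loss.backward()
    print(float(loss))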

2017

Deep Active Learning for Dialogue Generation
Nabiha Asghar | Pascal Poupart | Xin Jiang | Hang Li
Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)

We propose an online, end-to-end, neural generative conversational model for open-domain dialogue. It is trained using a unique combination of offline two-phase supervised learning and online human-in-the-loop active learning. While most existing research proposes offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on hamming-diverse beam search for response generation and one-character user-feedback at each step. Experiments show that our model inherently promotes the generation of semantically relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles.

Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization
Piji Li | Wai Lam | Lidong Bing | Weiwei Guo | Hang Li
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

When people recall and digest what they have read for writing summaries, the important content is more likely to attract their attention. Inspired by this observation, we propose a cascaded attention based unsupervised model to estimate the salience information from the text for compressive multi-document summarization. The attention weights are learned automatically by an unsupervised data reconstruction framework which can capture the sentence salience. By adding sparsity constraints on the number of output vectors, we can generate condensed information which can be treated as word salience. Fine-grained and coarse-grained sentence compression strategies are incorporated to produce compressive summaries. Experiments on some benchmark data sets show that our framework achieves better results than the state-of-the-art methods.

Context Gates for Neural Machine Translation
Zhaopeng Tu | Yang Liu | Zhengdong Lu | Xiaohua Liu | Hang Li
Transactions of the Association for Computational Linguistics, Volume 5

In neural machine translation (NMT), generation of a target word depends on both source and target contexts. We find that source contexts have a direct impact on the adequacy of a translation while target contexts affect the fluency. Intuitively, generation of a content word should rely more on the source context and generation of a functional word should rely more on the target context. Due to the lack of effective control over the influence from source and target contexts, conventional NMT tends to yield fluent but inadequate translations. To address this problem, we propose context gates which dynamically control the ratios at which source and target contexts contribute to the generation of target words. In this way, we can enhance both the adequacy and fluency of NMT with more careful control of the information flow from contexts. Experiments show that our approach significantly improves upon a standard attention-based NMT system by +2.3 BLEU points.
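
In rough form, the gate is a sigmoid function of the previous target word, the previous decoder state, and the source context, and it rescales the two context streams when the decoder state is updated; the sketch below follows the general shape of such gates and may differ in detail from the paper's parameterization.

    z_i = \sigma\left( W_z\, e(y_{i-1}) + U_z\, s_{i-1} + C_z\, c_i \right)
    s_i = f\left( (1 - z_i) \circ \left( W\, e(y_{i-1}) + U\, s_{i-1} \right) + z_i \circ C\, c_i \right)

Here e(y_{i-1}) is the embedding of the previous target word, s_{i-1} the previous decoder state, c_i the source context vector from attention, and \circ denotes element-wise multiplication.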

Chunk-Based Bi-Scale Decoder for Neural Machine Translation
Hao Zhou | Zhaopeng Tu | Shujian Huang | Xiaohua Liu | Hang Li | Jiajun Chen
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities into the same time-scale of the RNN. In this paper, we propose a new type of decoder for NMT, which splits the decoder state into two parts and updates them on two different time-scales. Specifically, we first predict a chunk time-scale state for phrasal modeling, on top of which multiple word time-scale states are generated. In this way, the target sentence is translated hierarchically from chunks to words, with information at different granularities being leveraged. Experiments show that our proposed model significantly improves the translation performance over the state-of-the-art NMT model.

Variation Autoencoder Based Network Representation Learning for Classification
Hang Li | Haozheng Wang | Zhenglu Yang | Masato Odagaki
Proceedings of ACL 2017, Student Research Workshop

2016

Memory-enhanced Decoder for Neural Machine Translation
Mingxuan Wang | Zhengdong Lu | Hang Li | Qun Liu
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

A Novel Approach to Dropped Pronoun Translation
Longyue Wang | Zhaopeng Tu | Xiaojun Zhang | Hang Li | Andy Way | Qun Liu
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Recent Progress in Deep Learning for NLP
Zhengdong Lu | Hang Li
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorial Abstracts

Neural Enquirer: Learning to Query Tables in Natural Language
Pengcheng Yin | Zhengdong Lu | Hang Li | Kao Ben
Proceedings of the Workshop on Human-Computer Question Answering

Neural Generative Question Answering
Jun Yin | Xin Jiang | Zhengdong Lu | Lifeng Shang | Hang Li | Xiaoming Li
Proceedings of the Workshop on Human-Computer Question Answering

Interactive Attention for Neural Machine Translation
Fandong Meng | Zhengdong Lu | Hang Li | Qun Liu
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Conventional attention-based Neural Machine Translation (NMT) conducts dynamic alignment in generating the target sentence. By repeatedly reading the representation of the source sentence, which remains fixed after being generated by the encoder (Bahdanau et al., 2015), the attention mechanism has greatly enhanced state-of-the-art NMT. In this paper, we propose a new attention mechanism, called INTERACTIVE ATTENTION, which models the interaction between the decoder and the representation of the source sentence during translation by both reading and writing operations. INTERACTIVE ATTENTION can keep track of the interaction history and therefore improve translation performance. Experiments on the NIST Chinese-English translation task show that INTERACTIVE ATTENTION can achieve significant improvements over both the previous attention-based NMT baseline and some state-of-the-art variants of attention-based NMT (i.e., coverage models (Tu et al., 2016)). A neural machine translator with our INTERACTIVE ATTENTION outperforms the open-source attention-based NMT system Groundhog by 4.22 BLEU points and the open-source phrase-based system Moses by 3.94 BLEU points on average across multiple test sets.

Modeling Coverage for Neural Machine Translation
Zhaopeng Tu | Zhengdong Lu | Yang Liu | Xiaohua Liu | Hang Li
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Incorporating Copying Mechanism in Sequence-to-Sequence Learning
Jiatao Gu | Zhengdong Lu | Hang Li | Victor O.K. Li
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

Encoding Source Language with Convolutional Neural Network for Machine Translation
Fandong Meng | Zhengdong Lu | Mingxuan Wang | Hang Li | Wenbin Jiang | Qun Liu
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

genCNN: A Convolutional Architecture for Word Sequence Prediction
Mingxuan Wang | Zhengdong Lu | Hang Li | Wenbin Jiang | Qun Liu
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Neural Responding Machine for Short-Text Conversation
Lifeng Shang | Zhengdong Lu | Hang Li
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Context-Dependent Translation Selection Using Convolutional Neural Network
Baotian Hu | Zhaopeng Tu | Zhengdong Lu | Hang Li | Qingcai Chen
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2013

A Dataset for Research on Short-Text Conversations
Hao Wang | Zhengdong Lu | Hang Li | Enhong Chen
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations
Kentaro Torisawa | Hang Li
The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations

2012

String Re-writing Kernel
Fan Bu | Hang Li | Xiaoyan Zhu
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Automatically Mining Question Reformulation Patterns from Search Log Data
Xiaobing Xue | Yu Tao | Daxin Jiang | Hang Li
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2011

A Fast and Accurate Method for Approximate String Search
Ziqi Wang | Gu Xu | Hang Li | Ming Zhang
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

2010

Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Hang Li | Lluís Màrquez
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Query Understanding in Web Search - by Large Scale Log Data Mining and Statistical Learning
Hang Li
Proceedings of the Second Workshop on NLP Challenges in the Information Explosion Era (NLPIX 2010)

2009

Learning to Rank
Hang Li
Tutorial Abstracts of ACL-IJCNLP 2009

2008

HTM: A Topic Model for Hypertexts
Congkai Sun | Bin Gao | Zhenfu Cao | Hang Li
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

A Unified Tagging Approach to Text Normalization
Conghui Zhu | Jie Tang | Hang Li | Hwee Tou Ng | Tiejun Zhao
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2004

Word Translation Disambiguation Using Bilingual Bootstrapping
Hang Li | Cong Li
Computational Linguistics, Volume 30, Number 1, March 2004

2003

Uncertainty Reduction in Collaborative Bootstrapping: Measure and Algorithm
Yunbo Cao | Hang Li | Li Lian
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics

2002

Base Noun Phrase Translation Using Web Data and the EM Algorithm
Yunbo Cao | Hang Li
COLING 2002: The 19th International Conference on Computational Linguistics

Exploring Asymmetric Clustering for Statistical Language Modeling
Jianfeng Gao | Joshua Goodman | Guihong Cao | Hang Li
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

Word Translation Disambiguation Using Bilingual Bootstrapping
Cong Li | Hang Li
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics

2000

Topic Analysis Using a Finite Mixture Model
Hang Li | Kenji Yamanishi
2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora

1999

Learning Dependencies between Case Frame Slots
Hang Li | Naoki Abe
Computational Linguistics, Volume 25, Number 2, June 1999

1998

Generalizing Case Frames Using a Thesaurus and the MDL Principle
Hang Li | Naoki Abe
Computational Linguistics, Volume 24, Number 2, June 1998

Word Clustering and Disambiguation Based on Co-occurrence Data
Hang Li | Naoki Abe
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2

Word Clustering and Disambiguation Based on Co-occurrence Data
Hang Li | Naoki Abe
COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics

1997

Document Classification Using a Finite Mixture Model
Hang Li | Kenji Yamanishi
35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics

1996

A Probabilistic Disambiguation Method Based on Psycholinguistic Principles
Hang Li
Fourth Workshop on Very Large Corpora

Clustering Words with the MDL Principle
Hang Li | Naoki Abe
COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics

Learning Dependencies between Case Frame Slots
Hang Li | Naoki Abe
COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics