Hao Yu


2024

An Evaluation of Language Models for Hyperpartisan Ideology Detection in Persian Twitter
Sahar Omidi Shayegan | Isar Nejadgholi | Kellin Pelrine | Hao Yu | Sacha Levy | Zachary Yang | Jean-François Godbout | Reihaneh Rabbany
Proceedings of the 2nd Workshop on Resources and Technologies for Indigenous, Endangered and Lesser-resourced Languages in Eurasia (EURALI) @ LREC-COLING 2024

Large Language Models (LLMs) have shown significant promise in various tasks, including identifying the political beliefs of English-speaking social media users from their posts. However, assessing LLMs for this task in non-English languages remains unexplored. In this work, we ask to what extent LLMs can predict the political ideologies of users in Persian social media. To answer this question, we first acknowledge that political parties are not well defined among Persian users, and we therefore simplify the task to hyperpartisan ideology detection. We create a new benchmark and show the potential and limitations of both open-source and commercial LLMs in classifying the hyperpartisan ideologies of users. We compare these models with smaller models fine-tuned either on Persian text (ParsBERT) or on translated data (RoBERTa), showing that the fine-tuned models considerably outperform the generative LLMs on this task. We further demonstrate that the performance of the generative LLMs degrades when classifying users based on their tweets instead of their bios, and even when tweets are added as additional information, whereas the smaller fine-tuned models are robust and achieve similar performance across all classes. This study is a first step toward political ideology detection on Persian Twitter, with implications for future research on the dynamics of ideologies in Persian social media.
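As a rough illustration of the evaluation protocol described above (a minimal sketch, not the released benchmark code), the snippet below scores one classifier, e.g. a prompted LLM or a fine-tuned ParsBERT/RoBERTa wrapped as classify, under the three input conditions and reports per-class F1. The user field names and the label set are assumptions.

def per_class_f1(gold, pred, labels):
    """Per-class F1 from parallel lists of gold and predicted labels."""
    scores = {}
    for lab in labels:
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def build_input(user, condition):
    """Assemble classifier input for one condition (field names hypothetical)."""
    if condition == "bio":
        return user["bio"]
    if condition == "tweets":
        return " ".join(user["tweets"])
    return user["bio"] + " " + " ".join(user["tweets"])  # bio + tweets

def evaluate(classify, users, labels=("hyperpartisan_pro", "hyperpartisan_anti")):
    """Score the same classifier under all three input conditions."""
    gold = [u["label"] for u in users]  # label set above is an assumption
    for condition in ("bio", "tweets", "bio+tweets"):
        pred = [classify(build_input(u, condition)) for u in users]
        print(condition, per_class_f1(gold, pred, labels))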

OpenWebAgent: An Open Toolkit to Enable Web Agents on Large Language Models
Iat Long Iong | Xiao Liu | Yuxuan Chen | Hanyu Lai | Shuntian Yao | Pengbo Shen | Hao Yu | Yuxiao Dong | Jie Tang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

We introduce OpenWebAgent, an open toolkit designed to optimize web automation by integrating both large language models (LLMs) and large multimodal models (LMMs). This toolkit focuses on enhancing human-computer interactions on the web, simplifying complex tasks through an advanced HTML parser, a rapid action generation module, and an intuitive user interface. At the core of OpenWebAgent is an innovative web agent framework that uses a modular design to allow developers to seamlessly integrate a variety of models and tools to process web information and automate tasks on the web. This enables the development of powerful, task-oriented web agents, significantly enhancing user experience and operational efficiency on the web. The OpenWebAgent framework, Chrome plugin, and demo video are available at https://github.com/THUDM/OpenWebAgent/.
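A minimal sketch of the modular design the abstract describes, not the actual OpenWebAgent API: a pluggable HTML parser condenses the page, a pluggable model proposes the next action, and an agent loop ties them together. All names and the toy parser here are assumptions.

import re
from dataclasses import dataclass
from typing import Callable, List, Protocol

@dataclass
class Action:
    kind: str         # e.g. "click", "type", "finish"
    target: str = ""  # element reference from the parsed page
    value: str = ""   # text to type, if any

class ActionModel(Protocol):
    """Any LLM/LMM backend can be plugged in by implementing this method."""
    def next_action(self, task: str, page: str, history: List[Action]) -> Action: ...

def simplify_html(raw_html: str) -> str:
    """Toy stand-in for the HTML parser module: drop scripts, styles, tags."""
    text = re.sub(r"(?s)<(script|style)[^>]*>.*?</\1>", " ", raw_html)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

class WebAgent:
    """Ties the pluggable modules together in a simple observe-act loop."""
    def __init__(self, parser: Callable[[str], str], model: ActionModel):
        self.parser, self.model = parser, model

    def run(self, task: str, fetch_page, execute, max_steps: int = 10):
        history: List[Action] = []
        for _ in range(max_steps):
            page = self.parser(fetch_page())           # observe the page
            action = self.model.next_action(task, page, history)
            if action.kind == "finish":                # model declares success
                break
            execute(action)                            # act in the browser
            history.append(action)
        return history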

Context-Aware Non-Autoregressive Document-Level Translation with Sentence-Aligned Connectionist Temporal Classification
Hao Yu | Kaiyu Huang | Anqi Zhao | Junpeng Liu | Degen Huang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Previous studies employ the autoregressive translation (AT) paradigm for document-to-document neural machine translation. These methods extend the translation unit from a single sentence to a pseudo-document and encode the full pseudo-document, avoiding redundant computation over the context. However, AT methods cannot parallelize decoding and struggle with error accumulation, especially as sentence length increases. In this work, we propose a context-aware non-autoregressive framework with a sentence-aligned connectionist temporal classification (SA-CTC) loss for document-level neural machine translation. In particular, the SA-CTC loss reduces the search space of the decoding path by fixing the positions of the beginning and end tokens of each sentence in the document. Meanwhile, the context-aware architecture introduces preset nodes to represent sentence-level information and uses a hierarchical attention structure to regulate the attention hypothesis space. Experimental results show that our proposed method achieves competitive performance compared with several strong baselines. Our method implements non-autoregressive modeling in a document-to-document translation manner, achieving an average 46x decoding speedup over document-level AT baselines on three benchmarks.
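Under my reading of the abstract, SA-CTC pins each sentence's boundary tokens to fixed decoder positions so that every decoding path must pass through them. The sketch below is an assumption-laden illustration, not the authors' implementation: it applies that constraint to a position-by-vocabulary log-probability matrix before decoding.

import numpy as np

def pin_sentence_anchors(log_probs, spans, bos_id, eos_id):
    """log_probs: (T, V) per-position log-probabilities over the vocabulary.
    spans: one (first_pos, last_pos) pair per target sentence."""
    out = log_probs.copy()
    for first, last in spans:
        for pos, tok in ((first, bos_id), (last, eos_id)):
            out[pos, :] = -np.inf   # forbid all other tokens at the anchor
            out[pos, tok] = 0.0     # force the boundary token (log prob 1)
    return out

# Toy example (all sizes hypothetical): 8 decoder positions, vocab of 6,
# two sentences whose boundary tokens are pinned at (0, 3) and (4, 7).
lp = np.log(np.full((8, 6), 1.0 / 6))
constrained = pin_sentence_anchors(lp, [(0, 3), (4, 7)], bos_id=4, eos_id=5)
path = constrained.argmax(axis=1)   # any decoding path respects the anchors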

2023

SWEET - Weakly Supervised Person Name Extraction for Fighting Human Trafficking
Javin Liu | Hao Yu | Vidya Sujaya | Pratheeksha Nair | Kellin Pelrine | Reihaneh Rabbany
Findings of the Association for Computational Linguistics: EMNLP 2023

In this work, we propose SWEET (Supervise Weakly for Entity Extraction to fight Trafficking), a weak supervision pipeline for extracting person names from noisy escort advertisements. Our method combines the simplicity of rule matching (through antirules, i.e., negated rules) with the generalizability of large language models fine-tuned on benchmark, domain-specific, and synthetic datasets, treating their predictions as weak labels. One of the major challenges in this domain is limited labeled data. SWEET addresses this by obtaining multiple weak labels through labeling functions and effectively aggregating them. SWEET outperforms the previous supervised SOTA method for this task by 9% F1 on domain data and generalizes better to common benchmark datasets. Furthermore, we release HTGEN, a synthetically generated dataset of escort advertisements (built using ChatGPT), to facilitate further research within the community.
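A minimal sketch of the weak-supervision recipe the abstract outlines, not the released SWEET pipeline: labeling functions vote on whether a token is a person name, antirules veto obvious non-names, and the votes are aggregated, here by a simple majority with antirule override. The rules themselves are illustrative only.

def lf_capitalized(token):
    # Weak rule: capitalized tokens may be names (illustrative).
    return "NAME" if token[:1].isupper() else None

def lf_lexicon(token, names={"maria", "anna", "sophie"}):
    # Weak rule: tokens found in a (toy) first-name lexicon.
    return "NAME" if token.lower() in names else None

def antirule_price(token):
    # Antirule (negated rule): price-like tokens are never names.
    return "NOT_NAME" if token.startswith("$") or token.isdigit() else None

LABELERS = [lf_capitalized, lf_lexicon, antirule_price]

def aggregate(token):
    """Aggregate weak labels; antirules override weak positives."""
    votes = [lab for lab in (lf(token) for lf in LABELERS) if lab]
    if "NOT_NAME" in votes:
        return "NOT_NAME"
    return "NAME" if votes.count("NAME") >= 2 else "NOT_NAME"

print([(t, aggregate(t)) for t in "Call Maria at 555 tonight $80".split()])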

Continual Learning for Multilingual Neural Machine Translation via Dual Importance-based Model Division
Junpeng Liu | Kaiyu Huang | Hao Yu | Jiuyi Li | Jinsong Su | Degen Huang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

A persistent goal of multilingual neural machine translation (MNMT) is to continually adapt the model to support new language pairs or improve performance on current language pairs without access to the previous training data. To achieve this, existing methods primarily focus on preventing catastrophic forgetting by making compromises between the original and new language pairs, leading to sub-optimal performance on both translation tasks. To mitigate this problem, we propose a dual importance-based model division method that divides the model parameters into two parts and separately models the translation of the original and new tasks. Specifically, we first remove the parameters that are negligible to the original tasks but essential to the new tasks, obtaining a pruned model that is responsible for the original translation tasks. We then expand the pruned model with external parameters and fine-tune the newly added parameters on new training data; the whole fine-tuned model is used for the new translation tasks. Experimental results show that our method can efficiently adapt the original model to various new translation tasks while retaining performance on the original tasks. Further analyses demonstrate that our method consistently outperforms several strong baselines under different incremental translation scenarios.
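A minimal sketch of the parameter division with assumed importance scores (the paper's actual importance criterion is not given here): parameters negligible to the original tasks but essential to the new ones are freed for new-task fine-tuning, while the rest stay frozen to preserve the original translation directions.

import numpy as np

def divide(imp_orig, imp_new, tau=0.1):
    """imp_orig, imp_new: same-shaped per-parameter importance arrays.
    Returns (frozen_mask, trainable_mask) partitioning the parameters.
    The threshold tau and the scores themselves are assumptions."""
    free = (imp_orig < tau) & (imp_new >= tau)  # negligible to old, vital to new
    return ~free, free

rng = np.random.default_rng(0)
imp_orig, imp_new = rng.random((4, 4)), rng.random((4, 4))
frozen, trainable = divide(imp_orig, imp_new)
# During adaptation, gradients are applied only where `trainable` is True,
# so the original language pairs keep their parameters intact.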

DUTNLP System for the WMT2023 Discourse-Level Literary Translation
Anqi Zhao | Kaiyu Huang | Hao Yu | Degen Huang
Proceedings of the Eighth Conference on Machine Translation

This paper describes the DUTNLP Lab submission to the WMT23 Discourse-Level Literary Translation task in the Chinese-to-English direction under unconstrained conditions. Our primary system leverages a large language model with various prompt strategies, allowing us to fully investigate the capabilities of large language models for discourse-level neural machine translation. Moreover, we test a widely used discourse-level machine translation model, G-Transformer, with different training strategies. In our experiments, the large-language-model-based method achieves a BLEU score of 28.16, while the fine-tuned method scores 25.26. These findings indicate that selecting appropriate prompt strategies for large language models can significantly improve translation performance compared with traditional model training methods.
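The submission's actual prompts are not reproduced here; the following is a hypothetical sketch of one common discourse-level prompt strategy, translating sentence by sentence with a window of preceding source text and the running translation as context.

def build_prompt(src_sents, hyps, i, window=3):
    """Illustrative prompt only; not the team's actual wording."""
    ctx_src = " ".join(src_sents[max(0, i - window):i])
    ctx_tgt = " ".join(hyps[max(0, i - window):i])
    return (
        "Translate the Chinese sentence into English, keeping it coherent "
        "with the earlier context.\n"
        f"Earlier source: {ctx_src}\n"
        f"Earlier translation: {ctx_tgt}\n"
        f"Sentence: {src_sents[i]}\nTranslation:"
    )

def translate_document(src_sents, llm):
    """llm: any callable that maps a prompt string to a completion."""
    hyps = []
    for i in range(len(src_sents)):
        hyps.append(llm(build_prompt(src_sents, hyps, i)))
    return hyps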

2021

Lexicon-Based Graph Convolutional Network for Chinese Word Segmentation
Kaiyu Huang | Hao Yu | Junpeng Liu | Wei Liu | Jingxiang Cao | Degen Huang
Findings of the Association for Computational Linguistics: EMNLP 2021

Precise word boundary information can alleviate lexical ambiguity and thereby improve the performance of natural language processing (NLP) tasks, making Chinese word segmentation (CWS) a fundamental task in NLP. With the development of pre-trained language models (PLMs), pre-trained knowledge helps neural methods address the main problems of CWS to a significant degree, and existing methods already achieve high performance on several benchmarks (e.g., Bakeoff-2005). However, recent studies remain limited by the small scale of annotated corpora. To further improve the performance of CWS methods based on fine-tuning PLMs, we propose a novel neural framework, LBGCN, which incorporates a lexicon-based graph convolutional network into the Transformer encoder. Experimental results on five benchmarks and four cross-domain datasets show that the lexicon-based graph convolutional network successfully captures information about candidate words and improves performance on the benchmarks (Bakeoff-2005 and CTB6) and the cross-domain datasets (SIGHAN-2010). Further experiments and analyses demonstrate that our framework effectively models the lexicon to enhance basic neural frameworks and strengthens robustness in the cross-domain scenario.
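A minimal sketch of the lexicon-graph construction as I read the abstract, not the authors' implementation: every lexicon word matched in the sentence becomes a node linked to its component characters, and the resulting adjacency matrix can feed a GCN layer alongside the Transformer encoder.

import numpy as np

def lexicon_graph(chars, lexicon, max_word_len=4):
    """Build a character+word graph from lexicon matches (toy version)."""
    words = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_word_len, len(chars)) + 1):
            if "".join(chars[i:j]) in lexicon:
                words.append((i, j))          # candidate word spans [i, j)
    n = len(chars) + len(words)
    adj = np.eye(n)                           # self-loops on every node
    for w, (i, j) in enumerate(words):
        for k in range(i, j):                 # word node <-> its characters
            adj[len(chars) + w, k] = adj[k, len(chars) + w] = 1
    return words, adj

chars = list("他来到大学")
words, adj = lexicon_graph(chars, {"来到", "大学"})
# One GCN step over the character+word node features H would then be
# sigma(D^-1 A H W), with A the adjacency above and D its degree matrix.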

2013

Semi-supervised Classification of Twitter Messages for Organization Name Disambiguation
Shu Zhang | Jianwei Wu | Dequan Zheng | Yao Meng | Hao Yu
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2012

Extracting and Visualizing Semantic Relationships from Chinese Biomedical Text
Qingliang Miao | Shu Zhang | Bo Zhang | Hao Yu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

Improving Chinese-to-Japanese Patent Translation Using English as Pivot Language
Xianhua Li | Yao Meng | Hao Yu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

An Adaptive Method for Organization Name Disambiguation with Feature Reinforcing
Shu Zhang | Jianwei Wu | Dequan Zheng | Yao Meng | Hao Yu
Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation

2011

Automatic Wrapper Generation and Maintenance
Yingju Xia | Yuhang Yang | Shu Zhang | Hao Yu
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

Maximum Entropy Based Lexical Reordering Model for Hierarchical Phrase-based Machine Translation
Zhongguang Zheng | Yao Meng | Hao Yu
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

Supervised and Semi-supervised Methods based Organization Name Disambiguity
Shu Zhang | Hao Yu
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

Lexical-based Reordering Model for Hierarchical Phrase-based Machine Translation
Zhongguang Zheng | Yao Meng | Hao Yu
Proceedings of Machine Translation Summit XIII: Papers

Feedback Selecting of Manually Acquired Rules Using Automatic Evaluation
Xianhua Li | Yajuan Lü | Yao Meng | Qun Liu | Hao Yu
Proceedings of the 4th Workshop on Patent Translation

2010

Maximum Entropy Based Phrase Reordering for Hierarchical Phrase-Based Translation
Zhongjun He | Yao Meng | Hao Yu
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Extending the Hierarchical Phrase Based Model with Maximum Entropy Based BTG
Zhongjun He | Yao Meng | Hao Yu
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

In the hierarchical phrase based (HPB) translation model, in addition to hierarchical phrase pairs extracted from bi-text, glue rules are used to perform serial combination of phrases. However, this basic method for combining phrases is not sufficient for phrase reordering. In this paper, we extend the HPB model with maximum entropy based bracketing transduction grammar (BTG), which provides content-dependent combination of neighboring phrases in two ways: serial or inverse. Experimental results show that the extended HPB system achieves absolute improvements of 0.9∼1.8 BLEU points over the baseline for large-scale translation tasks.
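A minimal sketch of the MaxEnt BTG orientation decision described above: boundary words of two neighboring phrases fire binary features, and a log-linear model scores the serial versus inverse combination. The feature templates and weights here are illustrative, not those of the paper.

import math

def features(left_phrase, right_phrase):
    """Binary features from the boundary words of the two phrases."""
    return {
        f"L_last={left_phrase[-1]}",
        f"R_first={right_phrase[0]}",
        f"pair={left_phrase[-1]}|{right_phrase[0]}",
    }

def p_orientation(feats, weights):
    """weights: {(feature, orientation): lambda}; returns P(o | feats)."""
    scores = {
        o: math.exp(sum(weights.get((f, o), 0.0) for f in feats))
        for o in ("serial", "inverse")
    }
    z = sum(scores.values())
    return {o: s / z for o, s in scores.items()}

feats = features(["the", "report"], ["of", "yesterday"])
print(p_orientation(feats, {("pair=report|of", "inverse"): 1.2}))  # toy weights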

Fault-Tolerant Learning for Term Extraction
Yuhang Yang | Hao Yu | Yao Meng | Yingliang Lu | Yingju Xia
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation

Structure-Aware Review Mining and Summarization
Fangtao Li | Chao Han | Minlie Huang | Xiaoyan Zhu | Ying-Ju Xia | Shu Zhang | Hao Yu
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Learning Phrase Boundaries for Hierarchical Phrase-based Translation
Zhongjun He | Yao Meng | Hao Yu
Coling 2010: Posters

Extracting Product Features and Sentiments from Chinese Customer Reviews
Shu Zhang | Wenjie Jia | Yingju Xia | Yao Meng | Hao Yu
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

With growing interest in opinion mining from web data, more work has focused on mining English and Chinese reviews. Addressing the problem of product opinion mining, this paper describes our language resources in detail and applies them to the task of extracting product features and sentiments. Unlike traditional unsupervised methods, a supervised method is used to identify product features, combining domain knowledge and lexical information. Nearest-vicinity match and syntactic-tree-based methods are proposed to identify the opinions expressed about the product features, and a multi-level analysis module determines the sentiment orientation of those opinions. In experiments on the electronics reviews of COAE 2008, we evaluate and compare the product features identified by CRFs and the two opinion-identification methods. The results show that the resources are well suited to this task and that our proposed method is effective.
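A minimal sketch of the nearest-vicinity match mentioned above (the syntactic-tree method and the CRF feature identifier are omitted): each identified product feature is paired with the closest opinion word in the sentence, with polarity taken from a sentiment lexicon. Token positions and the lexicon are illustrative.

def nearest_vicinity(tokens, feature_idxs, opinion_lexicon):
    """Pair each feature with the nearest opinion word and its polarity."""
    pairs = []
    opinion_idxs = [i for i, t in enumerate(tokens) if t in opinion_lexicon]
    for f in feature_idxs:
        if opinion_idxs:
            o = min(opinion_idxs, key=lambda i: abs(i - f))  # closest opinion
            pairs.append((tokens[f], tokens[o], opinion_lexicon[tokens[o]]))
    return pairs

tokens = "the screen is bright but the battery is poor".split()
lex = {"bright": "+", "poor": "-"}                      # toy sentiment lexicon
print(nearest_vicinity(tokens, [1, 6], lex))
# -> [('screen', 'bright', '+'), ('battery', 'poor', '-')]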

2009

A Bootstrapping Method for Finer-Grained Opinion Mining Using Graph Model
Shu Zhang | Yingju Xia | Yao Meng | Hao Yu
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2

Reducing SMT Rule Table with Monolingual Key Phrase
Zhongjun He | Yao Meng | Yajuan Lü | Hao Yu | Qun Liu
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

Chinese Term Extraction Using Different Types of Relevance
Yuhang Yang | Tiejun Zhao | Qin Lu | Dequan Zheng | Hao Yu
Proceedings of the ACL-IJCNLP 2009 Conference Short Papers

2008

Dimensionality Reduction with Multilingual Resource
YingJu Xia | Hao Yu | Gang Zou
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-II

2006

Chinese-English Term Translation Mining Based on Semantic Prediction
Gaolin Fang | Hao Yu | Fumihito Nishino
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

Infrastructure for Standardization of Asian Language Resources
Takenobu Tokunaga | Virach Sornlertlamvanich | Thatsanee Charoenporn | Nicoletta Calzolari | Monica Monachini | Claudia Soria | Chu-Ren Huang | YingJu Xia | Hao Yu | Laurent Prevot | Kiyoaki Shirai
Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions

2005

A Lexicon-Constrained Character Model for Chinese Morphological Analysis
Yao Meng | Hao Yu | Fumihito Nishino
Second International Joint Conference on Natural Language Processing: Full Papers

Web-Based Terminology Translation Mining
Gaolin Fang | Hao Yu | Fumihito Nishino
Second International Joint Conference on Natural Language Processing: Full Papers

A Hybrid Chinese Language Model based on a Combination of Ontology with Statistical Method
Dequan Zheng | Tiejun Zhao | Sheng Li | Hao Yu
Companion Volume to the Proceedings of Conference including Posters/Demos and tutorial abstracts

Product Named Entity Recognition Based on Hierarchical Hidden Markov Model
Feifan Liu | Jun Zhao | Bibo Lv | Bo Xu | Hao Yu
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing

Minimum Sample Risk Methods for Language Modeling
Jianfeng Gao | Hao Yu | Wei Yuan | Peng Xu
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

Chinese Named Entity Recognition with Multiple Features
Youzheng Wu | Jun Zhao | Bo Xu | Hao Yu
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

《人民日報》語料庫命名實体分類的研究 (The Chinese Named Entity Categorization Based on the People’s Daily Corpus) [In Chinese]
YingJu Xia | Hao Yu | Fumihito Nishino
International Journal of Computational Linguistics & Chinese Language Processing, Volume 10, Number 4, December 2005: Special Issue on Selected Papers from CLSW-5

2004

Subcategorization Acquisition and Evaluation for Chinese Verbs
Xiwu Han | Tiejun Zhao | Haoliang Qi | Hao Yu
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2002

Automatic Information Transfer between English and Chinese
Jianmin Yao | Hao Yu | Tiejun Zhao | Xiaohong Li
COLING-02: Machine Translation in Asia

An Automatic Evaluation Method for Localization Oriented Lexicalised EBMT System
Jianmin Yao | Ming Zhou | Tiejun Zhao | Hao Yu | Sheng Li
COLING 2002: The 19th International Conference on Computational Linguistics

2000

Statistics Based Hybrid Approach to Chinese Base Phrase Identification
Tie-jun Zhao | Mu-yun Yang | Fang Liu | Jian-min Yao | Hao Yu
Second Chinese Language Processing Workshop