Zhouhong Gu


2024

AutoScraper: A Progressive Understanding Web Agent for Web Scraper Generation
Wenhao Huang | Zhouhong Gu | Chenghao Peng | Jiaqing Liang | Zhixu Li | Yanghua Xiao | Liqian Wen | Zulong Chen
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Web scraping is a powerful technique for extracting data from websites, enabling automated data collection, enhancing data analysis capabilities, and minimizing manual data entry efforts. Among existing methods, wrapper-based approaches suffer from limited adaptability and scalability when faced with a new website, while language agents, empowered by large language models (LLMs), exhibit poor reusability in diverse web environments. In this work, we introduce the paradigm of generating web scrapers with LLMs and propose AutoScraper, a two-stage framework that handles diverse and changing web environments more efficiently. AutoScraper leverages the hierarchical structure of HTML and the similarity across different web pages to generate web scrapers. In addition, we propose a new executability metric that better measures the performance of web scraper generation tasks. We conduct comprehensive experiments with multiple LLMs and demonstrate the effectiveness of our framework. Our work is now open-source.
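The core idea of proposing an extraction rule on one page and then checking that it also executes on structurally similar pages can be sketched briefly. The sketch below is a minimal illustration, not the paper's implementation: llm_propose_xpath is a hypothetical placeholder for an LLM call, and executability is only a crude approximation of the proposed metric; the actual framework traverses the HTML hierarchy progressively before synthesizing a scraper.

```python
from lxml import html

def llm_propose_xpath(page_source: str, field: str) -> str:
    # Placeholder: a real implementation would prompt an LLM with a pruned
    # view of the page's DOM and ask for an XPath extracting `field`.
    return "//h1/text()"

def executability(xpath: str, pages: list[str]) -> float:
    """Fraction of pages on which the XPath returns at least one node:
    a crude stand-in for the executability metric proposed in the paper."""
    hits = sum(1 for src in pages if html.fromstring(src).xpath(xpath))
    return hits / len(pages)

def generate_scraper(seed_page: str, sibling_pages: list[str], field: str) -> str:
    """Stage 1: propose an XPath on one page. Stage 2: keep it only if it
    also executes on structurally similar pages."""
    xpath = llm_propose_xpath(seed_page, field)
    if executability(xpath, sibling_pages) < 1.0:
        raise ValueError(f"XPath {xpath!r} does not generalize across pages")
    return xpath
```

Verifying the rule against sibling pages is what distinguishes scraper generation from per-page language agents: once verified, the XPath can be reused across the site without further LLM calls.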

DetectBench: Can Large Language Model Detect and Piece Together Implicit Evidence?
Zhouhong Gu | Lin Zhang | Xiaoxuan Zhu | Jiangjie Chen | Wenhao Huang | Yikai Zhang | Shusen Wang | Zheyu Ye | Yan Gao | Hongwei Feng | Yanghua Xiao
Findings of the Association for Computational Linguistics: EMNLP 2024

Detecting evidence within the context is a key step in reasoning tasks. Evaluating and enhancing the evidence-detection capabilities of LLMs will strengthen their context-based reasoning performance. This paper proposes a benchmark called DetectBench for verifying the ability to detect and piece together implicit evidence within a long context. DetectBench contains 3,928 multiple-choice questions, with an average of 994 tokens per question. Each question contains an average of 4.55 pieces of implicit evidence, and solving the problem typically requires 7.62 logical jumps to find the correct answer. To enhance the performance of LLMs in evidence detection, this paper proposes a Detective Reasoning Prompt and a finetuning method. Experiments demonstrate that existing LLMs’ abilities to detect evidence in long contexts are far inferior to those of humans. However, the Detective Reasoning Prompt effectively enhances the evidence-detection capability of powerful LLMs, while finetuning yields significant gains for weaker LLMs. Moreover, when LLMs’ evidence-detection abilities improve, their final reasoning performance improves accordingly.
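A detect-then-reason prompt of the kind the paper describes can be approximated with a simple template. The wording below is illustrative only; the exact Detective Reasoning Prompt is defined in the paper, and build_prompt is our own helper.

```python
# Illustrative detect-then-reason template: first surface implicit evidence,
# then chain it explicitly, then answer. An approximation, not the paper's
# exact prompt.
DETECTIVE_PROMPT = """\
Context:
{context}

Question:
{question}

Step 1 (Evidence detection): list every sentence in the context that could
serve as evidence, including evidence that is only implied.
Step 2 (Evidence linking): chain the detected pieces of evidence together,
making each logical jump between them explicit.
Step 3 (Answer): choose one of the options below based on the chain.

Options:
{options}
"""

def build_prompt(context: str, question: str, options: list[str]) -> str:
    formatted = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return DETECTIVE_PROMPT.format(context=context, question=question,
                                   options=formatted)
```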

2022

Parsing Natural Language into Propositional and First-Order Logic with Dual Reinforcement Learning
Xuantao Lu | Jingping Liu | Zhouhong Gu | Hanwen Tong | Chenhao Xie | Junyang Huang | Yanghua Xiao | Wenguang Wang
Proceedings of the 29th International Conference on Computational Linguistics

Semantic parsing converts natural language utterances into structured logical expressions. We consider two such formal representations: Propositional Logic (PL) and First-order Logic (FOL). The paucity of labeled data is a major challenge in this field. In previous works, dual reinforcement learning has been proposed as an approach to reduce dependence on labeled data. However, this method has two limitations: 1) the reward needs to be set manually and is not applicable to all kinds of logical expressions; 2) the training process easily collapses when models are trained with only the reward from dual reinforcement learning. In this paper, we propose a scoring model that automatically learns a model-based reward, and we further propose an effective training strategy based on curriculum learning to stabilize the training process. Beyond the technical contribution, we construct a Chinese-PL/FOL dataset to compensate for the paucity of labeled data in this field. Experimental results show that the proposed method outperforms competitors on several datasets. Furthermore, introducing PL/FOL expressions generated by our model further enhances the performance of existing Natural Language Inference (NLI) models.
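Replacing a hand-crafted reward with a learned one can be sketched as a single policy-gradient update. Everything below is a toy illustration in PyTorch: dual_rl_update and the validity-plus-reconstruction reward decomposition are our assumptions about the general shape of such a method, not the paper's code; the paper's seq2seq models and its curriculum schedule (ordering training instances from easy to hard) are omitted.

```python
import torch

def dual_rl_update(log_prob: torch.Tensor,
                   validity: torch.Tensor,
                   reconstruction: torch.Tensor,
                   optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One REINFORCE-style update on an unlabeled utterance.

    log_prob:       log-probability of the sampled logical form under the
                    primal parser (text -> logic)
    validity:       score from a learned scoring model for the
                    (utterance, logical form) pair, standing in for a
                    hand-crafted reward
    reconstruction: log-likelihood of recovering the utterance from the
                    logical form under the dual model (logic -> text)
    """
    reward = validity + reconstruction
    # REINFORCE: raise the log-probability of samples with high reward.
    loss = -(reward.detach() * log_prob)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss

# Toy usage with a single learnable logit standing in for the parser:
logit = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([logit], lr=0.1)
dual_rl_update(
    log_prob=torch.log_softmax(torch.cat([logit, torch.zeros(1)]), 0)[0],
    validity=torch.tensor(0.8),
    reconstruction=torch.tensor(-1.2),
    optimizer=opt,
)
```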